Logic of Language
PIETER A. M. SEUREN
Great Clarendon Street, Oxford OX2 6DP
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research, scholarship,
and education by publishing worldwide in
Oxford New York
Auckland Cape Town Dar es Salaam Hong Kong Karachi
Kuala Lumpur Madrid Melbourne Mexico City Nairobi
New Delhi Shanghai Taipei Toronto
With offices in
Argentina Austria Brazil Chile Czech Republic France Greece
Guatemala Hungary Italy Japan Poland Portugal Singapore
South Korea Switzerland Thailand Turkey Ukraine Vietnam
Oxford is a registered trade mark of Oxford University Press
in the UK and in certain other countries
Published in the United States
by Oxford University Press Inc., New York
© Pieter A. M. Seuren 2010
The moral rights of the author have been asserted
Database right Oxford University Press (maker)
First published 2010
All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
without the prior permission in writing of Oxford University Press,
or as expressly permitted by law, or under terms agreed with the appropriate
reprographics rights organization. Enquiries concerning reproduction
outside the scope of the above should be sent to the Rights Department,
Oxford University Press, at the address above
You must not circulate this book in any other binding or cover
and you must impose the same condition on any acquirer
British Library Cataloguing in Publication Data
Data available
Library of Congress Cataloging in Publication Data
Data available
Typeset by SPI Publisher Services, Pondicherry, India
Printed in Great Britain
on acid-free paper by
CPI Antony Rowe, Chippenham, Wiltshire
ISBN 978–0–19–955948–0
1 3 5 7 9 10 8 6 4 2
to Pim Levelt
for his unfailing support, advice and friendship
Contents
Preface xii
Abbreviations and symbols xiv
Bibliography 409
Index 421
Preface
This is the second and last volume of Language from Within. The first volume
dealt with general methodology in the study of language (which is seen as an
element in and product of human cognition), with the intrinsically inten-
sional ontology that humans operate with when thinking and speaking, with
the socially committing nature of linguistic utterances, with the mechanisms
involved in the production and interpretation of utterances, with the notions
of utterance meaning, sentence meaning, and lexical meaning, and, finally,
with the difficulties encountered when one tries to capture lexical meanings in
definitional terms. The present volume looks more closely at the logic inher-
ent in natural language and at the ways in which utterance interpretation has
to fall back on the context of discourse and on general knowledge. It deals
extensively with the natural semantics of the operators that define human
logic, both in its presumed innate form and in the forms it has taken as a
result of cultural development. And it does so in the context of the history of
logic, as it is assumed that this history mirrors the path followed in Western
culture from ‘primitive’ logical (and mathematical) thinking to the rarified
heights of perfection achieved in these areas of study over the past few
centuries.
The overall and ultimate purpose of the whole work is to lay the founda-
tions for a general theory of language, which integrates language into its
ecological setting of cognition and society, given the physical conditions
of human brain structure and general physiology and the physics of sound
production and perception. This general theory should eventually provide an
overall, maximally motivated, and maximally precise, even formal, interpre-
tative framework for linguistic diversity, thus supporting typological studies
with a more solid theoretical basis. The present work restricts itself
to semantics and, to a lesser extent, also to grammar, which are more directly
dependent on cognition and society, leaving aside phonology, which appears
to find its motivational roots primarily in the physics and the psychology of
sound production and perception, as well as in the input phonological
systems receive from grammar.
The two volumes are not presented as a complete theory but rather as
prolegomena and, at the same time, as an actual start, in the overall and all-
pervasive perspective of the cognitive and social embedding of language—a
perspective that has been hesitantly present in modern language studies but
has not so far been granted the central position it deserves. In this context, it
has proved necessary, first of all, to break open the far too rigid and too
narrow restrictions and dogmas that have dominated the study of language
over the past half-century, which has either put formal completeness above
the constraints imposed by cognition or, by way of contrast, rejected any kind
of formal treatment and has tried to reduce the whole of language to intui-
tion-based folk psychology.
The present, second, volume is, regrettably but unavoidably, much more
technical than the first, owing to the intrinsic formal nature of the topics dealt
with. Avoiding technicalities would have reduced the book either to utter
triviality or to incomprehensibility, but I have done my best to be gentle with
my readers, requiring no more than a basic ability (and willingness) to read
formulaic text and presupposing an elementary knowledge of logic and set
theory.
Again, as in the first volume, I wish to express my gratitude to those who
have helped me along with their encouragement and criticisms. And again, I
must start by mentioning my friend of forty years’ standing Pim Levelt, to
whom I have dedicated both volumes. He made it possible for me to work at
the Max Planck Institute for Psycholinguistics at Nijmegen after my retire-
ment from Nijmegen University and was a constant source of inspiration not
only on account of the thoughts he shared with me but also because of his
moral example. Then I must mention my friend and colleague Dany Jaspers
of the University of Brussels, whose wide knowledge, well-formulated com-
ments, and infectious enthusiasm were a constant source of inspiration.
Ferdinando Cavaliere made many useful suggestions regarding predicate
logic and its history. Finally, I want to thank Kyle Jasmin, whose combined
kindness and computer savviness were indispensable to get the text right. The
many others who have helped me carry on by giving their intellectual, moral,
and personal support are too numerous to be mentioned individually. Yet my
gratitude to them is none the less for that. Some, who will not be named,
inspired me by their fierce opposition, which forced me to be as keen as
they were at finding holes in my armour. I hope I have found and repaired
them all.
P. A. M. S.
Nijmegen, December 2008.
Abbreviations and symbols
because language and logic have, from the very Aristotelian beginnings, been
close, though uneasy, bedfellows, never able either to demarcate each other’s
territories or to sort out what unites them. The last century has seen a
tremendous upsurge in both logic and linguistics, but there has not been
a rapprochement worth speaking of. No logic is taught in the vast majority of
linguistics departments or, to my knowledge, in any psychology department,
simply because the relevance of logic for the study of language and mind has
never been made clear.
All in all, therefore, it seems well worth our while to take a fresh look at logic
in the context of the study of language. But, in doing so, we need an open
and flexible mind, because the paradigm of modern logic has come
to suffer from a significant degree of dogmatism, rigidity, and, it has to be
said, intellectual arrogance. Until, say, 1950 it was common for philosophers
and others to play around with logical systems and notations, but this,
perhaps naïve, openness was suppressed by the developments that followed.
The august status conferred upon logic once the period of foundational
research was more or less brought to an end, which was, let us say, around
1950, has not encouraged investigators to deviate from what was, from
then on, considered the norm in logical theory. Yet that norm is based on
mathematics, in particular on standard Boolean set theory, whereas what is
required for a proper understanding of the relations between logic, language,
and thinking is a logic based on natural cognitive and linguistic intuitions.
We are in need of a ‘natural’ logic of language and cognition drawn from
the facts not of mathematics but of language. The first purpose of writing
about logic in this book is, therefore, programmatic: an attempt is made
at loosening up and generalizing the notion of logic and at showing to
linguists, psychologists, semanticists, and pragmaticists why and how logic is
relevant for their enquiries.
An obvious feature of the present book is the attention paid to history. The
history of logic is looked at as much as its present state. This historical
dimension is essential, for at least two reasons. First, there is a general reason,
derived from the fact that the human sciences as well as logic are not
CUMULATIVE the way the natural sciences are taken to be, where new results
simply supersede existing knowledge and insight. In the human sciences and,
as we shall see, also in logic, old insights keep cropping up and new results or
insights all too often prove unacceptably restrictive or even faulty. Since the
human sciences want to emulate the natural sciences, they have adopted the
latter’s convention that all relevant recent literature must be referred to or else
the paper or book is considered lacking in quality. But they have forgotten or
repressed the fact that they are not cumulative: literature and traditions from
the more distant past are likely to be as relevant as the most recent literature,
and paths that have been followed in recent times may well turn out to be
dead ends, so that the steps must be retraced. Recognizing that means
recognizing that the history of the subject is indispensable.
The second, more specific, reason is that the history of logic mirrors the
cultural and educational progress that has led Western society from more
‘primitive’ ways of thinking to the unrivalled heights of formal precision
achieved in modern logic and mathematics. This is important because, as
is explained in Chapter 3, it seems that natural logical intuitions have only
gone along so far in this development and have, at a given moment, detached
themselves from the professional mathematical logicians, leaving them
to their own devices. It is surmised in Chapters 3 and 4 that natural logical
intuitions are a mixture of pristine ‘primitive’ intuitions and more sophisti-
cated intuitions integrated into our thinking and our culture since the
Aristotelian beginnings. It is this divide between what has been culturally
integrated and what has been left to the closed chambers of mathematicians
and logicians that has motivated the distinction, made in Chapter 3, between
‘natural’ logical intuitions on the one hand and ‘constructed’, no longer
natural, notions in logic and mathematics on the other.
Historical insight makes us see that linguistic studies have, from the very
start, been divided into two currents, FORMALISM and ECOLOGISM (see, for
example, Seuren 1986a, 1998a: 23–7, 405–10). In present-day semantic studies,
the formalists are represented by formal model-theoretic semantics, while
modern ecologism is dominated by pragmatics. It hardly needs arguing that,
on the one hand, formal semantics, based as it is on standard modern logic,
badly fails to do justice to linguistic reality. Pragmatics, on the other hand,
suffers from the same defect, though for the opposite reason. While formal
semantics exaggerates formalisms and lacks the patience to delay formaliza-
tion till more is known, pragmatics shies away from formal theories and lives
by appeals to intuition. Either way, it seems to me, the actual facts of language
remain unexplained. If this is so, there must be room for a more formal
variety of ecologism, which is precisely what is proposed in the present book.
One condition for achieving such a purpose is the loosening up of logic.
It may seem that logic is a great deal simpler and more straightforward
than human language, being strictly formal by definition and so much more
restricted in scope and coverage, and so much farther removed from the
intricate and often confused realities of daily life that language has to cope
with. Yet logic has its own fascinating depth and beauty, not only when
studied from a strictly mathematical perspective but also, and perhaps even
more so, when seen in the context of human language and cognition. In that
context, the serene purity created by the mathematics of logic is drawn into
the realm of the complexities of the human mind and the mundane needs
served by human language. But before we embark upon an investigation of
the complexities and the mundane needs, we will look at logic in the pure
light of analytical necessity.
What is meant here by logic, or a logic, does not differ essentially from the
current standard notion, shaped to a large extent by the formal and founda-
tional progress made during the twentieth century. As far as it goes, the
modern notion is clear and unambiguous, but it still lacks clarity with regard
to its semantic basis. In the present chapter the semantic basis is looked at
more closely, in connection with the notion of entailment as analytically
necessary inference—that is, inference based on meanings. This is not in itself
controversial, as few logicians nowadays will deny that logic is based on
analytical necessity, but the full consequences of that fact have not been
drawn (probably owing to the deep semantic neurosis that afflicted the
twentieth century).
During the first half of the twentieth century, most logicians defended
the view that logical derivations should be defined merely on grounds of
the agreed FORMS of the L-propositions or logical formulae, consisting of
logical constants and typed variables in given syntactic structures. The deri-
vation of entailments was thus reduced to a formal operation on strings
of symbols, disregarding any semantic criterion. Soon, however, the view
prevailed that the operations on logical form should be seen as driven by
the semantic properties of the logical constants. I concur with this latter view,
mainly because there is nothing analytically necessary in form, but there is
in meaning. This position is supported by the fact that a meaning that is well-
defined for the purpose of logic is itself a formal object, in the sense that it is
representable as a structured object open to a formal interpretation in terms
of a formal calculus such as logical computation.
In earlier centuries, the ideas of what constitutes logic have varied a great
deal. In medieval scholastic philosophy, for example, a distinction was made
between logica maior, or the philosophical critique of knowledge, and logica
minor, also called dialectica, which was the critical study and use of the logical
apparatus of the day—that is, Aristotelian-Boethian predicate calculus and
syllogistic. Logica maior is no longer reckoned to be part of logic but, rather,
of general or ‘first’ philosophy. Logica minor corresponds more closely to the
modern notion of logic. During the nineteenth century logic was considered
to be the study of the principles of correct reasoning, as opposed to the
processes actually involved in (good or bad) thinking, which were assigned
to the discipline of psychology. The Oxford philosopher Thomas Fowler, for
example, wrote (1892: 2–6):
The more detailed consideration of [. . .] Thoughts or the results of Thinking
becomes the subject of a science with a distinct name, Logic, which is thus a
subordinate branch of the wider science, Psychology. [. . .] It is the province of
Logic to distinguish correct from incorrect thoughts. [. . .] Logic may therefore be
defined as the science of the conditions on which correct thoughts depend, and the art
of attaining to correct and avoiding incorrect thoughts. [. . .] Logic is concerned with
the products or results rather than with the process of thought, i.e. with thoughts rather
than with thinking.
Similar statements are found in virtually all logic textbooks of that period.
After 1900, however, changes are beginning to occur, slowly at first but
then, especially after the 1920s, much faster, until the nineteenth-century
view of logic fades away entirely during the 1960s, with Copi (1961) as one
rare late representative.
But what do we, following the twentieth-century tradition in this respect,
take logic to be? Since about 1900, logic has increasingly been seen as the study
of consistency through a formal calculus for the derivation of entailments. In this
view, which we adopt in principle, logic amounts to the study of how to derive
L-propositions from other L-propositions salva veritate—that is, preserving
truth. Such derivations must be purely formal and independent of intuition.
According to some logicians, they are based exclusively on the structural
properties of the expressions in the logical language adopted, but others,
perhaps the majority, defend the view that the semantic properties of certain
designated expressions, the LOGICAL CONSTANTS, co-determine logical deriva-
tions, provided these meanings are formally well-defined, which means in
practice that they must be reducible to the operators of Boolean algebra (see
Section 2.3.2 for a precise account). On either view, logic must be a CALCULUS—
that is, a set of formally well-defined operations on strings of terms, driven
only by the well-defined structural properties of the expressions in the logical
language and the well-defined semantic properties of the logical constants.
When one accepts the dependency on the meanings of the logical constants
involved, one may say that logic is an exercise in analytical necessity.
This basic adherence to the twentieth-century notion of what constitutes
a logic is motivated not only by the fact that it is clear and well-defined but also
by the consideration that it allows us to re-inspect the ‘peasant roots’ of logic,
as found in the works of Aristotle and his ancient successors, from a novel
point of view. Traditional logicians only had natural intuitions of necessary
consequence and consistency to fall back on for the construction of their
logical systems, lacking as they did the sophisticated framework of modern
mathematical set theory. Yet this less sophisticated source of logical inspiration
is precisely what we need for our enterprise, which aims at uncovering the
logic people use in their daily dealings and their ordinary use of natural
language. Pace Russell, we thus revert unashamedly to psychological logic.
Though Aristotle, the originator of logic, did not yet use the term logic, his
writings, in particular On Interpretation and Prior Analytics, show that his
starting point was the discovery that often two sentences are inconsistent
with regard to each other in the sense that they cannot be true simultaneously.
He coined the term CONTRARIES (enántiai) for such pairs (or sets) of sentences.
When two sentences are contraries, the truth of the one entails the falsity of
the other. He then worked out a logical system on the basis of contrariety and
contradictoriness—and thus also of entailment—as systematic consequences
of certain logical constants.
Of course, the question arises of what motivates the particular selection of
the logical constants involved and of the operations they allow for, given their
semantic definition. A good answer is that the choice of the relevant constants
and of the operations on the expressions in which they occur is guided by the
intuitive criterion of consistency of what is said on various occasions. Such
consistency is of prime importance in linguistic interaction, since, as is argued
in Chapter 4 of Volume I, speakers, when asserting a proposition, put
themselves on the line with regard to the truth of what they assert. Inconsis-
tency will thus make their commitment ineffective. When a set of predicates is
seen to allow for a formal calculus of consistency, we have hit on a logical
system, anchored in the syntax of the logical language employed and in the
semantic definitions of the logical constants, whose meanings are specified in
each language’s lexicon. That being so, a not unimportant part of the seman-
ticist’s, more precisely the lexicographer’s, brief consists in finding out how
and to what extent natural language achieves informational consistency
through its logical constants.
Consistency is directly dependent on truth and the preservation of
truth through chains of entailments, also called logical derivations. The
objects and the tenses used must have identical or corresponding temporal
values. Thus, in the example given, the proper name Jack must refer to the
same person and the present tense must refer to the same time slice in both
statements. This is the MODULO-KEY CONDITION on the entailment relation. This
condition may seem trivial and is, in most cases, silently understood. In fact,
however, it is far from trivial. It is defined as follows:
THE MODULO-KEY CONDITION
Whenever a (type-level) L-proposition or set of L-propositions P entails
a (type-level) L-proposition Q, the condition holds that all coordinates in
the underlying propositions p and q that link up p and q with elements in
the world take identical or corresponding keying values in the
interpretation of any token occurrences of P and Q, respectively.
The Modulo-Key Condition, however, does not allow one to say that if the
terms Jack and Dr. Smith refer to the same person, (the L-proposition
underlying) the statement Jack has been murdered entails (the L-proposition
underlying) the statement Dr. Smith is dead, and analogously with the names
interchanged. This is so because entailment is a type-level relation and at type-
level it is not given that Jack is the same person as Dr. Smith. To have the
entailment it is, therefore, necessary to insert the intermediate sentence Jack is
Dr. Smith. All the Modulo-Key Condition does is ensure that the term Jack is
keyed to the same person every time it is used.
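The force of the condition can be made concrete in a toy model in which each term token carries a key, that is, an identifier for the entity the token is intentionally focused on. (The following is a sketch of our own, with invented entity identifiers; it is not the author's formalism.)

```python
# A toy model of keyed entailment: a proposition is a (predicate, key) pair,
# where the key identifies the entity the term token is focused on.
LEXICAL_ENTAILMENTS = {"murdered": "dead"}  # murdered(x) entails dead(x)

def entails_modulo_key(p, q):
    """The lexical entailment goes through only when the keys coincide."""
    pred_p, key_p = p
    pred_q, key_q = q
    return LEXICAL_ENTAILMENTS.get(pred_p) == pred_q and key_p == key_q

jack = ("murdered", "entity-17")            # 'Jack has been murdered'
# Same key: the entailment to 'Jack is dead' holds.
# entails_modulo_key(jack, ("dead", "entity-17")) → True
# Different key: 'Dr. Smith is dead' is not entailed without the
# intermediate identity statement 'Jack is Dr. Smith'.
# entails_modulo_key(jack, ("dead", "entity-23")) → False
```

The sketch also reflects the type-level point made above: no amount of key-checking supplies the identity of Jack and Dr. Smith; that requires an extra premise.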
The Modulo-Key Condition implies a cognitive claim, since keying is the
cognitive function of being intentionally focused on specific objects in the
world in a specific state of affairs. This cognitive claim involves at least the
existence of a system of coordinates for the mental representation of states of
affairs. Whenever the sentences in question are used ‘seriously’, and not as
part of a fictional text presented as such, these coordinates have values that are
located in the actual world. For the entailment relation to be applicable, and
indeed for the construction of any coherent discourse, the participants in the
discourse must share a system of coordinates needed for a well-determined
common intentional focusing on the same objects and the same time. The
mechanism needed for a proper functioning of such a mental system of
coordinates and their values is still largely unknown. We do know, however,
that it is an integral part of and a prerequisite for an overall system of
discourse construction, both in production and in comprehension—the
system that we call anchoring. Since most of this system is still opaque, we
are forced to conclude that what presented itself as a trivial condition of
constancy of keying for entailment relations turns out to open up a vast
area of new research, a terra incognita for the study of language, meaning,
and logic.
In addition to entailment, there is also EQUIVALENCE, normally defined as
entailment in both directions: ‘P is equivalent with Q’ is said to mean that
P ⊨ Q and Q ⊨ P. This will do no harm for the moment, but in Chapter 3 it is
argued that it is probably not a good way of making explicit what (semantic)
equivalence amounts to in natural language and cognition. In natural language
and cognition, equivalence is not so much a (meta)relation, yielding truth or
falsity when applied to any n-tuple of L-propositions, as a cognitive operation
taking two or more L-propositions and turning them into one at a certain level
of representation. As a relation, equivalence makes little cognitive sense, since
when two L-propositions are equivalent at some level of cognitive representa-
tion, they count as one, not as two. As an operation, however, equivalence
makes a great deal of cognitive sense, since what counts as two or more at
some level of representation can be made to count as one at a different level. In
this sense one can say that Jack sold a car to Jim is equivalent with Jim bought a
car from Jack (modulo key). To say that these two sentences are equivalent
then amounts to saying that they are turned into one, or are identified, salva
veritate, at some level, but not necessarily at all levels, of representation.
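For propositional formulas, the two-way-entailment definition can be checked mechanically over all valuations. The following sketch (our own illustration, not part of the author's framework; formulas are encoded as Python functions over a valuation) shows the metarelation that the standard definition starts from:

```python
from itertools import product

def valuations(atoms):
    """All assignments of True/False to the atomic propositions."""
    for values in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def entails(p, q, atoms):
    """P entails Q iff Q is true in every valuation that makes P true."""
    return all(q(v) for v in valuations(atoms) if p(v))

def equivalent(p, q, atoms):
    """Equivalence as entailment in both directions."""
    return entails(p, q, atoms) and entails(q, p, atoms)

atoms = ["a", "b"]
p = lambda v: not (v["a"] and v["b"])        # not-(a and b)
q = lambda v: not v["a"] or not v["b"]       # not-a or not-b
# equivalent(p, q, atoms) → True  (De Morgan)
```

A metarelation of this kind says nothing about levels of cognitive representation; that is precisely the gap the operation view sketched above is meant to fill.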
In the definition of entailment given above we have inserted the condition
‘on account of the specific linguistic meaning of P’. This is, in itself, not
controversial, but the wording implies that necessarily true L-propositions
cannot properly be said to be entailed by any arbitrary L-proposition (the
medieval inference rule ‘verum per se ex quolibet’), and likewise that neces-
sarily false L-propositions cannot properly be said to entail any arbitrary
L-proposition (‘ex falso per se ad quodlibet’). These theorems may be said
to hold in a strictly mathematical sense, yet they fail to satisfy the definition
given, since no specific semantic properties of the entailing L-proposition are
involved. We also consider them to be irrelevant for a proper understanding
of natural language. The entailments that are relevant are subject to the
condition that they derive from the lexically defined meanings of the predi-
cates occurring in the entailing sentence, as it is predicates that produce truth or
falsity when applied to objects of the proper kind, due to their satisfaction
conditions—that is, the conditions that must be satisfied by any object to
‘deserve’ the predicate in question. Since more specific conditions imply less
specific conditions (for example, the condition of being a rose implies the
condition of being a flower), the satisfaction of a more specific predicate by
certain objects implies the satisfaction by the same objects of a predicate
defined by less specific conditions. This is the basis of the entailment relation
we wish to consider. It means that, as long as the objects and the state of affairs
involved remain the same, the predicates can do their entailment work. We
thus require of the relation of NATURAL ENTAILMENT from P to Q that it be
subject to the condition that the preservation of truth rests on the meaning of the
predicates in the entailing sentence P and on their structural position in P.
Henceforth, unless otherwise specified, when we speak of entailment, what
is intended is natural entailment.
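The rose/flower pattern can be sketched by modelling predicate meanings as sets of satisfaction conditions: the more specific predicate carries all the conditions of the less specific one, and set inclusion then yields the entailment. (A toy illustration of our own; the condition labels are invented.)

```python
# Predicate meanings as sets of satisfaction conditions (labels invented).
SATISFACTION_CONDITIONS = {
    "rose":   {"is-plant", "is-flowering", "is-of-genus-Rosa"},
    "flower": {"is-plant", "is-flowering"},
}

def naturally_entails(pred_p, pred_q):
    """'x is a P' entails 'x is a Q' when Q's conditions are included in
    P's: any object satisfying the more specific predicate thereby
    satisfies the less specific one."""
    return SATISFACTION_CONDITIONS[pred_q] <= SATISFACTION_CONDITIONS[pred_p]

# naturally_entails("rose", "flower") → True
# naturally_entails("flower", "rose") → False
```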
language and cognition. The question of what is the, or a, proper model for
the logic of language has so far received little or no attention. It is this
question that is central to the present study.
they presuppose that certain conditions are fulfilled in the states of affairs the
logic is to be applied to. That having been said, the feeling is that all is well. In
fact, however, such specific logics must be caught under the umbrella of
some sound universally applicable logic, or else there is no specific logic at
all. In the case of examples (1.1) and (1.2), the ‘umbrella’ is completed by the
addition of a silently understood contingent condition, namely that prisoners
of war are protected by the Geneva Convention. In this book, we are not
concerned with ‘specific’ or ‘nonmonotonic’ logics. What we are concerned
with is the more basic, though technologically less challenging, question of
the meanings of the logical constants concerned in the overall, universally
applicable, ‘umbrella’ logic.
Consider the well-known example of traditional Aristotelian-Boethian
predicate calculus. This logic is sound only for states of affairs where the
class of things quantified over is nonnull: it has so-called ‘existential import’,
which makes it nonvalid as a logical system. It is widely believed that this logic
is saved from its undue existential import by the mere stipulation that it
PRESUPPOSES that the class of things quantified over (the F-class) is nonnull.
Once that condition has been stated, so it is thought, the logic is safe. But this
cannot be correct. For either Aristotelian-Boethian predicate calculus is to be
considered a specific logic, in which case it is in need of a general ‘umbrella’
logic, or it is meant to be a general logic, in which case it must specify what
entailments are valid in any arbitrary state of affairs, no matter whether the
class of things quantified over does or does not contain any elements.
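The point can be made concrete with a small model check. Under the standard modern reading, 'All F are G' is vacuously true when the F-class is empty, while 'Some F is G' is then false, so the subaltern entailment of Aristotelian-Boethian predicate calculus fails for exactly those states of affairs. (A sketch of our own, with arbitrary element names.)

```python
def all_f_are_g(F, G):
    """'All F are G' on the modern reading: vacuously True for empty F."""
    return all(x in G for x in F)

def some_f_is_g(F, G):
    """'Some F is G': existential import, False for an empty F-class."""
    return any(x in G for x in F)

F, G = set(), {"g1", "g2"}          # a state of affairs with a null F-class
# all_f_are_g(F, G) → True, some_f_is_g(F, G) → False:
# truth is not preserved, so A does not entail I in the null-class case.
```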
Strawson failed to see this when he proposed (1952: 170–6) that (the
L-proposition underlying) a statement like (1.3) lacks a truth value (falls
into a ‘truth-value gap’) because there is in fact not a single Londoner alive
of that age:
(1.3) All 150-year-old Londoners are bald.
If Strawson were right, it would follow that (1.3), which otherwise has
impeccable papers for serving in Aristotelian-Boethian predicate calculus,
falls outside that calculus when used here and now. But as soon as one
Londoner were indeed to reach the age of 150 years, (1.3) suddenly would
have a truth value and would take part in Aristotelian-Boethian predicate
calculus. And then, all of a sudden, it would entail that at least one 150-year-
old Londoner is bald—an entailment defined as valid in that logic. It is easily
seen that this is in stark conflict with the concept of entailment as used in
logic, since it makes entailments dependent on contingencies, on what
If that is so, there must be a regular mapping system relating the two levels of
representation. And it is up to any one individual whether he or she prefers to
call that mapping system the grammar of L or part of its semantics.
It is customary to say that logical forms or L-propositions are either ATOMIC
or COMPLEX. An atomic L-proposition is seen as consisting of a predicate
F expressing a property and one or more terms used to denote the objects to
which the property expressed by F is attributed. A complex L-proposition
contains at least one propositional operator. We adhere to this distinction,
although it should be understood that it is of a purely logical and not of a
linguistic or grammatical nature. A sentence like:
(1.4) Despite the fact that it had been snowing heavily the whole day, she
decided to drive to the factory, hoping that she would find the answer
there.
is, of course, grammatically complex. Yet it is considered logically atomic, as
it contains no propositional operator. It is up to linguistic analysis to show
that sentence (1.4) is structured in such a way that indeed a property is
assigned to one or more objects. This can only be shown if it is assumed
that some objects are of a kind that allows for linguistic expression by means
of an embedded sentence or S-structure that functions as a term to a predicate
at L-propositional level, so that recursive embeddings of S-structures are
allowed. Embedded S-structures must then be considered to refer to abstract
objects of some kind (see Section 6.2.3 in Volume I). Seuren (1996) gives an
idea of how the grammatical analysis of sentences shows up a hierarchical
predicate–argument structure. The development of a semantics to go with
this type of grammatical analysis is part of a comprehensive research
programme leading to an integrated theory of language.
So far, we have seen that a logic is a calculus of entailments, and that
an entailment of an L-proposition P—which may be a set of L-propositions
conjoined under AND—specifies on analytical, semantic grounds what L-
propositions Q, R, S, . . . (apart from P itself) must likewise be true (modulo
key) when an assertive token utterance of P is true.
of truth-value assignments, with just two truth values, True and False, in
which all L-propositions with a given key always have a truth value. It was not
until the 1920s that variations on this theme began being proposed, in
particular by the Polish logician Łukasiewicz, but by many others as well
(see Rescher 1969, ch. 1). These variations on the theme of multivalence were
not, on the whole, supported by linguistic intuitions. On the contrary, they
were motivated by a variety of considerations, covering modal logic, future
contingency, mathematical intuitionism, undecidable mathematical state-
ments, and logical paradoxes. It was not until after the 1950s that the notion
of trivalent logic was mooted in connection with natural language, in partic-
ular with presuppositional and vagueness phenomena. Given the great variety
in the motivations for multivalued logics, it is understandable that a certain
amount of confusion ensued, which in turn led to a situation where investiga-
tions into multivalued logics did not achieve a high degree of respectability. In
fact, logicians have, on the whole, been anxious to safeguard logic from any
incursions of multivalence.
Since we, too, are threatening the bivalent shelter of standard logic, it is
important to state as exactly as possible what is meant by the PRINCIPLE OF
BIVALENCE. We define the Bivalence Principle as consisting of two independent
subprinciples:
PRINCIPLE OF BIVALENCE
(i) SUBPRINCIPLE OF COMPLETE VALUATION OF L-PROPOSITIONS:
All well-anchored and well-keyed L-propositions have a truth value.
3 We leave aside the many cases where universal quantification does require a specific context, as in
Tout Paris était là (All Paris was present), which, snobbishly, selects only a specific section of Parisian
society.
Here we see that Aristotle rules out, perhaps not from his theory of language
but certainly from his logic—the main topic of Prior Analytics—all proposi-
tions that need a specific form of anchoring or keying for the assignment of
a truth value—Quine’s occasion sentences. Aristotle’s entire logic is built on
L-propositions corresponding to Quine’s eternal sentences. And of these
eternal sentences, it is only the universally and existentially quantified ones
that play a role in the logic. ‘Indefinite’, or as we might say, generic, sentences
are given a good deal of attention in On Interpretation, but they play no part
in the logical system. Aristotle would have no truck with occasion sentences,
probably, one surmises, because he saw the problems coming, as one cannot
deal with the logic, or indeed the semantics, of occasion sentences without
taking into account conditions of anchoring and keying, which pose an
immediate threat to the simplicity of the system. His refusal, or perhaps his
inability, to face this threat was canonized during the first half of the twentieth
century and vestiges of that attitude are still found today. This, however, is a
luxury we cannot afford when we investigate the logic and the semantics of
natural language sentences and words.
On the whole, logicians dislike the complications arising out of the condi-
tions of anchoring and keying. What they want is a logic that operates solely
on expressions whose grammatical wellformedness is a sufficient condition
for their having a truth value. They want to read the subprinciple of complete
valuation of properly anchored and keyed L-propositions as the subprinciple
of complete valuation of sentences as grammatical objects. But this is totally
unrealistic with regard to the logic and semantics of natural language. In
natural language, wellformedness of a sentence is condition one for an expres-
sion to have a truth value (though a great deal of implicit correction is allowed
for in practice). Condition two is that it be properly anchored and keyed.
Logicians want the latter condition to be either otiose or nonexistent, a wish
we must reject as being out of touch with the reality of language. We also say
that when an assertive sentence is uttered as a properly anchored and keyed
statement, it necessarily has a truth value, because it is impossible mentally to
assign a property to one or more objects without there being some form of
correspondence or noncorrespondence to what is the case—that is, some
form of truth or lack of truth.
The question now is: how many truth values are there in the natural
language system? Strawson considered it possible for a properly anchored
and keyed statement to have no truth value at all. That must be deemed
inadequate, as was shown in Section 1.3. Strawson’s proposal invites one to
treat his ‘lack of truth value’ as a truth value after all, though inappropriately
named. But if one does that, one needs a logic that takes more than the two
22 The Logic of Language
values True and False. And this is precisely what we find in natural language,
which, in our analysis, operates with the values True, False-1 or ‘minimally
false’, and False-2 or ‘radically false’ (probably with intermediate values
between the three).
So we uphold the subprinciple of complete valuation of properly anchored
and keyed L-propositions as being necessary by definition. But we are
prepared to tamper with the subprinciple of binarity. We feel free to do so
because giving up the subprinciple of binarity enables us to present a more
adequate account of the semantics of natural language, and also because in
logic this (sub)principle seems to be motivated merely by a desire to keep
logic free from the complications arising in connection with anchoring, key-
ing, and gradability. We need to consider the possibility not only of two
different kinds of falsity and thus of three truth values, but also of fuzzy
transitions between truth values. This places the Aristotelian axiom of strict
bivalence in a wider metalogical context, in that standard strictly bivalent
general logic turns out to be the limiting minimal case of an infinite array of
possible, and logically richer, general logics that vary either on an axis of
intermediate truth values or on an axis of semantically defined presupposi-
tional restrictions to certain contexts.
mental contingencies is easily factorized out. But it has also been applied,
especially since the 1960s, to the study of linguistic meaning, whereby it was
assumed that linguistic meaning, like mathematical meaning, is independent
of mental contingencies and nonvague. This assumption has, however, proved
unwarranted over the past quarter-century. Not only do most predicate
meanings in natural language impose contextual restrictions, called precon-
ditions, which generate presuppositions; they are also often vague and/or they
incorporate all kinds of purely cognitive (often evaluative) conditions, besides
the conditions to be satisfied by the objects themselves to which the predicate
is applied.
We have no choice but to reject PCI as being irreconcilable with natural
language. For if sentences normally require anchoring and keying to have a
truth value, it follows that the machinery that does the anchoring and the
keying—that is, the human mind—must be at some suitable point in the
development of a discourse or context and must be intentionally focused on,
or keyed to, a particular state of affairs for the truth value assignment to take
place successfully.
This is amply borne out by natural language, which violates PCI in a
number of ways. For example, most uttered sentences contain DEFINITE TERMS
referring to specific objects or sets of objects to which, truly or falsely, a
property is assigned. For the reference relation to be successful it is necessary
that the means be available to identify the object or objects in question (Clark
and Wilkes-Gibbs 1990). In most cases, these means can only be provided if
the mind is in a contextually and referentially restricted information state.
Reference clearly requires specific anchoring and keying.
The same is found with regard to type-level lexical meanings. There are
cases where the satisfaction conditions of predicates depend on (contain an
open parameter referring to) what is taken to be normal in any given context.
Consider, for example, the predicate many. If it is normally so that out of an
audience of three hundred taking part in a TV quiz nobody gets the one-
million Euro prize, then, when in one session three participants get it, one can
say in truth that there were many one-million Euro prize winners in that
session. But if only three out of three hundred people voted for me in an
election, then, one fears, it is false to say that many people voted for me.
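The open norm parameter can be sketched as follows (an illustrative toy, not the book's analysis; the function name and the bare comparison against the norm are arbitrary choices for the sketch):

```python
# Illustrative toy (not the book's analysis): the satisfaction condition
# of "many" with an open parameter for the contextual norm; comparing the
# count against that norm is an arbitrary choice for the sketch.

def many(count, context_norm):
    """True iff count exceeds what is taken to be normal in the context."""
    return count > context_norm

# TV quiz: normally 0 winners out of 300, so 3 winners counts as many.
print(many(3, 0))                 # True
# Election: with a contextual norm of, say, 100 votes, 3 is not many.
print(many(3, 100))               # False
```

The point of the sketch is only that the same count, 3 out of 300, satisfies or fails the predicate depending on the value supplied by the context.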
Or consider cases of what is called DYNAMIC FILTERING in Section 9.6.3 of
Volume I, found all over the lexicons of natural languages. For example, the
conditions for the predicate flat to be satisfied differ considerably when it is
applied to a tyre, a road, or a mountain. And a definite term like the office will
have different interpretations according to whether it stands as a subject term
to the predicate be on fire or, for example, have a day off. In the former case the
24 The Logic of Language
creates the impression that one may adequately operate with a strictly bivalent
system of simple truth and simple falsity. Since, however, natural language is
very much more complex and more varied, there is room for the idea that
the logical system of language requires a more finely grained set of truth
parameters than just True and False. This is one of the questions we explore
in this book.
2
order must maintain consistency, or else the social fabric will collapse. For
that reason it is of the greatest importance, first to language users and
secondly to language theorists, to spot inconsistencies or, in logical terms,
to spot sets of propositions that cannot simultaneously be true. When such a
set consists of just two propositions, we speak of contrary propositions. This is,
in fact, how Aristotle set up his predicate logic in On Interpretation, which
revolves around the notions of contrariety and contradictoriness. Aristotle
called any two L-propositions which, for semantic reasons, cannot be simul-
taneously true ‘contraries’, and any two L-propositions that can, again for
semantic reasons, be neither simultaneously true nor simultaneously false
‘contradictories’. On the standard (yet oversimplified) assumption that all
natural languages have a logical constant of negation—not in English—which
inverts truth values (modulo key) under the axiom of bivalence, the contra-
dictory of an L-proposition P is its negation NOT(P), or ¬P: the truth of the
one entails the falsity of the other, either way.
To give a trivial example, the two sentences in (2.1a) are contraries, since
anyone who asserts both is guilty of inconsistency (‘><’ stands for contrari-
ety). This means that when (the L-proposition underlying) Joe has been
murdered is true, Joe is not dead must be false and, therefore, Joe is dead
must be true (assuming bivalence). In other words, Joe has been murdered
entails Joe is dead, as stated in (2.1b). (One notes that we have silently passed
from entailment in a general sense to logical entailment. This is legitimate
given the definition of bivalent negation.)
(2.1) a. Joe has been murdered >< Joe is not dead
b. Joe has been murdered ⊢ Joe is dead
Entailment can thus be defined in terms of contrariety and the contradiction-
producing negation operator. It is also possible, of course, to define contrari-
ety in terms of entailment and contradictoriness, but that seems less natural,
given the basic requirement of consistency in the use of language. Nor is it
how Aristotle proceeded.
Any relation of contrariety between two L-propositions P and ¬Q thus
brings along a relation of entailment from P to Q. Figure 2.1 shows the
triangular relation arising from the assumed contrariety of P and ¬Q,
which causes the entailment from P to Q, the contradictory of ¬Q.
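The triangular relation can be illustrated with valuation spaces modelled as sets of situations; the universe U and the particular spaces below are invented for the sketch:

```python
# A sketch with valuation spaces modelled as Python sets; U and the
# particular spaces below are invented for the illustration.

U = set(range(8))                 # universe of admissible situations
VS_P = {0, 1}                     # /P/: the situations in which P is true
VS_Q = {0, 1, 2, 3}               # /Q/: includes all of /P/ and more

def neg(vs):                      # /not-X/ is the complement of /X/ in U
    return U - vs

def contraries(a, b):             # cannot be simultaneously true
    return not (a & b)

def contradictories(a, b):        # complementary valuation spaces
    return (a | b) == U and not (a & b)

def entails(a, b):                # whenever the one is true, so is the other
    return a <= b

# The triangle: the contrariety of P and not-Q forces the entailment
# from P to Q, with Q and not-Q as contradictories.
assert contraries(VS_P, neg(VS_Q))
assert entails(VS_P, VS_Q)
assert contradictories(VS_Q, neg(VS_Q))
```

Whenever /P/ and the complement of /Q/ are disjoint, /P/ is necessarily included in /Q/: that inclusion is the set-theoretic content of the triangle.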
This triangle is arguably ‘natural’ in the sense that it may be taken to reflect
natural, as opposed to constructed, set-theoretic structures and relations—a
supposition that is further elaborated in Chapter 3. It forms the natural basis
of the logical system of propositional calculus with the operators AND, OR, and
Logic: a new beginning 29
[FIGURE 2.1 The natural logical triangle, relating P, Q, and ¬Q (⊢: entails; CD: contradictories; C: contraries).]
[FIGURE 2.2 The natural logical triangle extended with contraposition, relating P, ¬P, Q, and ¬Q (CD: contradictories; C: contraries; SC: subcontraries).]
[FIGURE 2.3 (a) the logical triangle and (b) the square in the Boethian arrangement (CD: contradictories; C: contraries; SC: subcontraries).]
sitact ∈ /P/:   P = T,  ¬P = F
sitact ∈ /¬P/:  P = F,  ¬P = T
FIGURE 2.5 Truth table of the standard bivalent negation ¬ in terms of VS-modelling
‘not true’ equals ‘false’ and ‘not false’ equals ‘true’. Clearly, if P is necessarily
false, as, for example, the L-proposition underlying the sentence He was
dead for the rest of his life, /P/ = ∅; if P is necessarily true, as in He
was alive for the rest of his life, /P/ = U.
Thus defined, negation can be said to ‘toggle’ between truth and falsity. The
property of the negation simply to invert truth values of propositions is
expressed in the TRUTH TABLE of the standard negation shown in Figure 2.5
(adapted to a valuation space interpretation), where ‘T’ stands for True and
‘F’ for False.
It is possible, however, to vary on this theme. For example, one may define a
negation operator that selects the complement of /P/ within a subset of situa-
tions in U defined on the basis of preceding context or the meaning of the main
predicate in P, in which case room is made for more than one negation. In this
book we argue that this is, in fact, the situation in natural language, where the
main function of the negation NOT is to toggle between the values ‘true’ and
‘minimally false’, minimal falsity being caused by those situations that are
outside /P/ but within P’s subuniverse in U. Further comment on this issue is
provided in Chapter 3, but a full discussion has to wait till Chapter 10.
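As a rough sketch (with invented situation sets, and ignoring the fuzzy transitions mentioned earlier), the three-valued assignment can be rendered as follows:

```python
# Illustrative sketch of the trivalent valuation: the minimal negation
# operates within P's subuniverse; radical falsity arises outside it.
# The situation sets are invented, and fuzzy transitions are ignored.

U = set(range(10))                # universe of admissible situations
SUBUNIVERSE_P = {0, 1, 2, 3, 4}   # situations meeting P's preconditions
VS_P = {0, 1}                     # /P/: situations where P is true

def value(sit):
    assert sit in U               # only admissible situations get a value
    if sit in VS_P:
        return "true"
    if sit in SUBUNIVERSE_P:
        return "minimally false"  # preconditions met, main condition fails
    return "radically false"      # precondition (presupposition) failure

print(value(0))                   # true
print(value(3))                   # minimally false
print(value(9))                   # radically false
```

On this picture, NOT toggles "true" with "minimally false" only: both values lie within P's subuniverse, while "radically false" marks the situations beyond it.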
But what is meant by internal negation? So as not to complicate matters
unduly at this stage, let us say that the internal negation is a negation not over
an L-proposition but over a predicate. This is not an adequate definition, since,
as is shown in Sections 2.3.5.2 and 2.4.1, internal negation is better defined as a
small scope negation over an embedded L-propositional structure, but it will
do for the moment. If it is accepted that a predicate, for example human,
expresses a property possessed by all objects in the world that are indeed
human, then the ‘negation’ of this predicate, not-human, functioning as the
internal negation of an L-proposition, expresses the lack of that property for all
objects that are not human. Call the set of all human objects the EXTENSION of
the predicate human, or [[Human]], in the set of all objects OBJ. Then the set
of objects that are not human is the extension of the predicate not-human, or
[[NOT-Human]], in OBJ. This makes [[NOT-Human]] the complement of
[[Human]]—that is, OBJ minus [[Human]], or the complement of [[Human]]—as long as there
are no vague boundaries between what is and what is not human.
The logical interest of internal negation, in the present context, lies in the
relation of DUALITY (Löbner 1990):
DUALITY
Two logical constants X and Y are each other’s duals just in case there is
logical equivalence between X preceded by the external negation and Y
followed by the internal negation and, of course, vice versa.
In standard predicate calculus, for example, the quantifiers ALL and SOME are
each other’s duals, since in that system an L-proposition corresponding to the
form NOT ALL F is G (where F and G are predicates) is equivalent with
an L-proposition of the form SOME F IS NOT-G and, analogously, NOT SOME
F IS G/NO F is G is equivalent with ALL F IS NOT-G. These equivalences are
standardly known as the CONVERSIONS. Henceforth, when dealing with external
versus internal negation, we will use the standard symbol ‘¬’ for external
negation and the symbol ‘*’ for internal negation. That is, ¬P is the external
negation of the L-proposition P and P* is the L-proposition P but with its
main lexical predicate negated by the internal negation. Obviously, in a
strictly bivalent system, double external negation and double internal
negation cancel out: for any L-proposition P, ¬¬P ≡ P and, since for any
predicate C, NOT-NOT-C ≡ C, P** ≡ P.
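These Conversions can be checked mechanically on a small model, with predicates as sets over an invented toy domain (one model is a check, not a proof, though the equivalences follow generally from complementation in OBJ):

```python
# The Conversions checked on a small model: predicates as sets over an
# invented domain OBJ, quantifiers as relations between those sets.

OBJ = set(range(6))
F = {0, 1, 2}
G = {1, 2, 3}

def all_(f, g):                   # ALL F IS G: F is included in G
    return f <= g

def some(f, g):                   # SOME F IS G: F and G overlap
    return bool(f & g)

not_G = OBJ - G                   # internal negation: complement predicate

# NOT ALL F IS G  is equivalent with  SOME F IS NOT-G
assert (not all_(F, G)) == some(F, not_G)
# NOT SOME F IS G  is equivalent with  ALL F IS NOT-G
assert (not some(F, G)) == all_(F, not_G)
```

This is the duality of ALL and SOME: external negation on one side matches internal negation on the other.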
It must be noted, at this point, that the logical properties of two
L-propositions P and P* are identical for the simple reason that the choice of
lexical predicates is irrelevant for the logic, which is defined by the logical
constants only. We call this the MODULO-*-PRINCIPLE. The relevance of the
internal negation for predicate logic lies in any logical relation of duality
or of a one-way entailment between two logical constants X and Y when
one is preceded by the external negation and the other is followed by the
internal negation. The Modulo-*-principle provides an extra check for
the soundness of a predicate-logic system in that any system that violates the
Modulo-*-principle is by definition unsound.
Logicians have observed that a similar relation of duality exists between the
propositional operators AND and OR, a form of duality known as DE MORGAN’S
LAWS. In standard propositional logic, the operators of negation, conjunction
and disjunction are defined in such a way that for the L-propositions P and Q,
NOT(P AND Q) is equivalent with NOT(P) OR NOT(Q). Analogously, NOT(P OR Q)
[FIGURE 2.6 (a) the natural isomorphic square, (b) the two improved but nonisomorphic Boethian squares ('improved Boethian 1' and 'improved Boethian 2'), and (c) the nonnatural isomorphic square (CD: contradictories; C: contraries; SC: subcontraries).]
lack this regularity. There the triads <P,Q,Q*> and <P,Q,¬Q> of the left-
hand side triangles correspond to the nonisomorphic triads <¬Q,¬P,¬Q*>
and <P*,Q*,¬P*>, respectively, of the right-hand side triangle. Clearly, the
isomorphic representation displays more regularities in the logical system
than the nonisomorphic representation.
Figure 2.6c shows a second way of making a square out of two isomorphic
triangles with the same logical power.1 Here we have a triangle <P,¬P,Q>,
linked up with the isomorphic triangle <P*,¬P*,Q*>. Now the relations
involved are those of entailment, contradiction, and subcontrariety, with
contrariety thrown in as a bonus due to the linking up through the Conver-
sions. In Chapter 3, however, it is argued that subcontrariety is not a basic-
natural logical relation but a relation that requires a great deal of scholastic
training to be grasped, whereas contrariety is just about maximally natural.
For that reason, the square of Figure 2.6c is called ‘nonnatural isomorphic.’
From a strictly logical point of view, the difference is immaterial, but it is not
when we are in search of natural logic. Since natural logic is what we are after,
we consider Figure 2.6a to be the preferred representation.
1 I am indebted to Dany Jaspers (2005: 34–5) for calling this fact to my attention.
[FIGURE 2.7 ABPC represented as (a) the Boethian Square of Opposition and (b) the natural square consisting of two isomorphic triangles. A: All F is G; I: Some F is G; E: No F is G / All F is not-G; O: Some F is not-G / Not all F is G.]
2 Jaspers (2005) calls the I-vertex the PIVOT of the triangle, mainly on the grounds that particular
existential knowledge is cognitively prior to general knowledge. The concept of pivot is useful in
natural logic also because the negation of the pivot establishes the relation of contrariety between the
A-vertex and the ¬I-vertex, thus stressing the triangular character of the relations of contrariety,
entailment, and contradictoriness.
3 Interestingly, the view that quantifiers are, logically speaking, predicates is also found in some
nineteenth-century logic textbooks. Thus we find (Sigwart 1895: 160):
Thus, according to its original meaning, 'All A's are B' can only be said in reference to definite particular objects. And
here from a logical point of view the ‘all’ is the predicate (the A’s which are B are all A’s).
That this was not an isolated view is proved by the fact that the linguist Meyer-Lübke held the same
view as regards the existential quantifier TWO (Meyer-Lübke 1899: 352; translation mine):
From the point of view of logic there can be no doubt that in the sentence il arrive deux étrangers [two foreigners
arrive] the subject is il arrive while deux étrangers is the predicate.
4 In the Finno-Ugric languages, as McCawley observed, the negation operator often turns up as a
surface verb, though usually with a defective paradigm (which, in fact, corresponds closely to the
defective paradigm of the English modal auxiliary verbs such as can, may, must, will). A sentence like
Kevin didn’t laugh comes out as something like ‘Kevin notted (to) laugh’. Similar phenomena are
found in many other languages. For example, in the Amazonian language Dâw (Andrade Martins
2004: 559), the verbal negation suffix -ẽh is derived from the negative verb mẽh meaning 'not have'.
Brown and Dryer (2008) describe a language, Walman (a Torricelli language of Papua New Guinea),
where the conjunctor and as a connective linking two or more NPs turns up in surface structure as a
verb. Brown and Dryer (2008: 563) are puzzled by this fact, because, in a predicate-logic analysis, there
seems to be no way in which the NP-conjoining and can be taken to be a verb or a predicate, since a
nominal conjunction can be nothing but an argument to a predicate. Their proposal to view this
verbal and as a serial verb seems plausible for those nominal conjunctions that do not allow for a
reduction to clausal conjunction, as in Vivian and Lesley are a nice couple. As a clausal connector, and is
easily seen as a (truth-functional) n-ary predicate that takes two or more S-structures as arguments.
bachelor not only semantically but also logically entails that John is male, not
married, and so on.
A meaning postulate is thus a stipulated entailment schema or ‘inference
rule’ meant to help define a logical system. It is important to realize that
meaning postulates of the type given above must be read as entailment
schemata that are part of the logical system, and not as formulae within the
system, which would turn them into contingent statements about the world.
For example, suppose it happens to be the case that all houses in the village
have two bathrooms, then it is true to say ‘for all x, if x is a house-in-the-
village, then x has two bathrooms’. But this does not provide a licence to
describe the lexical meaning of the predicate house-in-the-village as implying
the condition ‘having two bathrooms’. Meaning postulates are not meant to
be true about the world but true about the language (which has been
incorporated into the logic in so far as it has been fitted out with meaning
postulates) and hence, once the language has been fixed, necessarily true in
any world. For that reason the entailment sign '⊢' is placed in front of the
meaning postulate.
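A minimal sketch of this reading of meaning postulates, with the postulate for bachelor stipulated as an inference rule (the closure procedure and the predicate names are invented for the illustration):

```python
# Illustrative sketch: a meaning postulate as a stipulated inference rule
# belonging to the system, not a contingent formula within it. The table
# and predicate names are invented for the example.

MEANING_POSTULATES = {
    "bachelor": ["male", "unmarried"],   # bachelor(x) entails male(x), unmarried(x)
}

def entailed_predicates(predicate):
    """Close a predicate under the stipulated meaning postulates."""
    result = set()
    stack = [predicate]
    while stack:
        p = stack.pop()
        for q in MEANING_POSTULATES.get(p, []):
            if q not in result:
                result.add(q)
                stack.append(q)
    return result

print(sorted(entailed_predicates("bachelor")))   # ['male', 'unmarried']
```

Note that nothing in the table is read off from the world: adding a contingent regularity such as 'house-in-the-village entails two-bathroomed' would be a category mistake on this view, since the postulates are meant to be true about the language, not about the world.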
Meaning postulates have had something of a career in formal semantics
and in the philosophy of language, where it has been claimed that they can be
used to define lexical meanings exhaustively, specifying the severally necessary
and jointly sufficient conditions for them to produce truth when applied to
their term referents. The dominant view in formal semantics and in the
philosophy of language has been to accept the principle ‘get your entailments
right and you get your meanings right’.
Although this view has a certain appeal, I believe—and have frequently
argued in the past—that it is basically flawed. I take sides with Frege in that I
take lexical meanings to be defined by satisfaction conditions (Frege’s Sätti-
gungsbedingungen), which have their roots in cognitive criteria. Natural
semantic entailments follow from lexical satisfaction conditions but do not
define them. In my view, moreover, lexical meaning is more than just truth-
conditional satisfaction conditions, in that the satisfaction of lexical semantic
conditions, besides leading to truth, often also leads to other sorts of results,
often of an ill-understood nature. Some aspects of this question are discussed
in Chapters 8 and 9 of Volume I.
The theory of meaning postulates constitutes an attempt at incorporating
natural language into, or superimposing it onto, a logical system, which is
considered to be given a priori and is implicitly taken to be standard modern
logic, the only logical system considered viable. What is tried here is the exact
opposite. For us, any logical system is part of some (natural or artificial)
language. We try to see logical properties as epiphenomena—certainly highly
P   ¬P          P ∧ Q   Q: T   F          P ∨ Q   Q: T   F
T   F           P: T       T   F          P: T       T   T
F   T           P: F       F   F          P: F       T   F
It is easy to see that Boolean algebra formalizes set theory. Let the variables
x, y, z, . . . range over sets, while 0 is the null set and 1 stands for the
domain of objects OBJ. Interpret multiplication as set-theoretic intersection
(∩), addition as set-theoretic union (∪), and complement as set-theoretic
complement (x̄). Boolean algebra now computes all set-theoretic operations.
Boolean algebra also computes the truth functions of propositional calcu-
lus. One way of doing so was developed by Frege. Instead of taking sets as
values of the Boolean symbols, Frege took ‘truth’ as the value of Boolean 1
and ‘falsity’ as the value of Boolean 0 (which is the origin of the widespread
convention, not followed in this book, to use ‘1’ for truth and ‘0’ for falsity).
No other symbols are required. NEGATION (¬) is now interpreted as Boolean
complement, CONJUNCTION (∧) as Boolean multiplication and DISJUNCTION (∨)
as Boolean addition. Let φ(P) stand for the truth value, that is the Fregean
extension, of any atomic or complex L-proposition P. Figure 2.9 shows how
the truth-functional operators of standard propositional calculus are com-
puted as Boolean functions:
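By way of illustration, the Fregean computation just described can be sketched as follows (an illustrative reconstruction, not the book's own figure; the capped addition for ∨ reflects the Boolean law 1 + 1 = 1):

```python
# Illustrative sketch of the Fregean interpretation: 1 for truth, 0 for
# falsity; negation as Boolean complement, conjunction as multiplication,
# disjunction as idempotent Boolean addition (1 + 1 = 1).

def NOT(p):
    return 1 - p

def AND(p, q):
    return p * q

def OR(p, q):
    return min(p + q, 1)          # Boolean addition caps at 1

# Print the full truth tables of the three operators.
for p in (1, 0):
    for q in (1, 0):
        print(p, q, NOT(p), AND(p, q), OR(p, q))
```

Running the loop reproduces the standard tables: ¬ inverts the value, ∧ yields 1 only for 1·1, and ∨ yields 0 only for 0 + 0.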
All this is generally accepted and part of the standard foundations
of propositional logic. It is, however, also widely known, but less
widely publicized, that this seductive application of Boolean algebra to the
For those who are inclined to ask philosophical questions about what is
meant by ‘situation’ here, it is important to realize that one can very well
operate theoretically with a formally defined notion of ‘situation’ while
remaining uncommitted as to the precise status of ‘situations’ in the natural
ontology of speakers or, rather, cognizing humans. Although it would be
utterly unrealistic to assume that natural speakers have a fully elaborated
interpretation of set theory onto a system of valuation spaces at their
disposal when interpreting utterances, it is not at all unrealistic to assume
that they have an intuitive and not fully elaborated idea, probably beyond
the threshold of possible awareness and signalled by physical reactions in
the brain, of ‘possible truth in an overall set of situations’, and hence of
notions like (in)compatibility, contradiction, and necessary consequence.
Correspondingly, one may assume that notions like mutual exclusion,
partial intersection and proper inclusion of sets of situations are likely to be
cognitively real. In Chapter 3, a theory is proposed of how naïve, uneducated
humans deal with the plural objects we call sets. In this NATURAL SET
THEORY, mutual exclusion, mutual partial intersection, and proper inclusion
are taken to be the basic-natural set-theoretical notions. If that is so, it
easily follows that their logical counterparts, contrariety, contradiction,
and necessary consequence (entailment), likewise have some form of psycho-
logical reality.
VS-analysis allows for a different interpretation of Boolean algebra onto
propositional logic, in that the set-theoretic functions are used as an interme-
diary. We remember that an L-proposition P, expressing the proposition p,
has associated with it the set of all admissible situations in which p is true—
that is, the valuation space (VS) of P, or /P/. Since set theory is computable by
means of the functions of Boolean algebra, this algebra is applicable to the
truth functions of propositional calculus, with valuation spaces as the sets
involved and the universe of all admissible situations U standing for the
universe of all objects OBJ. So we use the standard interpretation of Boolean
algebra onto set theory, which treats 1 as the universe OBJ of all objects and 0
as the null set ∅. Since the Boolean variables are taken to range over sets,
set-theoretic COMPLEMENT corresponds to Boolean complement, set-theoretic
INTERSECTION to Boolean multiplication, and set-theoretic UNION to Boolean
addition. This translates directly onto the truth functions of propositional
logic, with VSs as sets and the universe U of all admissible situations as OBJ,
in the following way:
[Figure: VS-diagrams within U of (a) /¬P/ as the complement of /P/, (b) /P ∧ Q/ as the intersection of /P/ and /Q/, and (c) /P ∨ Q/ as their union.]
The standard logical relations of equivalence (≡), entailment (⊢, ⊨), con-
tradiction (#), contrariety (><), and subcontrariety (><) are likewise immedi-
ately expressible in the set-theoretic terms of valuation spaces, as is shown in
(2.4). One notes that contradictoriness (#) combines the conditions of con-
trariety (><) and subcontrariety (><).
(2.4) For all L-propositions P and Q:
a. P ≡ Q iff /P/ = /Q/
b. P ⊢ (⊨) Q iff /P/ ⊆ /Q/
c. P # Q iff /P/ ∪ /Q/ = U and /P/ ∩ /Q/ = ∅ (or: /P/ = /Q̄/)
d. P >< Q iff /P/ ∩ /Q/ = ∅
e. P >< Q iff /P/ ∪ /Q/ = U
The standard set-theoretic relation of identity (=) corresponds to the logical
relation of equivalence defined in (2.4a). Inclusion (⊆) corresponds to entail-
ment, as shown in (2.4b). Contradictoriness is defined by the set-theoretic
relation of complement, as in (2.4c). Contrariety is defined by the set-
theoretic relation of mutual exclusion, as in (2.4d). And subcontrariety corre-
sponds to a set-theoretic relation that has so far not been honoured with
a name in standard set theory but for which we invent the name of FULL UNION
(∪̊), defined, for sets A and B, as follows:
(2.5) FULL UNION: A ∪̊ B iff A ∪ B = OBJ
It follows from (2.5) that, when A ∪̊ B holds, it will be impossible, for any
element o in OBJ, that o ∉ A and o ∉ B: o has to be an element in either A or B,
or in both. The logical counterpart of full union is subcontrariety, since two
L-propositions P and Q are subcontraries just in case it is impossible for both
P and Q to be false simultaneously.
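The definitions in (2.4) and (2.5) translate directly into set tests (a sketch over an invented toy universe; the spaces P and Q are chosen to overlap and jointly exhaust U):

```python
# The relations of (2.4) and the full union of (2.5) as set tests over
# valuation spaces; the universe U and the spaces P, Q are invented.

U = set(range(6))

def equivalent(p, q):    return p == q                        # (2.4a)
def entails(p, q):       return p <= q                        # (2.4b)
def contradictory(p, q): return (p | q) == U and not (p & q)  # (2.4c)
def contrary(p, q):      return not (p & q)                   # (2.4d)
def subcontrary(p, q):   return (p | q) == U                  # (2.4e), full union

P = {0, 1, 2, 3}
Q = {2, 3, 4, 5}
assert subcontrary(P, Q)          # no situation makes both false
assert not contrary(P, Q)         # they overlap in {2, 3}
assert contradictory(P, U - P)    # complement within U
```

As the last assertion shows, contradictoriness is exactly the combination of contrariety (empty intersection) and subcontrariety (full union within U).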
It should be borne in mind also that those predicates that fit into some
logical system may well have other semantic properties that transcend their
strictly logical character and have to do with the often confused or confusing
phenomena of reality and experience. Quantifying predicates like FEW or MANY
clearly have logical properties. But beyond these they also possess nonlogical
semantic properties, such as the implicit appeal to some standard with respect
to which it can be said of a set or collection that it has ‘many’ or ‘few’
members (see Section 1.6). This shows again that it is both useful and natural
to treat logical constants as predicates.
Following this lead we consider ¬, ∧ and ∨ to be predicates. That is, as far
as their role in propositional calculus is concerned, we say that ¬ assigns to a
(well-anchored and well-keyed) L-proposition P the property that sit_act is a
member of the COMPLEMENT /P̄/ of the associated valuation space, ∧ assigns to
the (well-anchored and well-keyed) L-propositions P, Q, R, S, . . . the property
that sit_act is in the INTERSECTION of the associated valuation spaces—that is,
of /P/ ∩ /Q/ ∩ /R/ ∩ /S/ ∩ . . . , and ∨ assigns to the (well-anchored and
well-keyed) L-propositions P, Q, R, S, . . . , the property that sit_act is in the UNION of
the corresponding valuation spaces—that is, of /P/ ∪ /Q/ ∪ /R/ ∪ /S/ ∪ . . . .
Using the notation introduced in Section 3.3 of Volume I for the specification
of the satisfaction conditions of predicates, we now write (2.6a–c), where
'/P/+' stands for any set of two or more valuation spaces of corresponding
L-propositions:
(2.6) a. [[¬]] = {P | sit_act ∈ /P̄/}
         (the extension of ¬ is the set of all L-propositions P such that the
         actual situation sit_act is a member of the complement of /P/)
      b. [[∧]] = {P+ | sit_act ∈ ∩/P/+}
         (the extension of ∧ is the set of all sets of two or more L-propositions
         P such that sit_act is a member of the intersection of all /P/+)
      c. [[∨]] = {P+ | sit_act ∈ ∪/P/+}
         (the extension of ∨ is the set of all sets of two or more L-propositions
         P such that sit_act is a member of the union of all /P/+)
The propositional constants ¬, ∧ and ∨ are presented as predicates over sets
of (well-anchored and well-keyed) L-propositions, each such L-proposition
having a corresponding valuation space. Technically speaking, the predi-
cates ¬, ∧, and ∨ are thus functions from (sets of) L-propositions to truth
values. This makes them a specific kind of predicate: not first-order predicates
over individual objects but higher-order predicates over (sets of) L-proposi-
tions.
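The predicate view of ¬, ∧ and ∨ in (2.6a–c) can likewise be sketched computationally. In this toy model each L-proposition is identified with its valuation space, and sit_act is simply a designated situation; all names are illustrative assumptions, not the book's notation:

```python
# Sketch of (2.6): the propositional operators as predicates that check where
# the actual situation sit_act lies relative to valuation spaces.
# Toy model: an L-proposition is identified with its valuation space (a set).

U = {1, 2, 3, 4}          # universe of situations
sit_act = 2               # the actual situation (an assumption of the model)

def NOT(P):               # (2.6a): sit_act lies in the complement of /P/
    return sit_act in U - P

def AND(*Ps):             # (2.6b): sit_act lies in the intersection of all /P/+
    return sit_act in set.intersection(*Ps)

def OR(*Ps):              # (2.6c): sit_act lies in the union of all /P/+
    return sit_act in set.union(*Ps)
```

Note that AND and OR take any number of spaces, mirroring the text's point that ∧ and ∨ apply to sets of two or more L-propositions.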
[tree diagram: a predicate AND over two S-constituents: one with predicate
sleep and subject NP Harry, the other with predicate NOT over an S with
predicate work and subject NP John]
FIGURE 2.11 L-proposition underlying Harry sleeps and John does not work
2.3.5.1 Russellian quantifiers So much for the predicate status of the prop-
ositional operators. But how about the quantifiers of predicate calculus? Here
one must realize that the notion of quantifier was subjected to considerable
refinement during the second half of the twentieth century. In the earlier
perspective, quantifiers were functions of a unique kind, introduced by a
special rule and provided with a special, not altogether transparent, model-
theoretic semantics. Sentences like (2.7a,b) were translated into the language
of logic as (2.8a,b), respectively, where ‘8’ stands for all and ‘∃’ for some.5 Let
us call the quantifiers as defined in terms of this system RUSSELLIAN QUANTIFIERS:
5 The symbol ∃, for the existential quantifier (just like the symbols ∩ and ∪ for set-theoretic
intersection and union, respectively, and the symbol ∈ for class membership, being the first letter of
the Greek esti 'is'), was introduced by Giuseppe Peano in the 1890s when he worked at his 'Formulario'
project, intended to introduce a unified notational system for the whole of mathematics and logic. The
symbol ∀, for the universal quantifier, originates with Gerhard Gentzen, who introduced it in his
'Untersuchungen über das logische Schliessen' (Mathematische Zeitschrift 39 (1934), p. 178) as a proper
counterpart to ∃. An earlier notation for 'for all x', used in Whitehead and Russell (1910–1913) and also
in Quine (1952), was '(x)'.
Having said this, we can define the Russellian universal and existential
quantifiers as unary higher-order predicates—that is, as unary predicates over
sets of objects—in the following way:
(2.9) For all sets X in OBJ:
      a. [[∀]] = {X | X = OBJ}
         (the extension of the predicate ∀ is the set of all sets that equal OBJ)
      b. [[∃]] = {X | X ≠ Ø}
         (the extension of the predicate ∃ is the set of all nonnull sets in OBJ)
Note that the format used for the specification of the logical predicates ∀ and
∃ in (2.9a,b), and for the propositional operators in (2.6a–c), is again that
used for ordinary lexical predicates as defined in Section 3.3.2 of Volume I.
As before, the condition specified after the upright bar is the satisfaction
condition of the predicate in question. Technically speaking, therefore, the
Russellian quantifiers are treated as functions from sets of objects to truth
values.
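Treating the Russellian quantifiers as unary predicates over sets, as in (2.9a,b), can be sketched as follows; the toy OBJ and the predicate extensions are my own assumptions for illustration:

```python
# Sketch of (2.9): Russellian quantifiers as unary predicates over sets,
# i.e. functions from sets of objects to truth values.

OBJ = {'a', 'b', 'c'}

def FORALL(X):            # (2.9a): true iff X equals OBJ
    return X == OBJ

def EXISTS(X):            # (2.9b): true iff X is nonnull
    return X != set()

# (2.8a)-style use: FORALL is applied to the set of objects satisfying
# Farmer(x) -> Grumble(x); EXISTS to those satisfying Farmer(x) & Grumble(x).
farmer = {'a', 'b'}
grumble = {'a', 'b', 'c'}
cond = {x for x in OBJ if x not in farmer or x in grumble}
```

In this model every farmer grumbles, so `cond` equals OBJ and FORALL applies, matching the analysis of (2.8a) given in the text.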
Consider, for example, (2.8a). An application of the truth table of the material
implication shows that when (2.8a) is true, then [[Farmer(x) → Grumble(x)]] =
OBJ, since the propositional function Farmer(x) → Grumble(x) yields truth
for any arbitrary object in OBJ. (2.8a) is falsified by any object in OBJ that does
satisfy Farmer(x) but not Grumble(x). But if (2.8a) is true, then any object in
OBJ either does not satisfy Farmer(x) or it satisfies both Farmer(x) and
Grumble(x). Analogously for (2.8b), which is true just in case [[Farmer(x) ∧
Grumble(x)]] ≠ Ø. The predicate status of the Russellian quantifiers is thus
saved by the propositional operators of standard logic.
2.3.5.2 Generalized quantifiers During the 1950s, some logicians discovered
that the Russellian quantifiers are not satisfactory as logical translations of
the natural language quantifying words all and some. Besides the unnatural-
ness of the logical translations of quantified natural language sentences—a
perennial source of bewilderment for beginning students—the main problem
with them is the fact that the method of rendering quantifiers with the help of
propositional operators cannot be extended to other quantifiers such as
MOST or HALF. This blocks a unified analysis of natural language quantifiers
in logical terms, which means an always unwelcome loss of generalization.
Since there is an alternative analysis of the quantifiers which restores the
generalization, it would seem that that analysis is preferable.
The alternative analysis was presented by Barwise and Cooper (1981), who,
falling back on Mostowski (1957), proposed to incorporate the L-propositional
function 'Farmer(x)' into the quantifier in the following way:
Logic: a new beginning 55
(2.10) a. ∀x[Farmer(x)](Grumble(x))
(for all objects x such that x is a farmer, x grumbles)
b. ∃x[Farmer(x)](Grumble(x))
(for at least one object x such that x is a farmer, x grumbles)
A semantics can now be provided in set-theoretic terms. As has been shown,
L-propositional functions of the form F(x), where F is a predicate and x
is a variable ranging over OBJ, are interpreted as expressions denoting sets.
The set denoted by the incorporated L-propositional function, in this
case Farmer(x), is called the RESTRICTOR SET and the set denoted by the
remaining propositional function, in this case Grumble(x), is called the
MATRIX SET. The corresponding expressions are called the RESTRICTOR TERM
and the MATRIX TERM. Now the universal quantifier ∀ can be said to require
for truth that the restrictor set be a subset of the matrix set, while the
existential quantifier ∃ can be said to require for truth that the restrictor set
and the matrix set share a nonnull intersection. This analysis is known as
the theory of GENERALIZED QUANTIFIERS, as it allows for a generalized treatment
of all natural language quantifiers.
The analysis provided by Barwise and Cooper can be simplified when one
realizes that the quantifiers 8 and ∃ express a relation between two sets, the
restrictor set and the matrix set. Therefore, the quantifiers are properly
regarded as two-place higher-order predicates—that is, as predicates over
pairs of sets, rather than over (pairs of) objects, as is the case with first-
order predicates. The notation of (2.10a,b) is now no longer desirable and
should be replaced by one in which the quantifiers are represented as pre-
dicates over two terms, each term denoting a set. This produces the following
translations in LL for the sentences (2.7a,b):6
(2.11) a. ∀x(Grumble(x), Farmer(x))
(the set of farmers is a subset of the set of grumblers)
b. ∃x(Grumble(x), Farmer(x))
(the set of farmers and the set of grumblers intersect)
6 I deviate from the convention of putting the restrictor term first and the matrix term second. My
reason for inverting this order is purely linguistic, not logical: it allows for the syntactic rule of OBJECT
INCORPORATION (OI), whereby the second (object) term is united with the predicate to form a complex
predicate as in, for example, ∀x[F(x)](G(x)), or 'for all x that are F: x is G', corresponding to (2.10a).
The rule OI has a strong position in universal grammar, whereas SUBJECT INCORPORATION is only weakly
supported (Seuren 1996: 300–9). See also note 3 in Chapter 4.
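The generalized-quantifier format of (2.11), with the matrix term first and the restrictor term second, can be sketched as two-place predicates over sets. The truth conditions given below for MOST and HALF are illustrative cardinality tests of my own, included only to show why the generalized format extends where the Russellian one does not:

```python
# Sketch: generalized quantifiers as two-place higher-order predicates,
# following the term order of (2.11): matrix set first, restrictor set second.

def ALL(matrix, restrictor):
    return restrictor <= matrix              # restrictor included in matrix

def SOME(matrix, restrictor):
    return bool(restrictor & matrix)         # nonnull intersection

def MOST(matrix, restrictor):                # assumed condition, not the book's
    return len(restrictor & matrix) > len(restrictor - matrix)

def HALF(matrix, restrictor):                # assumed condition, not the book's
    return 2 * len(restrictor & matrix) == len(restrictor)

farmers = {'f1', 'f2', 'f3', 'f4'}
grumblers = {'f1', 'f2', 'f3', 'x'}
```

MOST and HALF cannot be rendered with ∀/∃ plus propositional operators, but they fit the two-place format without strain, which is the generalization at issue.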
The quantifiers are indexed for the variable they bind. Thus, (2.12a) translates
as (2.12b):
(2.12) a. All farmers groom some horse.
      b. ∀x(∃y(Groom(x,y), Horse(y)), Farmer(x))
In general, sentence types like those given in (2.13a–j) are translated as follows:
(2.13) a. ALL F is G      ∀x(G(x), F(x))     f. SOME F is NOT-G      ∃x(¬G(x), F(x))
       b. SOME F is G     ∃x(G(x), F(x))     g. ALL NON-F is G       ∀x(G(x), ¬F(x))
       c. NOT ALL F is G  ¬∀x(G(x), F(x))    h. SOME NON-F is G      ∃x(G(x), ¬F(x))
       d. NO F is G       ¬∃x(G(x), F(x))    i. ALL NON-F is NOT-G   ∀x(¬G(x), ¬F(x))
       e. ALL F is NOT-G  ∀x(¬G(x), F(x))    j. SOME NON-F is NOT-G  ∃x(¬G(x), ¬F(x))
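The difference between external and internal negation in (2.13) can be made concrete in the same toy setting; the model and all names below are my own illustrative assumptions:

```python
# Sketch of (2.13): external negation (over the whole formula) versus
# internal negation (over the matrix term), evaluated in a toy model.

OBJ = {1, 2, 3, 4}
F = {1, 2}            # [[F]]
G = {1}               # [[G]]
notG = OBJ - G        # extension of the internally negated matrix term NOT-G

def ALL(matrix, restrictor):
    return restrictor <= matrix

def SOME(matrix, restrictor):
    return bool(restrictor & matrix)

# (2.13e) ALL F is NOT-G  versus  (2.13c) NOT ALL F is G:
internal = ALL(notG, F)       # False here: object 1 is an F that is G
external = not ALL(G, F)      # True here: not every F is G
```

That `internal` and `external` come apart in this model illustrates why the two negation sites must be kept distinct.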
This opens a new window on internal negation, discussed in Section 2.2. It
is clear from (2.13a–j) that the negation, ¬, can be used externally, to
negate the whole following L-proposition, but also internally, to negate the
propositional function in the subject or matrix term: the propositional
function ¬G(x) denotes the set of all objects that do not satisfy the predicate
G. The language of predicate calculus, and in particular the use of a variable
to help denote the extension of a predicate, thus allows for the (internal)
negation of any propositional function in any position in an L-propositional
structure.
Normally in predicate calculus, however, only the negated matrix term is
used, not the negated restrictor term, although both forms of negation
are allowed in the formal language. This is because the negated restrictor
term is considered logically less interesting, whereas the negated matrix or
subject term plays a major role in predicate logic, owing to the duality of the
two quantifiers, as explained in Section 2.2. In general, internal negation is
defined as the negation of the matrix term.
The universal quantifier ∀ can now be defined as assigning the property
that the restrictor set is a subset of the matrix set, and the existential quantifier
∃ as assigning the property that the two sets share a nonnull intersection.
We do not wish to claim that this is an exhaustive description of the linguistic
meaning of these two operators, nor even that this description, as far as
it goes, is a correct rendering of their purely logical meaning in the logic
of language. But it does capture the purely logical meaning of these two
quantifiers in standard modern predicate calculus (SMPC). A formal expres-
sion of these standard meanings in terms of generalized quantifiers is given in
(2.14a,b):
A similar analysis for HALF is easily given. This means that there now is a
unified category of quantifiers analysed according to one generalized system.
Hence the name.
Figure 2.12 shows the L-propositional tree structure for sentence (2.17):
[tree diagram: S1 with predicate SOME_x over S2 and S3; S2 consists of
predicate NOT over S4, where S4 has predicate grumble and NP x; S3 has
predicate farmer and NP x]
a sentence like All farmers grumble is equivalent with All the farmers
grumble, which is also how this sentence is rendered in a large number
of languages other than English, such as French. Somehow or other,
therefore, the universal quantifier ALL must be connected with the definite
determiner the. There are also interpretations where this is not so, as, for
example, with the universal any as in Any doctor will tell you that smoking
is bad. In this case, a formalization with the help of the implication operator
IF . . . THEN seems called for, just as in the Russellian paraphrase of the universal
quantifier.
The discourse-sensitive interpretation of ALL complicates matters consider-
ably. It means, for one thing, that universally quantified sentences require an
underlying form which differs in important respects from the logical form
postulated in our logical system, which does not reckon with discourse
factors. As long as it is pure logic that we are concerned with, this complica-
tion will be ignored and we will simply keep up the fiction that the universal
quantifier runs parallel with its existential counterpart, as is the practice
followed in all logical systems in existence. The difficulties and uncertainties
surrounding the discourse-sensitive quantifier ALL will be discussed in Chap-
ters 8 and 9 of the present volume.
Meanwhile, we pass on to a discussion of the logical systems of proposi-
tional and predicate calculus, looking in greater detail at the logical properties
of external and internal negation in relation to the propositional operators
and the quantifiers.
Take, for example, the following sentences (never mind the plural):
(2.21) a. Some flags are green.
b. Some flags are not green.
      c. No (= NOT-SOME) flags are green.
The negation of (2.21a) is not (2.21b) but (2.21c). This becomes clear when one
considers the logical analysis of the sentences concerned, given in (2.22a–c),
respectively:
(2.22) a. ∃x(Green(x), Flag(x))
b. ∃x(¬[Green(x)], Flag(x))
c. ¬[∃x(Green(x), Flag(x))]
The negation in (2.22c) is the EXTERNAL NEGATION, the negation in (2.22b) is
the INTERNAL NEGATION. (2.22b) translates as (2.23), where the negation may
be seen as having been unified with the predicate Green of the propositional
function it stands over (its scope) into one single complex predicate NOT-
Green:
(2.23) ∃x([NOT-Green](x), Flag(x))
It is widely assumed that, analogously, the external negation of (2.22c) has
been unified with the main predicate of the sentence that forms its scope,
leading to (2.24), with the complex predicate NOT-SOME, lexically realized as no
in English:
(2.24) [NOT-SOME x](Green(x), Flag(x))
Yet although most natural languages have a single lexical item corresponding
to English no and requiring for truth a null intersection of the two sets
involved, closer analysis shows that such quantifiers are probably not in-
stances of lexical unification of NOT with the quantifier SOME but are quanti-
fiers in their own right in a basic-natural system of predicate logic that differs
from the standard system (for a full analysis see Section 3.5).
Occasion sentences with a definite subject term have no equivalent of
the internal negation of (2.21b). This is because a definite term, such as the
x[Flag(x)], does not consist of a propositional function, although it does
contain one. The whole definite term cannot be negated: anything like ¬[the
x[Flag(x)]] is uninterpretable (semantically ill-formed). One may object that
sentences like (2.25a) or (2.25c), which contain the phrase not the flag, are
perfectly possible. Here, however, these phrases are not used as terms but as
predicates of topicalized sentences corresponding to ‘what is green is not the
flag (but . . . )’. In the logical analysis of such sentences it is useful to extend the
expression the x[Flag(x)] with an identifying copula verb be, as in (2.25d):
(2.25) a. It is not the flag that is green.
b. The FLAG is not green (but . . . )
c. Not the FLAG (but . . . ) is green.
d. ¬[[Be(the x[Flag(x)])](the x[Green(x)])]
In the logical analysis (2.25d) of topicalized sentences like (2.25a–c) the
definite subject term is the x[Green(x)] or ‘the thing that is green’, and the
predicate is [be(the x[Flag(x)])] or ‘be the flag’, unlike its nontopicalized
counterpart (2.18b), where the x[Flag(x)] is the logical subject term and
Green is the logical predicate. In nontopicalized sentences like (2.18a) there
is no way of ‘negating’ the definite term. Anything like Not the flag is green is
interpretable only as a topicalized sentence.
Yet, precisely because a definite term CONTAINS a propositional function,
some form of ‘internal’ negation is possible for the propositional
function inside the definite term. We have, for example, sentences like
(2.26a), analysed as (2.26b), or as (2.26c) with the negation over the proposi-
tional function Catholic(x) incorporated into the predicate (again, never
mind the plural):
(2.26) a. The noncatholics are angry.
b. Angry(the x[¬[Catholic(x)]])
c. Angry(the x[Noncatholic(x)])
Other than in occasion sentences with a definite subject term, the internal
negation typical for quantified sentences does negate a term, namely the
matrix term of the quantifying predicate, as in (2.22b). And this is possible
(semantically well-formed) because the terms of a quantifying predicate are
themselves propositional functions.
right-to-left:
If not ([[F]] ∩ [[Ḡ]] ≠ Ø), then [[F]] ∩ [[Ḡ]] = Ø. Hence [[F]] ⊆ [[G]],
i.e. ∀(G(x),F(x)).
(b) left-to-right:
If [[F]] ∩ [[G]] ≠ Ø, then not ([[F]] ⊆ [[Ḡ]]). Hence [[F]] ⊈ [[Ḡ]],
i.e. ¬[∀(¬[G(x)],F(x))].
right-to-left:
If [[F]] ⊈ [[Ḡ]], then not ([[F]] ∩ [[G]] = Ø). Hence [[F]] ∩ [[G]] ≠ Ø,
i.e. ∃(G(x),F(x)).
As has been said, the relation between the quantifiers 8 and ∃ established by the
Conversions is commonly expressed by saying that they are each other’s duals.
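Since the Conversions are claimed to hold for all choices of F and G, they can also be verified exhaustively over a small model. This is a brute-force sketch, and the helper names are mine:

```python
# Sketch: checking the Conversions (duality of the quantifiers) over every
# pair of subsets F, G of a small OBJ.

from itertools import chain, combinations

OBJ = frozenset({1, 2, 3})

def subsets(s):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(s), r) for r in range(len(s) + 1))]

def ALL(matrix, restrictor):
    return restrictor <= matrix

def SOME(matrix, restrictor):
    return bool(restrictor & matrix)

def check_conversions():
    for F in subsets(OBJ):
        for G in subsets(OBJ):
            notG = OBJ - G
            # ALL(G, F) iff not SOME(not-G, F)
            if ALL(G, F) != (not SOME(notG, F)):
                return False
            # SOME(G, F) iff not ALL(not-G, F)
            if SOME(G, F) != (not ALL(notG, F)):
                return False
    return True
```

The exhaustive check is only a sanity test over one finite model, of course; the set-theoretic derivation above is what establishes the duality in general.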
(b) left-to-right:
If sit_act ∈ /P/ ∪ /Q/ ∪ /R/, then sit_act ∉ /P̄/ ∩ /Q̄/ ∩ /R̄/,
i.e. ¬[¬P ∧ ¬Q ∧ ¬R].
right-to-left:
If sit_act ∉ /P̄/ ∩ /Q̄/ ∩ /R̄/, then sit_act ∈ /P/ ∪ /Q/ ∪ /R/,
i.e. P ∨ Q ∨ R.
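The De Morgan equivalence just derived can also be stated directly as an identity of valuation spaces; the toy universe and spaces below are my own:

```python
# Sketch: De Morgan over valuation spaces.
# /P OR Q OR R/ equals the complement of /NOT-P AND NOT-Q AND NOT-R/.

U = set(range(8))       # toy universe of situations
P = {0, 1, 2}
Q = {2, 3}
R = {5}

lhs = P | Q | R                               # /P ∨ Q ∨ R/
rhs = U - ((U - P) & (U - Q) & (U - R))       # /¬(¬P ∧ ¬Q ∧ ¬R)/
```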
All this is normal standard logic, though admittedly presented with a slant
towards relativizing standard logic and placing it in a wider cognitive and
linguistic perspective. The question is: why is this of interest? The following
chapters begin to answer that question.
3 Natural set theory and natural logic
1 For example, Ginsburg et al. (1984), Dehaene (1997), Butterworth (1999), Pica et al. (2004).
2 Pierre Pica, p.c. The fact that these speakers created new expressions to name numbers from
existing ones strongly suggests, if not proves, that it was not the availability of the lexical items that
enabled them to ‘think’ further along the number line but that it was in the first place their cognitive
development that required the new lexical expressions, which were then readily composed and which
probably helped them along in a secondary sense, in that the very availability of the expressions
enhanced performance.
Natural set theory and natural logic 69
numbers and, therefore, lack words for higher numbers. We also have a
relatively good picture of what it has taken successive Chinese, Indian, Arabic,
and Western civilizations to come to the highly sophisticated science of
arithmetic that we have today. From the point of view of arithmetical theory,
therefore, no great obstacles present themselves in the study of basic-natural
arithmetic.
As regards logic, however, the situation is different. The very task of
formulating restrictions on the logical powers of unsophisticated humans is
the opposite of trivial. It requires considerable technical and theoretical effort
and insight, and standard logical lore fails to provide the tools for doing so.
The foundations and basic notions of logic are far less clear and well under-
stood than those of arithmetic, and the relation with natural cognition and
language is still as problematic as ever. It is hardly surprising, therefore, that
before one can pass on to any experimental work, one is forced to start with
an identification and formulation of the restrictions involved.
The much greater conceptual difficulty of logic compared to arithmetic is
borne out by the fact that what looks like the most firmly embedded logical
intuitions of the human race appear to support a logic that depends for its
application on complete situational knowledge, so that more developed
forms of logic were required for use in situations where knowledge is not
complete—a development that might explain the well-known discrepancies
between natural logical intuitions on the one hand and the concepts and
terms of the first, largely Aristotelian, ‘official’ logic on the other. Such
discrepancies do not occur in the case of arithmetic.
One may also look at our hypothesized basic-natural logic in the light
of the psychological theory of PROTOTYPES (see also Sections 8.6 and 8.8 in
Volume I). In general, prototypes seem to be characterized by the fact that
they maximize common features and thus avoid extremes or limiting cases.
Thus, it is proposed in Section 3.2.2 below that the first principle of BNST
consists in not taking into account the so-called extreme values—that is, the
null set (Ø), singletons (sets consisting of precisely one element) and the
totality of objects (OBJ). Perhaps one may, therefore, just as well call BNPC by
the name of prototypical logic. The problem is, however, that so little is known
about the conditions that make for prototypes. Frequency won’t do as a
criterion, as is shown in Section 8.8 in Volume I. But what will do is simply
to a large extent still a mystery, to do with hard-to-define notions such as
‘normal’ or ‘obvious’. Standard modern logic, for its part, can then be seen as
the result of the exploration of the extreme cases: when these are taken into
account, the basic-natural notions have to be sharpened. When one looks at
the question from this angle, it becomes clear why one will have to distinguish
3 Steve Levinson tells me that there are languages which use the same word for 'many' and 'all' and
also languages which use the indefinite article or the numeral meaning 'one' for 'some', referring to
discussions in Wierzbicka (1996: 74–6, 193–7). What this evidence means remains to be seen. Modern
Greek, for example, has the one word polí for both 'very' and 'too', making a phrase like polí megálos
ambiguous between 'very big' and 'too big' (though megálos by itself can also mean 'too big', just as
English late also has the meaning 'too late'). Yet this does not mean at all that Greek speakers cannot or
do not distinguish between the concepts 'very' and 'too'. They clearly do and, when pressed, they use
parapolí for 'too', even though parapolí still means 'very much' or 'a whole lot', but it seems to get closer
to 'too' than simple polí.
4 The programme thus outlined in effect amounts to an attempt at replacing current Gricean
explanations for the disparity between logic and language in terms of generalized conversational
implicatures with an explanation based on natural set theory and the cognitive faculty of forming
mental propositions. If the objections raised in Section 1.3.3.2 of Volume I against attempted pragmatic
explanations along Gricean lines have any validity, this seems a worthwhile exercise.
denote functions from n-tuples of objects (or sets of objects) to truth values.
COMPLEMENT, INTERSECTION, UNION, and SUBTRACTION are set-theoretic functions,
but INCLUSION, for example, is a relation: for any given sets A and B, the binary
relation of INCLUSION, as in A ⊆ B, is either true or false. By contrast, the set
functions Ā (complement), A ∩ B (intersection), A ∪ B (union), or A–B
(subtraction) do not have a truth value. Given the proper number of arbitrary
sets (one for complement; more than one for intersection and union; exactly
two for subtraction) they denote a new set defined by the Boolean functions
complement, intersection, union, and subtraction, respectively. Complement
is unique in that it involves the nonarbitrary set OBJ (the totality of all
objects) as part of its definition. It can be described as a special case of
subtraction, with OBJ–A as output for any set A. In logic, a further use of
the term complement is to denote a relation (‘be in complement with’)
yielding truth for two sets A and B just in case A ∪ B = OBJ and A ∩ B = Ø.
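The relation/function contrast drawn here can be made concrete in code: relations yield truth values, functions yield sets. The toy OBJ and the names are my own:

```python
# Sketch: set-theoretic RELATIONS return truth values, set-theoretic
# FUNCTIONS return sets. OBJ is a toy totality of objects.

OBJ = {1, 2, 3, 4}

# Relations: truth-valued.
def included(A, B):                  # A ⊆ B
    return A <= B

def in_complement_with(A, B):        # A ∪ B = OBJ and A ∩ B = Ø
    return A | B == OBJ and not (A & B)

# Functions: set-valued.
def complement(A):                   # OBJ − A: a special case of subtraction
    return OBJ - A

def subtraction(A, B):               # A − B
    return A - B
```

Note how `complement` is literally `subtraction` with OBJ as fixed first argument, matching the remark that complement is a special case of subtraction.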
The set-theoretic relations have a twofold use in the reduction of logic to set
theory. First, they correspond to (meta)logical relations expressed in terms of
valuation space (VS) analysis. For example, the inclusion relation translates
into a possible metalogical statement that, say, the set of situations /P/ is
included in the set of situations /Q/—that is, P entails Q (P ‘ () Q), which
is true or false depending on the meanings of P and Q, including the mean-
ings of any truth-functional propositional operators they may contain. Sec-
ondly, some set-theoretic relations correspond to quantifiers. For example, ALL
F is G translates, in principle, as saying that the set denoted by F is included
in the set denoted by G.
By contrast, the set-theoretic functions correspond to the propositional
logical constants of the object language LL as realized in any particular natural
language. Just as the set-theoretic functions take sets and deliver sets, the
propositional functions take valuation spaces and deliver valuation spaces.
For example, the operator AND in an L-proposition of the form P AND Q
delivers /P AND Q/—that is, the set of those situations that make P AND Q true,
corresponding to /P/ ∩ /Q/, the intersection of /P/ and /Q/. Figure 3.1 shows
how set-theoretic relations and functions are interpreted onto metalogical
relations, object-language quantifiers, and propositional operators.
The counterpart in metalogic of the set-theoretic relation MUTUAL PARTIAL
INTERSECTION (henceforth M-PARTIAL INTERSECTION, symbolized as A OO B: the
two sets A and B partially intersect each other and do not severally or jointly
equal either U or Ø, as in Figure 3.3b) is logico-semantic independence, which
plays no part in the machinery of logic: when /P/ and /Q/ M-partially
[table pairing metalogical relations with set-theoretic counterparts; only the
pair 'equivalence'/'identity' is legible]
FIGURE 3.1 The reduction of metalogical relations and object-language operators to set-
theoretic relations and functions
This means that, in NST, the null set is not a set at all: the cognitive
counterpart of Ø is the absence of any set, something which is cognitively
real and may be called 'null' but cannot play the role of a set. Whereas 'null'
still functions cognitively as ‘absence of a set’, the opposite notion of OBJ, as
known in standard modern set theory, is typically the product of advanced
mathematical and/or philosophical thinking and has no place in natural set
theory. It is too nondescript to be cognitively real to formally untrained
minds. What does seem to play a role is the notion of RESTRICTED UNIVERSE OF
OBJECTS or OBJR, involving the totality of all objects within a contextually
defined universe of discourse. Therefore, all standard set-theoretic definitions
involving OBJ should, for natural set theory, be redefined as involving the
notion of OBJR, which does count as a natural set. Incidentally, this strategy is
chosen also by many mathematically-minded logicians, especially the earlier
ones. Thus we read (De Morgan 1847: 37–8):
But the contraries of common language usually embrace, not the whole universe, but
some one general idea. Thus, of men, Briton and alien are contraries: every man must
be one of the two, no man can be both. Not-Briton and alien are identical names, and
so are Not-alien and Briton. The same may be said of integer and fraction among
numbers, peer and commoner among subjects of the realm, male and female among
animals, and so on. In order to express this, let us say that the whole idea under
consideration is the universe (meaning merely the whole of which we are considering
parts) and let names which have nothing in common, but which between them
contain the whole idea under consideration, be called contraries in, or with respect
to, that universe.
Given these assumptions, we now posit the first principle of natural set
theory, PNST–1, which applies to single sets:
PNST–1: Ø, OBJ, AND SINGLETONS ARE NOT NATURAL SETS
Sets are never cognitively represented as having an EXTREME VALUE—that
is, as the null set (Ø) or as the totality of objects (OBJ). Nor are they
represented as containing just one element.
Sets that are neither Ø nor OBJ nor a singleton are called 'natural sets'.
PNST–1 expresses the fact that NST is a theory of plural objects.
In the absence of any experimental data, and hence of any precise scale
of naturalness, we posit hypothetically that PNST–1 is both basic-natural and
strict-natural in that it strongly resists intellectual construction—perhaps
to different degrees for Ø, OBJ, and for singletons. Only at a much more
advanced level will the cognitive powers of the human race be able to override
PNST–1.
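PNST–1 amounts to a simple three-part test on a set. A minimal sketch, in which the predicate name and the toy OBJ are my own:

```python
# Sketch of PNST-1: a set counts as 'natural' only if it is not an extreme
# value: not the null set, not the totality OBJ, and not a singleton.

OBJ = {'a', 'b', 'c', 'd'}      # toy (restricted) totality of objects

def is_natural_set(A):
    return A != set() and A != OBJ and len(A) != 1
```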
5 One notes that this is reflected in the lexicon in that the prefix sub- requires proper inclusion, not
identity: a subcontinent is part of a continent, a subsection is part of a section, and so on.
(3.5) a. ·BN is interpreted as basic-natural intersection ∩BN.
      b. +BN is interpreted as basic-natural union ∪BN.
      c. X̄BN is interpreted as restricted complement X̄R.
      d. –BN is interpreted as basic-natural subtraction –BN.
This, in its turn, is interpretable onto a logical system when the logical
constants are defined in set-theoretic terms. ALL, SOME, and NO are defined in
(3.7a–c), where A stands for ALL F is G, I for SOME F is G (SOME F is NOT-G),
and N for NO F is G. For internal or 'subsentential' negation, as in ALL/
SOME/NO F is NOT-G, [[G]] is to be replaced with [[Ḡ]]R, the restricted
complement of [[G]] in any OBJR. The external negations of sentences with
ALL, SOME and NO are defined in (3.7d–f). One notes that (3.7c,e) show that ¬I
does not equal N, since the conditions of N exclude [[F]] ⊂ [[G]], whereas those
of ¬I allow for [[F]] ⊂ [[G]] (namely, when sit_act ∈ /A/). Basic-natural proper
inclusion (⊂) is defined, in terms of NST, as in (3.6).
(3.6) A ⊂ B is true iff there is a natural set C such that B –BN A =v C.
(3.7) a. ALL F is G (A) true iff [[F]] ⊂ [[G]]
      b. SOME F is G (I) true iff [[F]] ∩ [[G]] ≠v Ø ≠v [[F]] ≠v [[G]]; [[F]], [[G]]
         ≠v OBJ or: [[F]] OO [[G]]
      c. NO F is G (N) true iff there is no set H such that H =v [[F]] ∩BN [[G]]
      d. ¬A true iff sit_act ∈ /Ā/R in UR: sit_act ∈ /I/ or /N/
      e. ¬I true iff sit_act ∈ /Ī/R in UR: sit_act ∈ /A/ or /N/
      f. ¬N true iff sit_act ∈ /N̄/R in UR: sit_act ∈ /A/ or /I/.
The formal sketch given in (3.4) and (3.5) requires some comment, first as
regards the question of psychological plausibility. Most will agree that it
would be utterly unrealistic to assume that natural speakers have a fully
elaborated interpretation of set theory onto a system of valuation spaces at
their disposal when interpreting utterances. Yet it is not at all unrealistic to
assume that they have a vague, intuitive idea, probably beyond the threshold
of possible awareness, of ‘truth in an overall, possibly infinite, set of situa-
tions’, and hence of notions like necessary consequence, (in)compatibility,
and contradiction. It does not seem to matter, for natural cognition, whether
a set is finite or, technically speaking, infinite, as the notion ‘very large’
appears to cover both ‘infinite’ and ‘very large but finite’. One may perhaps
even speculate that the formal precision of set-theoretic notions in natural
cognition is commensurate with their closeness to the psychological ‘ego’.
Infinite and other very large sets would thus become increasingly ‘misty’ to
the mind as they are considered from a greater distance.
Correspondingly, one may assume that notions like mutual exclusion,
M-partial intersection and proper inclusion of sets of situations are likely to
be psychologically real, these notions being defined without any appeal to the
extreme boundaries of the set-theoretic system, namely the null set and the
totality of all objects OBJ. Since mutual exclusion, M-partial intersection and
proper inclusion are the key notions in both NST and its application to
natural logic, it seems reasonable to assume psychological reality for both
NST and natural logic.
From a more formal point of view, we start our comment with restricted complement as a function and as a relation between a set A and its complement ĀR. Neither the function nor the relation is current in standard set theory, yet both are of central importance to the study of natural cognition and natural language. The function, written as ĀR, is defined in (3.8a): it takes OBJR as given in any situation and any set A as being properly included in OBJR (A ⊂ OBJR), and it delivers OBJR − A. The corresponding relation RC between a set A and its restricted complement B within OBJR is defined in (3.8b):

(3.8) a. ĀR =def the set B such that B =v OBJR −BN A.
      b. RESTRICTED COMPLEMENT: RC(B, A, OBJR) iff B =v OBJR −BN A.

Thus, in Figure 3.2, ĀR equals OBJR −BN A (but remember that multiple applications of this function are excluded in virtue of PNST–6). The relation between A and ĀR (horizontal lines) corresponds to natural contradictoriness (that is, within the restricted complement); that between A and Ā (vertical lines) to the standard metalogical relation of that name.
FIGURE 3.2 The relation between the natural set A, its restricted complement ĀR, and its standard complement Ā
82 The Logic of Language
[Diagram panels a–d, showing relations between sets A and B within OBJR, not reproduced.]
FIGURE 3.4 Basic naturalness has no full union for not totally distinct A and B
6 Except, of course, when the negation is copied for the functional purpose of reinforcement, as in the Cockney sentence ’E’s never been no good to no woman, not never.
7 Or, as Larry Horn joked (Horn 1991: 98): ‘If Duplex Negatio Affirmat, we would predict that Triplex Negatio Negat. […] But […] the geometric effect of the three negations is to motivate all too often the more appropriate slogan Triplex Negatio Confundit.’
is not a natural set. Likewise, the equally violently counterintuitive notion that a necessary truth S is entailed by any proposition (‘verum per se ex quolibet’) has been eliminated, because /S/ =v U, and U is not a natural set. The concept of naturalness introduced here further restricts the entailment relation in that identity of /P/ and /Q/ is now also excluded, which rules out the counterintuitive notion of self-entailment.
Entailments following from the theorem (‘inference rule’) of ADDITION have
now also been eliminated. Addition, one recalls from Section 1.2.2, is the
theorem saying that any L-proposition Q can be extended with ‘ ∨ R’ for any
arbitrary R. It seems clear that this theorem, though standard, should be
qualified as nonnatural, since natural speakers will not agree that, for exam-
ple, Joe is dead entails Joe is dead or today is Sunday. NST eliminates this
entailment. In standard terms, /Q ∨ R/ = /Q/ ∪ /R/ and, because /Q/ ⊂ (/Q/ ∪ /R/), one must accept that Q ⊢ Q ∨ R for any arbitrary Q and R. NST helps
out, because in cases where Q and R are logically independent, so that /Q/
and /R/ M-partially intersect, basic-natural union excludes by definition
those situations where both Q and R are true. Therefore, the entailment
schema or inference rule of addition breaks down for exclusive OR.
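The breakdown of Addition under basic-natural (exclusive) union can be checked mechanically. In the following sketch, situations are encoded as sets of the atomic L-propositions true in them; this encoding is an illustrative assumption, not the book's notation.

```python
# Sketch: valuation spaces for two logically independent L-propositions Q, R.
from itertools import chain, combinations

atoms = ["Q", "R"]
# All situations: every combination of truth values for the atoms.
U = [frozenset(s) for s in chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1))]

vs_Q = {s for s in U if "Q" in s}
vs_R = {s for s in U if "R" in s}

vs_or_inclusive = vs_Q | vs_R   # standard union: /Q OR R/ = /Q/ with /R/
vs_or_exclusive = vs_Q ^ vs_R   # basic-natural union: both-true situations excluded

print(vs_Q <= vs_or_inclusive)  # True: Q entails Q OR R, so Addition holds
print(vs_Q <= vs_or_exclusive)  # False: Addition breaks down for exclusive OR
```

The situations where Q and R are both true belong to /Q/ but are excluded from the basic-natural union, which is exactly why the entailment fails.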
A further ground for the elimination of addition as an inference rule lies in
the definition of entailment given in Section 1.2.1, which requires not only that
truth be preserved but also that this be determined by the specific linguistic
meaning of the entailing L-proposition. This latter condition is not satisfied
in cases of addition, as there is nothing in the meaning of any arbitrary Q that
causes truth to be preserved for Q OR R, R being equally arbitrary. Not so for
the inference rule known as SIMPLIFICATION, which says that P AND Q entails
both P and Q, since here it is the meaning of AND that causes the entailments.
Nothing much thus remains of addition.
All this taken together removes a great deal of counterintuitive excess
baggage and, in fact, restricts the entailment relation to semantically moti-
vated entailment, precisely as is wanted. Since logic has no term for the
entailment relation as restricted by NST and by the stipulation that entailment
is meaning-driven, the term NATURAL ENTAILMENT was suggested in Section 1.2.1.
It would seem that this reduction of entailment to natural entailment—that
is, to the set-theoretic relation of proper inclusion as restricted under NST and
supported by meaning—properly delimits the class of entailments felt to be
natural by native speakers, and hence empirically observable or measurable as
psychologically valid data.
Contradiction has been slimmed down to a contextually restricted UR,
created by presuppositional restrictions on the admissible situations in any
discourse at hand. Contrariety is the only relation that can stand unmodified,
apart from the restriction of the VSs at issue to natural sets of admissible
situations. Finally, subcontrariety has disappeared from the basic-natural
system, as it involves the relation of full union, which has been ruled out.
Yet it reappears in the strict-natural system of metalogical relations, though
not without some considerable cognitive effort.8
It is now clear why in Figure 3.5, adapted from Figure 2.6, (a) is preferred
to (b). (Figure 2.6b, the improved Boethian square in two guises, is no longer
in competition since, in either form, the two component triangles are not
isomorphic.) Figure 3.5a consists of two logically isomorphic triangles with
the strict- (not basic-) natural relations entailment, contrariety, and contra-
dictoriness, subcontrariety being a ‘bonus’ due to the duality of the logical
constants defining P and Q, as defined in (3.12) (=(2.2) of Section 2.2).
(Properly speaking, we should have a special negation sign for the comple-
ment within a restricted UR, but we leave this detail till Chapter 10, where
presuppositional logic is discussed. Note also that equivalence (≡) is allowed here, owing to the overriding in Aristotelian-Boethian predicate logic of the basic-natural restriction disallowing identity.)
(3.12) P ≡ ¬Q* and consequently ¬P ≡ Q*
       Q ≡ ¬P* and consequently ¬Q ≡ P*
By contrast, the two triangles of Figure 3.5b, though likewise isomorphic
and made up of three metalogical relations, have the less natural relation of
subcontrariety as a constitutive relation for the two triangles and contrariety
is the ‘bonus’ thrown in owing to duality. Thus, where Figure 3.5a has
contrariety, Figure 3.5b has subcontrariety and vice versa. This is why Figure
3.5a is considered (strict) natural, as against Figure 3.5b which does not fit into
any natural system.
[Diagrams omitted: two pairs of logically isomorphic triangles. Legend: ≡ equivalents; C contraries; CD contradictories; SC subcontraries; → entails.]
FIGURE 3.5 (a) the natural and (b) the nonnatural isomorphic square
8 Aristotle, with all his logical acumen, failed to identify it as a logical relation (see Section 5.3): it was developed by his commentators. And beginning logic students, who still have to rely on their natural intuitions, tend to find subcontrariety very hard to grasp, as logic teachers know well.
generally assumed and widely taught that modern logic overcomes this
restriction and allows for quantification over any argument term in a sen-
tence. It should be noted, however, that this superior expressivity of modern
logic is due not to its logical properties but solely to the formal language in
which its expressions are couched. Since we use, or anyway can use, the same
formal language, with quantifiers, variables and all, for all the different
predicate logics concerned, they all have equal expressive power, as they can
all express quantification over any argument term in a sentence, embedding
one quantifier in the scope of another. Therefore, the fact that we restrict our
analyses to the monadic subject–predicate distinction is immaterial. We do
this only to keep the exposé within reasonable bounds of size and complexity.
sentence like Some flags are green is true when the set of flags M-partially intersects with the set of green things, as is the case in the actual world. And the relation of M-partial intersection (OO) goes well with the notion of natural set theory: it is one of the four basic-natural relations between sets, as shown in Figure 3.3. One would thus expect SOME F is G to be true just in case [[F]] OO [[G]]. But what we find in natural language is that SOME F is G is true also when [[G]] ⊂ [[F]]. Consider sentences (3.14a–d), which are obviously true, while [[G]] is properly included in [[F]]:
(3.14) a. Some children are orphans.
b. Some people are Englishmen.
c. Some computers are laptops.
d. Some heavenly bodies are planets.
Moreover, the converses of these sentences raise eyebrows, in that they give
rise to the question of whether one should conclude that some orphans
are not children, or some Englishmen are not people, and likewise for
(3.15c) and (3.15d):
(3.15) a. Some orphans are children.
b. Some Englishmen are people.
c. Some laptops are computers.
d. Some planets are heavenly bodies.
It is thus clear that, for natural speakers, IBN ≡ I*BN, just as in Hamilton’s logic which is discussed below.9
The problem is thus that, unlike in the remaining three systems, the existential quantifier in BNPC is non-symmetrical, whereas there is no motivated way in which this lack of symmetry can be said to follow from a
natural set theory. We may, of course, try to bend NST in such a way that the
BNPC existential quantifier follows directly from it, but one has to fear that
this will not bear experimental testing: the chances of such a natural set theory
being empirically adequate must be deemed minimal. The problem is the
more serious because the intuitions that come with the sentences in (3.14) and
(3.15) are robust and beyond reasonable doubt. In fact, the problem requires
an entire rethinking of quantification theory.
To this end, we distinguish between a COGNITIVE and a SET-THEORETIC ap-
proach to quantification. The latter requires just a (basic-)natural set theory.
The former requires a cognitive theory of how cognition deals with plural
9 Blanché (1966) uses the type name Y for IBN (≡ I*BN). I do not follow him in this respect, as I do not want to make the formalism heavier than it need be.
10 Interestingly, Aristotle starts in the cognitive mood, as appears from his term en mérei (in part) (and the Latin translation particularis) for existential quantification, and, as regards universal quantification, when he says (Int 20a9–15; cf. also Int 17b-12):
For the word every does not make the subject universal but the whole proposition. … So that the words every or no add nothing else to the meaning than that, whether affirmatively or negatively, the subject is to be taken as a whole.
Yet his syllogistic, set out in the Prior Analytics, is based on the device of letting a predicate (the Middle Term) occur as a subject. In his syllogistic, therefore, Aristotle follows the set-theoretic approach, in which both the restrictor and the matrix term represent sets.
11 In fact, it makes sense to regard the radical (presupposition-cancelling) negation, discussed in Chapter 10, as having grammaticalized only in part. It has to be constructed, in surface structure, with the finite verb of the main clause and is not allowed in any other, ‘noncanonical’ position. Moreover, its semantics (echo-effect) shows that it takes a quoted L-proposition as its argument. It would seem that this perspective on the radical negation deserves further reflection.
The ‘partial’ in VERUM IN PARTE is to be interpreted as saying that some but not
all flags are green, in accordance with the basic-natural set-theoretic notion
that sees class inclusion as proper class inclusion.
Clearly, this creates room for truth values between just true and false,
because VIP can be specified percentagewise: we can say that it is precisely
70% true that the flags are green, so that, given ten flags, exactly seven flags
must be green for truth to be attained. This aspect of the theory is not
elaborated here, but it is important to realize that the possibility of
an intervalent logic arises inter alia when predicates are assigned to plural
subject terms.
The three meta-operators VIT, VIP, and VIN can be transferred to the object language, where they can be symbolized as ∀BN (corresponding to the L-propositional type ABN), ∃BN (corresponding to type IBN), and NBN (corresponding to type NBN), respectively. The truth conditions for the types ABN, IBN and NBN are as follows:
(a) ∀BN [the Fs are G] is true iff all members of [[F]] are G
(b) ∃BN [the Fs are G] is true iff some but not all members of [[F]] are G
(c) NBN [the Fs are G] is true iff no member of [[F]] is G
FIGURE 3.6 The four possible situation classes in the cognitive approach
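The truth conditions (a)–(c) above, together with the percentagewise refinement of VIP, admit of a simple executable paraphrase; the function names and the finite encoding below are illustrative assumptions, not the book's formalism.

```python
# Sketch of the truth conditions for the types A_BN, I_BN, N_BN over a
# finite plural subject term (names and encoding are assumptions).
def a_bn(F, G):
    # VIT ~ A_BN: all members of [[F]] are G
    return bool(F) and F <= G

def i_bn(F, G):
    # VIP ~ I_BN: some but not all members of [[F]] are G
    return bool(F & G) and not F <= G

def n_bn(F, G):
    # VIN ~ N_BN: no member of [[F]] is G
    return bool(F) and not (F & G)

def degree_true(F, G):
    # the percentagewise refinement of VIP: 7 of 10 green flags -> 0.7
    return len(F & G) / len(F)

flags = {f"flag{i}" for i in range(10)}
green = {f"flag{i}" for i in range(7)}
print(i_bn(flags, green))          # True: some but not all flags are green
print(degree_true(flags, green))   # 0.7
```

With exactly seven of ten flags green, VIP holds and the graded value is 70%, matching the intervalent reading described in the text.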
[Diagrams omitted. Recoverable VS assignments over spaces {1, 2, 3}: A {1}, I {2}, N {3}, A* {3}, I* {2}, N* {1}; external negations are complements within {1, 2, 3}.]
FIGURE 3.7 The square, VS-model, and dodecagon of BNPC with space 4 idle
[Diagrams omitted. Recoverable VS assignments over spaces {1, 2, 3, 4}: A {1}, I {2}, N {3, 4}, A* {3}, I* {2}, N* {1, 4}; external negations are complements within {1, 2, 3, 4}.]
FIGURE 3.8 The square, VS-model, and dodecagon of BNPC with space 4 operative
FIGURE 3.9 The square, the VS-model, and the octagon for AAPC
12 Even so, ABPC appears to be the optimal strict-natural system of predicate logic. As is shown in Chapter 10, ABPC can remain in full force provided it is extended with a presuppositional component and falsity is split up into presupposition-preserving minimal falsity (F1) and presupposition-cancelling radical falsity (F2).
FIGURE 3.10 The square, the VS-model, and the complete octagonal graph of ABPC
FIGURE 3.11 The VS-model of SMPC and the poor remnants of its octagonal graph
These features are distributed as follows over the four predicate logics consid-
ered, whereby one notes that ABPC simply results from AAPC if space 4 is
made inoperative. SMPC lacks all the features:
BNPC 1, 2, 3, 4
AAPC 3
ABPC 4
SMPC --
Does this now mean that the unexpectedly rich, powerful, and sound
predicate logic of BNPC carries the day? Not quite yet, because there still
are a few empirical problems, one of them consisting in the fact that BNPC
fails to account for the strong natural intuition, observed by many authors
(notably Jespersen 1917: 86–91), that makes one feel that ¬A is equivalent with
both I and I*, which means that in the logical system we would like to see an
equivalence relation between ¬A, I and I*. But BNPC fails to oblige, as is easily checked in Figure 3.7b,c. The entailments from I and I* to ¬A hold, but natural
intuition requires a stronger relation. In this respect, ABPC fares somewhat
better, since, in ABPC, ¬A and I* are equivalent, though ¬A and I are mere
subcontraries.
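These claims about ¬A, I, and I* can be verified against the valuation spaces of Figure 3.7. The following sketch classifies any pair of operators by their VSs over the three admissible situation classes, with space 4 idle; the encoding and the relation names are mine, used purely for checking.

```python
# Sketch: metalogical relations computed from VSs (Figure 3.7, space 4 idle).
U = {1, 2, 3}
VS = {"A": {1}, "I": {2}, "N": {3}, "A*": {3}, "I*": {2}, "N*": {1}}
for op in list(VS):
    VS["~" + op] = U - VS[op]   # external negation: complement within U

def relation(x, y):
    X, Y = VS[x], VS[y]
    if X == Y:
        return "equivalent"
    if X < Y:
        return "entails"
    if not (X & Y):
        return "contradictory" if X | Y == U else "contrary"
    return "subcontrary" if X | Y == U else "independent"

print(relation("I", "~A"))    # entails, not the equivalence intuition demands
print(relation("I*", "~A"))   # entails
print(relation("A", "N*"))    # equivalent: both true in space 1 only
```

The check confirms that I and I* merely entail ¬A in BNPC, whereas A and N* come out equivalent within these confines.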
The pragmaticists tackle the problem by an appeal to the Gricean maxims,
reinforced by Horn’s theory of scalarity (Horn 1972, 1989), in virtue of which
the negation, when applied to a quantifiable scale, cuts off only the higher
part of the scale but leaves the remainder intact. Or, as Jespersen put it (1917:
86), ’in negativing an A [ALL F is G; PAMS] it is the absolute element of A that
is negatived’. But this answer, widely subscribed to in pragmatic circles, seems
to say that N is true. Given that this world has no mermaids, it is not hard
to agree that There are no mermaids living in London must be considered
true. But then the corresponding N* sentence There are no mermaids
(that are) not living in London should also be true, because if there are no
mermaids, there will be no mermaid among those entities that are living in
London nor among those that are not. The disturbing fact is, however, that
while There are no mermaids living in London is considered true, There are no
mermaids (that are) not living in London is false for natural intuition and,
in fact, felt to be equivalent with All mermaids live in London. This latter
intuition is accounted for, since, as is shown in Figure 3.7b, both A and N* are
true only in space 1 and false in spaces 2 and 3, which makes them equivalent
within the confines of a model where the condition [[F]] ≠ Ø is left out of account.
But in order to see if BNPC is tenable as a logical system, we must go beyond those confines and take the fourth space into account for cases where [[F]] = Ø. This has been done in Figure 3.8b, where truth has been assigned to N in space 4. But then truth must also be assigned to N* on pain of making the semantics of the operator NO inconsistent. It thus follows that if we turn BNPC into a fully fledged logical system that also caters for cases where [[F]] = Ø, as in Figure 3.8, a gross unnaturalness appears, because truth for N* makes N* clash with natural intuitions. This is, however, not a problem for the logical system but only for the claim that BNPC reflects natural logical intuitions. Therefore, if this claim is to be upheld, it is essential that space 4 for cases where [[F]] = Ø should not be considered to be part of it, owing to PNST–1, which declares Ø not to be a set.
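The effect of making space 4 operative can likewise be checked mechanically. In this sketch (encoding mine), admitting the class of situations where [[F]] = Ø to the valuation spaces of N and N*, as in Figure 3.8, destroys the equivalence of A and N*.

```python
# Sketch: what admitting space 4 ([[F]] empty) does to the VSs of Figure 3.7.
def valuation_spaces(space4_operative):
    VS = {"A": {1}, "I": {2}, "N": {3}, "A*": {3}, "I*": {2}, "N*": {1}}
    if space4_operative:
        VS["N"] = VS["N"] | {4}     # NO F is G is true when [[F]] is empty
        VS["N*"] = VS["N*"] | {4}   # ... and so is NO F is NOT-G
    return VS

idle, operative = valuation_spaces(False), valuation_spaces(True)
print(idle["A"] == idle["N*"])           # True: A and N* equivalent
print(operative["A"] == operative["N*"]) # False: the equivalence is lost
print(operative["A"] < operative["N*"])  # True: only one-way entailment remains
```

This is the gross unnaturalness noted above: with space 4 operative, N* becomes true in situations where natural intuition firmly judges it false.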
Finally, BNPC is still subject to the predicament that it requires
complete knowledge of the verification domain before one is entitled to say
that existentially quantified sentences are true or false. This is illustrated as
follows. Suppose Joe is checking if all the 45 doors in the building are
properly locked. He has come to number 15 and so far all has been well. We
feel that, as soon as he has found that at least one (or two) doors are
properly locked, he ought to be able to say in truth that (at least) some
doors are properly locked. But BNPC does not allow him to do so, because
Some doors are properly locked entails that not all doors are. And Joe cannot
vouch for that entailment. In fact, he must wait till he has checked all
doors before he can say either that some doors are properly locked or
that all are. All he can say after finding that one or more doors are properly
locked is that it is not so that no door is properly locked, or: ‘not (no door is
properly locked)’, which entails that at least some and perhaps all doors are
properly locked.13
This fact is of great epistemological importance. A- and N-statements already require full knowledge of the domain before one is entitled to claim their truth. These entitlements are thus restricted to finite and practically surveyable domains.
cally surveyable domains. But most domains are infinite or in any case not
practically surveyable and yet we profusely help ourselves to positive and
negative universal statements about them. Strictly speaking, we cannot vouch
for the truth of such statements, yet we venture them, relying on our induc-
tive powers of generalization and thereby taking the risk of falsification. And
this is precisely why they are so useful: almost all of what we consider to be
our knowledge is inductive knowledge, which has come to be established
on the strength of systematic lack of falsification and of the ‘sense’ they
make in terms of larger systems. But this lends equal importance to I- and I*-statements as used in traditional and standard modern logic, because these statements have the power of falsification. And for this it is necessary that one be entitled to claim their truth without full knowledge of the domain, merely
on the strength of an observation made. If I and I* are added to the list of
sentence types that require full knowledge of the domain before one can
vouch for their truth, this instrument to express a falsification is taken away.
In this sense, BNPC is an obstacle to the expansion of inductive knowledge.
This conclusion is to some extent disconcerting. One might propose that natural language some is ambiguous between ‘some perhaps all’ (= NOT NO) and ‘some but not all’. But the rules of good methodology make one reluctant
to do that. There is, thus, the basic intuition that some implies or entails ‘not
all’, but further reflection, manifesting itself at the strict-natural level, makes
one see that some ought not to be taken that way. If this were cognitive reality,
it would put a brake on the extension of inductive knowledge and thus on
intellectual development as a whole. Aristotle made the world see that,
on reflection, one must concede that A-sentences entail the corresponding
I-sentences, thereby removing the main blemish in any predicate logic con-
strained by the principles of basic-natural set theory. Needless to say, this
13 Horn (1989: 219) shows that De Morgan was well aware of this epistemological dilemma. Horn quotes Hamilton (1858: 121):
There are three ways in which one extent may be related to another […]: they are, complete inclusion, partial inclusion with partial exclusion, and complete exclusion. This trichotomy would have ruled the forms of logic, if human knowledge had been more definite. […] As it is, we know well the grounds on which predication is not a trichotomy, but two separate dichotomies. […] Must be, may be, cannot be, are the great distinctions of ontology: necessity, contingency, impossibility. This was clearly seen by the logicians. But it was not so clearly seen that this mode of predication tallies, not with the four ordinary forms A, E, I, O, but with the three forms A, (OI), E.
14 Part of Hamilton’s claim to fame rests on the drawn-out public polemic between him and the London logician Augustus De Morgan, which was as famous as it was fierce and even made them take each other to the courts of justice. The dispute centred upon Hamilton’s predicate logic, which was not to De Morgan’s liking. See the Appendix in De Morgan (1847), which contains much of their acrimonious correspondence.
because Hamilton lost his war with De Morgan, and perhaps also because of
the unique prestige of modern logic, Hamilton’s system of ‘quantification
of the predicate’, as he called it, has largely been forgotten outside circles of
historians of logic.15 Yet, given the undoubted intuitive appeal of at least some
of Hamilton’s logical notions, the Hamiltonian tradition in logic deserves a
closer look. The more so because the Danish linguist Otto Jespersen (1860–
1943), who possessed a finely tuned intuition as regards linguistic matters but
had no logical knowledge (he had probably never heard of Hamilton), came
up with a system of predicate logic that resembles the Hamiltonian system in every respect except for Hamilton’s ‘quantification of the predicate’ (Jespersen 1917: 85–92).
A central feature of Hamilton’s logic (see also Cavaliere 2007) is his
insistence on quantification of the predicate. This implies that not only the
subject but also the predicate in the Aristotelian sentence types should be
quantified, despite the grammatical awkwardness of sentences like All men are
some animals.16 He even chides Aristotle (Hamilton 1865: 264–5) for ‘prohibit[ing] once and again the annexation of the universal predesignation to the
predicate’ (see note 15), continuing ‘Yet this nonsense, (be it spoken with all
reverence for the Stagirite,) has imposed the precept on the systems of Logic
down to the present day’. In this respect, I do not follow Hamilton and take
sides with Aristotle and with standard modern logic, even though one might
perhaps attribute to Hamilton the implicit insight that quantification involves
two sets, the matrix or predicate set and the restrictor set.
Hamilton does not use anything like quantifiers, but prefixes each set denotation with the operators t (total) or p (partial), using the symmetrical predicates ‘==’ for ‘coincides with’ and ‘||’ for ‘excludes’. The following simple composition rule thus generates all expression types (formulæ) of Hamiltonian predicate logic (‘[α / β]’ stands for ‘either α or β’):

Formula = [t / p]X [== / ||] [t / p]Y (where X and Y are predicates)
15 Hamilton’s efforts are part of a tradition of predicate logics with ‘quantification of the predicate’ that was rejected by Aristotle (Int 17b13–16) and occasionally discussed during the late Middle Ages, but which really started in the late eighteenth century and flourished in the nineteenth century (see, for example, Bochenski 1956, Kneale and Kneale 1962: 349, Cavaliere 2007, Lenzen 2008). A major factor in the development of these logics was the wish to enrich and strengthen the classical Aristotelian theory of syllogisms with the help of richer systems of predicate logic.
16 Some languages, including English, marginally allow for sentences like Some humans are all Englishmen, where all is a so-called ‘floating quantifier’, saying that there is a group of humans who are all Englishmen. This, however, does not seem to be what Hamilton intended.
Given that predicates may denote complement sets, there is, for every predicate X, with the extension [[X]], a negative counterpart NOT-X, with the extension [[X̄]]. For the predicates F and G and their negations, there are thus 32 admissible formulæ.
With some effort, such a system can be made good semantic sense of.
Despite the grammatical illformedness of quasi-sentences like All Englishmen
are some humans, there is an intuitive inkling of what they could possibly
mean. This inkling can be made formally explicit in a variety of ways. An
interpretation that seems to come closest to Hamilton’s intentions is the
following:
Read ‘tX’ as ‘the total set of Xs’ and ‘pX’ as ‘a nonnull proper subset of Xs’ (where X ranges over predicates and [[X]] ≠ Ø ≠ U). Read ‘==’ as ‘coincides with’ and ‘||’ as ‘excludes’.
A quasi-sentence like Some computers are all laptops will then be read, in this interpretation, as pComputer == tLaptop or ‘only a nonnull proper subset of computers coincides with the total set of laptops’, which is true only if [[Computer]] ⊃ [[Laptop]]. By contrast, Some computers are all non-laptops, translated as pComputer || tLaptop or ‘only a nonnull proper subset of computers excludes the total set of laptops’, is true if either [[Computer]]
equivalent with I*, because when [[G]] ⊂ [[F]], SOME F is G is false (there is no M-partial intersection) but SOME F is NOT-G is true. By contrast, the equivalence of SOME F is G with its converse, SOME G is F (I ≡ I!), holds in such a logic, but not in the Hamiltonian system, again owing to cases where [[G]] ⊂ [[F]]. In such cases, Hamilton makes SOME F is G true but SOME G is F false, while a logic that straightforwardly translates PNST–5 of Section 3.2.2 into SOME makes both false. A logic where both I ≡ I* and I ≡ I! hold is inconsistent.
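Under the interpretation given above, Hamilton's formulæ can be evaluated mechanically over finite extensions. The evaluator below is a sketch of that interpretation only; the function names and the brute-force enumeration of subsets are mine. It reproduces the non-symmetry of Hamiltonian SOME.

```python
# Sketch: brute-force evaluation of Hamiltonian formulae over finite sets.
from itertools import chain, combinations

def candidates(q, X):
    # 't': the total set; 'p': every nonnull proper subset
    X = frozenset(X)
    if q == "t":
        return [X]
    return [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(X), r) for r in range(1, len(X)))]

def holds(q1, X, rel, q2, Y):
    # '==' reads 'coincides with'; '||' reads 'excludes'
    for a in candidates(q1, X):
        for b in candidates(q2, Y):
            if (rel == "==" and a == b) or (rel == "||" and not (a & b)):
                return True
    return False

laptops = {"l1", "l2"}
computers = laptops | {"d1"}
# Hamiltonian SOME is non-symmetrical:
print(holds("p", computers, "==", "t", laptops))  # True: Some computers are all laptops
print(holds("p", laptops, "==", "t", computers))  # False: the converse fails
```

Since pX == tY demands a nonnull proper subset of X that coincides with the whole of Y, it holds exactly when [[Y]] is properly included in [[X]], matching the reading given for Some computers are all laptops.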
Examples (3.14) and (3.15) quoted above show that natural intuition supports Hamilton and BNPC and rejects a symmetrical SOME (I ≡ I!). Does this mean that AAPC and ABPC are already somewhat counterintuitive, because in these systems the equivalence I ≡ I! holds, owing to the fact that they count SOME F is G as true when [[F]] ⊂ [[G]]? Curiously, and despite cases like (3.14) and (3.15), symmetrical SOME is likewise supported by intuition.
When I say that some children are male, you will agree that, therefore, some
males are children, and when I say that some males are children, you will agree
that, therefore, some children are male, so that the two-way entailment
relation makes them seem equivalent. Apparently, the intuitive judgements
are influenced by world knowledge. We know that orphans are a proper
subclass of the class of children, which makes it possible for us to say naturally
that some children are orphans. But we also know that the class of children
M-partially intersects with the class of males and also with the class of
females, while it properly includes the class of orphans, as in the diagram
of Figure 3.12.
It thus seems that, given a constellation like that in Figure 3.12, (3.16a,b,c) are
naturally said in truth, but (3.16d) is not:
FIGURE 3.12 The set-theoretic relations of children, (human) males, (human) females, and (human) orphans
(3.16) a. √ Some children are male (female).
       b. √ Some children are orphans.
       c. √ Some males (females) are children (orphans).
       d. ! Some orphans are children.
Yet both AAPC and ABPC, in so far as they can lay claim to naturalness, tell us
we should be able to say (3.16d) naturally in truth. Therefore, if we really want
to gauge natural intuitions and express them in a logical system, we should
hold on to BNPC, which does justice to all the intuitions of (3.16). AAPC and
ABPC can thus be said to support natural intuitions less strongly than BNPC,
because they allow for (3.16d), which is excluded by intuition.
FIGURE 3.13 VSs for logically independent P and Q and their logical compositions
17 In Seuren (1974) it is argued that an L-propositional structure of type AND* (AND [¬P, ¬Q]) is grammatically transformed into NOR (NEITHER P NOR Q) by the rule of NEGATIVE RAISING. Likewise, ALL F is NOT G is taken to be transformed into NO F is G. This conforms to the basic-natural propositional and predicate logic as proposed here and shown in Figure 3.10.
FIGURE 3.14 Basic-natural analog for propositional logic with NOR ≠ ¬OR
complement of /OR/, /O̅R̅/R, comprises /AND/, and neither P nor Q clearly means NOT-P AND NOT-Q, excluding P AND Q. Therefore, just as English no does not correspond to basic-natural NOT-SOME but constitutes a separate quantifier in BNPC, English nor does not correspond to basic-natural NOT-OR but constitutes a separate sentence-type, which may be called EXJUNCTION, in the NST-constrained system of propositional logic. Consequently, NOR is equivalent, in this system, with AND* (within UR), as shown in Figure 3.14. (One remembers from Section 2.4.2 that the internal negation for the propositional operators AND and OR distributes over the component L-propositions. Thus, AND* stands for ¬P ∧ ¬Q ∧ . . . and OR* stands for ¬P ∨ ¬Q ∨ . . . .)
NOR*, ¬NOR, and ¬NOR* may be taken to be undefined at the level of basic
naturalness, due to PNST–6 (the mind baulks at repeated applications of the
complement function). As before, however, they are still taken into account in
Figure 3.14, because we need to know the ultimate logical consequences of the
system.
Empirical evidence for this analysis is derived, for example, from the fact
that a sentence like (3.24a) is naturally and immediately interpreted as (3.24b),
whereas (3.25a) is not at all naturally and immediately interpreted as (3.25b),
even though, from a standard logical point of view, both (3.24) and (3.25)
merely instantiate De Morgan’s laws. This is a fact on which neither the
Gricean maxims, nor indeed the whole of pragmatics, have anything to say:
(3.24) a. He doesn’t like planes or trains.
b. He doesn’t like planes and he doesn’t like trains.
(3.25) a. He doesn’t like planes and trains.
b. He doesn’t like planes or he doesn’t like trains.
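From the standard logical point of view, the two transitions are equally valid, as a mechanical truth-table check confirms. The sketch below (Python, illustrative only; it is no part of the original argument) verifies that (3.24) and (3.25) both instantiate De Morgan's laws, so the striking asymmetry in naturalness must come from somewhere other than standard logic:

```python
from itertools import product

def equivalent(f, g):
    """Check propositional equivalence by exhaustive truth tables."""
    return all(f(p, q) == g(p, q) for p, q in product([True, False], repeat=2))

# De Morgan's laws: (3.24) and (3.25) instantiate one law each.
assert equivalent(lambda p, q: not (p or q),  lambda p, q: (not p) and (not q))
assert equivalent(lambda p, q: not (p and q), lambda p, q: (not p) or (not q))
```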
Natural set theory and natural logic 113
Given the analysis presented above, one now sees why this should be so.
Consider not . . . planes or trains in (3.24a) to be an instance of basic-natural
NOR: ‘He neither likes planes nor does he like trains’. Figure 3.14 shows that, in
this reading, NOR is equivalent to AND*, both sharing the same VS. By
contrast, not . . . planes and trains in (3.25a) realizes ¬AND, the restricted
negation of AND: ‘It is not so that he likes planes and he likes trains’. The VS of
¬AND comprises the combined VSs of NOR (≡ AND*) and OR (≡ OR*). Since
in basic-natural propositional logic, ¬AND is not equivalent with OR*, the
transition from (3.25a) to (3.25b) is a matter of strict-natural or standard
modern logic and thus requires a more complex computation at a higher level
of achievement.
There is, however, again the problem of consistency through discourse. As
has been widely observed, OR is typically used in situations where the speaker
is uncertain as to which of the disjuncts provides the correct answer to the
question he or she is entertaining but where the speaker has concluded that
either disjunct will do as a good enough answer. This conclusion is naturally
expressed by the auxiliary of epistemic necessity must. A speaker may say
(3.26b), in response to the question (3.26a):
(3.26) a. How did the journalist know?
b. The journalist must have spoken to Ann or to Jeremy.
If it were then found out that the journalist had spoken to both Ann and
Jeremy, it would be incorrect to say that the person who uttered (3.26b) had
been wrong, even if that speaker had failed to think of the possibility that the
journalist had spoken to both Ann and Jeremy.
Therefore, in parallel with the basic-natural system of quantification,
consistency through discourse requires an entailment from AND to OR. This
time it was the Stoic philosophers who, roughly a century after Aristotle,
discovered this fact and upgraded the natural propositional logic of language
and cognition to full consistency through discourse by introducing the
FIGURE 3.15 The strict-natural Squares for predicate and propositional logic
they are systematic in the languages in which they occur, is yet to be established.
If there is indeed a systematic absence of equivalents of *nall and *nand in
the languages of the world, such a gap appears to be matched by similar gaps in
other lexical fields. Thus, it is said, whereas one finds lexicalized equivalents of
epistemic NOT-POSSIBLE, lexicalizations of epistemic NOT-NECESSARY are found
nowhere (lexicalized agentive NOT-NECESSARY, such as English unnecessary, is widely attested). Likewise, NOT-CAUSE is never lexicalized, while NOT-ALLOW is
frequently found in lexicalizations such as disallow, forbid, or prohibit. Typi-
cally, predicates like NECESSARY or CAUSE show semantic characteristics that
may lead one to think that they can or should be classified along with ALL and
AND: they all belong to an ‘all-yes’ section of the lexicon, while POSSIBLE and
ALLOW typically belong to the group of ‘perhaps-yes-perhaps-no’ predicates,
which also comprises the existential quantifier and the propositional connec-
tive OR. The question is: are these similarities reducible to a single principle
and if so, what is it? This question is interesting as it forces one to probe natural
set theory in both the logic and the lexicalization processes of natural language.
Horn, Levinson, and others seek an answer in the pragmatics of language
use. Restricting themselves to *nand and *nall, they argue, in essence, as
follows. Since OR is normally exclusive for pragmatic reasons and pragmati-
cally equivalent to OR-NOT, excluding the case that both argument proposi-
tions are true, there appears to be no need left for an item like *nand which
excludes the simultaneous truth of both argument propositions and may be
taken to imply pragmatically that at least one of the argument propositions is
true. Then, given the pragmatic equivalence of or and *nand, the item without
the incorporated negation—that is, or—would be preferred on grounds of
simplicity in the lexicon, so that *nand is ruled out. Similarly, since SOME,
SOME-NOT, and NOT-ALL are pragmatically equivalent, conveying the intended
meaning ‘some but not all’, it is assumed that there is no need left for a
lexicalized form like *nall meaning ‘not-all’, which is somehow, either logical-
ly or pragmatically, equivalent to SOME-NOT and hence to SOME. Since the
pragmatically equivalent but cognitively and semantically simpler operators
or and some are already available, there is no need for surface lexicalizations
like *nand or *nall (Levinson 2000: 70). Therefore, the lexicalized expressions
all, some, and no will do for the quantifiers, and and, or, and neither . . . nor for
the propositional operators.
This reasoning can be summarized as follows:
(a) The English quantifier word no and its equivalents in other languages
are lexicalizations of underlying NOT-SOME. Analogously, neither . . . nor
stands for NOT-OR.
For the authors who propose this explanation such a system is not a logical
system but represents the way listeners construct a quantified mental model of
a state of affairs described, on the presumption that the speaker has full and
adequate knowledge of that state of affairs and has the intention to be as
informative and helpful as possible—that is, that speakers will commit them-
selves to the maximum of what they know. The main criterion of such a
system is not truth but information value on the presumption of full coop-
erativity and complete knowledge. It is not meant for the computation of
solid entailments grounded in strictly semantic properties, but for practical
inferences. To say that two expressions are pragmatically equivalent then
amounts to saying that they have the same information value on the pre-
sumption specified.
In contexts where the speaker has only partial knowledge, there is not even pragmatic
equivalence. If I say that some of my students are gay, one should not infer immedi-
ately that not all my students are gay. Perhaps I am unaware of the sexual preferences
of the remainder. But if I and O are often not even pragmatically equivalent, because
the conditions for the Gricean implicatures are not met, then why should O be
superfluous?
18. English fuck all stands for ALL-NOT, which is equivalent to NO in BNPC, as Figure 3.7 shows.
19. In Seuren (2002) it is observed that both in traditional predicate logic and in propositional logic no key role is reserved for vertices named ¬A or ¬AND, which is presented as an explanation for the systematic absence of lexicalizations like *nand or *nall. The present analysis shows that this conclusion was premature. See Jaspers (2005) for ample comment.
place because NOT-SOME and NOT-OR were thought to lack counterparts for ALL
and AND, respectively. In fact, however, given the lack of single-morpheme
lexicalizations for NOT-SOME and NOT-OR, NST predicts the absence of such
lexicalizations for NOT-ALL and NOT-AND, which in turn, if that absence proves
real, is valuable confirmation for the correctness of our reconstruction of
basic-natural logic.
The real question is not why no and neither . . . nor have no corresponding
counterparts *nall and *nand but, rather, why NOT does not merge with
either SOME or ALL (or with either OR or AND), though it does occasionally
merge with NO, which, with its double negation, is a mild infringement of
point (b) in the Horn-Levinson analysis (lexicalizations of NOT-NOR do not
seem to occur).
The answer may well be found in the consideration that mergers of the
form NOT-SOME or NOT-ALL would unite incompatibles. As has been said, NOT-
SOME would cover both ALL and NO, which will never form a natural cognitive
unit. The same goes for NOT-ALL, which would cover both SOME and NO, again a
very unlikely candidate for lexicalization. Only NOT-NO, which covers
both SOME and ALL, would seem to form an acceptable natural cognitive
unit encompassing the semantic field ‘some, perhaps all’. And indeed, lexica-
lizations of NOT-NO, usually in the stylistic form of an idiomatized understate-
ment (litotes), are, though not frequent, not too hard to come by.
As mentioned by Horn, Latin has a number of instances (see also Jespersen 1917: 90): nonnemo ‘not-nobody → several persons’, nonnulli ‘not-none → several’, nonnihil ‘not-nothing → a considerable amount’, nonnumquam ‘not never → quite often’, nonnusquam ‘not nowhere → in several places’. Dutch has niet-niks ‘not-nothing → quite something’. Semi-lexicalizations for ‘not-without → with a notable amount of’ are frequently found. A careful
search will no doubt yield a significant number of examples of this nature
in various languages.
Given these observations, one may object that the solution proposed for
predicate calculus lexicalizations, which is based on the fact that in basic-
natural logic no does not stand for NOT-SOME but is a quantifier in its own
right, contrarily opposed to SOME and ALL, does not apply in epistemic modal
or in causal logic, because it can hardly be denied that impossible is a lexicali-
zation of NOT-POSSIBLE, or that disallow stands for NOT-ALLOW.
To this objection I reply that although impossible is, of course, a lexicaliza-
tion of NOT-POSSIBLE, the predicate POSSIBLE in question does not belong to
epistemic modal logic. Impossible has incorporated the predicate POSSIBLE in a
variety of its senses, but not in the modal-epistemic sense of ‘it may be
true that . . . ’. The Oxford English Dictionary (OED) gives the following two
main senses for possible, the second of which clearly comprises epistemic
possibility:
1. That may be (i.e. is capable of being); that may or can exist, be done,
or happen (in general, or in given or assumed conditions or
circumstances); that is in one’s power, that one can do, exert, use, etc.
2. That may be (i.e. is not known not to be); that is perhaps true or a fact, that perhaps exists. (Expressing contingency, or an idea in the speaker’s mind, not power or capability of existing, as in 1; hence sometimes nearly = credible, thinkable.)
But for impossible the OED merely gives the negation of possible 1, not of
possible 2:
1. Not possible; that cannot be done or effected; that cannot exist or come
into being; that cannot be, in existing or specified circumstances.
That is, the OED describes impossible as occurring only as the negative
counterpart of possible in the main sense 1, not in the main sense given
under 2. There is, in other words, no negated counterpart of sense 2 of
possible. Impossible is naturally used in phrases like an impossible task, an
impossible construction, an impossible person, even an impossible truth, and also
in, for example, It is impossible to clear up that mess or It is impossible for the
man to climb the stairs, but never with a that-clause in the sense of ‘it cannot
be true that . . .’. A sentence like It is impossible that he is right strikes one as
deviant.20
20. I hesitate about the litotes not impossible, as in It is not impossible that he is right, which sounds a great deal better than It is impossible that he is right. Also, as was pointed out by Isidora Stojanovic, people often react to a piece of new information by saying That’s impossible, meaning ‘That can’t be true’. It seems obvious that much still remains to be sorted out in this area.
1. Yet the analogy between the universal quantifier and the propositional operator ∧ on the one hand, and between the existential quantifier and ∨ on the other, was expressed in the symbol ‘∧’ for the universal quantifier and the symbol ‘V’ for the existential quantifier in the so-called ‘Californian’ notation, which was much used during the 1960s and 1970s but eventually had to yield to what is now the standard notation.
Logical power 123
concern, were it not that ABPC, just like propositional calculus, has maxi-
mal logical power, whereas SMPC, as is shown in Section 4.2.3, sees its
logical power dramatically reduced when compared with ABPC. The loss
of logical power in SMPC is due to the fact that SMPC has given up the subaltern entailment schema A ⊢ I (All F is G entails Some F is G), which is considered valid in ABPC, just as its propositional analog AND ⊢ OR is valid in propositional calculus. Logicians, who like the idea that SMPC is the ne plus ultra of predicate calculus, have never been
very vocal as regards this dramatic loss of logical power, but there is no
denying that it is there. Moreover, as is shown in Chapter 6, the logical
power of ABPC turns out to be highly functional for the transmission of
quantified information. Therefore, if ABPC can be saved for natural lan-
guage, our respect for this logic will be considerably enhanced (many
philosophers of language who dare not doubt the inviolability of SMPC
still look back to ABPC with nostalgia).
The situation as it is creates a dilemma in that SMPC is logically sound
but has bought its health at great cost, whereas avoiding that cost seems
to mean a faulty logic. It is time now to spell out the question in greater
detail.
The reason why ABPC was replaced with SMPC in the wake of scholars like
Frege and Russell about a century ago lies in the fact that ABPC, as it stands,
suffers from what is known as UNDUE EXISTENTIAL IMPORT (UEI). By this is
meant the fact that ABPC only functions when [[F]], the extension of the
predicate F in the standard sentential schema used for predicate logic, is
nonnull—that is, when, in the world as it is, [[F]] contains at least one actually
existing object properly characterized by the predicate F. This is unbearable
to a logician, because logic is meant to be based on meanings only, not
on contingencies in the world. When ABPC is applied to a situation where [[F]] = ∅, then, given that it is based on a strictly extensional ontology of actually existing objects only, just as SMPC is, an inconsistency arises
under universal quantification. Consider the sentences (4.1a) and (4.1b),
whose universally quantified subject terms contain a predicate expression
[[F]] with a null extension, either because of the contingent conditions of
the world, as with the predicate be a dodo (the dodo became extinct around
the year 1700), or because of a semantic inconsistency, as with the predicate be
living dead. The question is whether sentences like (4.1a) or (4.1b) are true or
false in the actual world:
(4.1) a. All dodoes are in good health.
b. All living dead are in good health.
If they are true, it follows in ABPC (by the subaltern entailment schema) that
their existential counterparts should also be true:
(4.2) a. Some dodoes are in good health.
b. Some living dead are in good health.
The sentences of (4.2), however, entail, by the entailment schema of EXISTENTIAL
IMPORT, that there is at least one actually existing dodo and at least one actually
existing entity that is dead while being alive, neither of which is the case.
In ABPC, SMPC and other varieties of predicate logic, existential import is
semantically induced by the existential quantifier (SOME), which requires that
there be a nonnull intersection of [[F]] and [[G]]. In a strictly extensional
ontology, this means the actual existence of at least one element in the
intersection. Since this condition is not fulfilled in (4.1a,b), these sentences
cannot be true. But if they are false, then their negations (4.3a) and (4.3b),
respectively, should be true:
(4.3) a. Not all dodoes are in good health.
b. Not all living dead are in good health.
Now, however, the problem appears again, because, owing to the duality
relation between the universal and the existential quantifiers (the Conver-
sions), (4.3a,b) are equivalent to (4.4a,b), respectively:
(4.4) a. Some dodoes are not in good health.
b. Some living dead are not in good health.
And here, existential import rears its head again, as (4.4a,b) entail again that there
is at least one actually existing dodo and at least one actually existing entity that is
dead while being alive, respectively—quod non. Therefore, (4.1a,b) cannot be false
either. And since ABPC does not allow for any value but true or false and does not
allow for the absence of a truth value, ABPC appears to be in trouble.
More technically minded readers may wish to see a more formal definition
of undue existential import. All right then, here are two possible definitions:
A system of predicate logic suffers from UNDUE EXISTENTIAL IMPORT when
every admissible expression in the system entails a proposition of the
type I or I*—that is, entails the existence of at least one entity of the F-
class quantified over.
Alternatively, a system of predicate logic suffers from UNDUE EXISTENTIAL
IMPORT when there is a proposition or proposition type T such that both
T and ¬T entail a nonnecessary (contingent) proposition of the type I or
I*, which makes I or I*, as the case may be, a necessary truth.
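The second definition can be checked mechanically against the VS-model of ABPC in Figure 4.5b. The following Python sketch (illustrative only; the space numbering 1–3 is read off the figure) takes T = A and shows that both T and ¬T entail an I-type proposition, so that the contingent ‘there is at least one F’ holds in every space of U and is thereby promoted to a necessary truth:

```python
# Valuation spaces over U = {1, 2, 3}, as in Figure 4.5b (ABPC).
U      = {1, 2, 3}
A      = {1}          # ALL F is G
NOT_A  = {2, 3}       # NOT ALL F is G
I      = {1, 2}       # SOME F is G
I_star = {2, 3}       # SOME F is NOT-G

entails = lambda vs1, vs2: vs1 <= vs2   # entailment = VS inclusion

# Both T (= A) and ¬T entail an I-type proposition ...
assert entails(A, I) and entails(NOT_A, I_star)
# ... so 'there is at least one F' is true in every space of U:
assert I | I_star == U   # a contingent existential made a necessary truth
```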
Figure 4.5c shows that this is indeed the case in ABPC. To solve this
problem, something must be done. The solution embodied in Russellian
SMPC consists in cutting out the subaltern entailment schema A ⊢ I and
declaring (4.1a,b) true in the actual world, in accordance with standard set
theory which says that the null set is included in all sets. This solution has
the advantage of mirroring mathematical set theory and thus of agreeing with
those forms of mathematics that lend themselves to application to physical
matter. For that reason, Russellian logic has had a great career during the
twentieth century and has acquired unique prestige. Yet it has landed the
study of language in another dilemma, this time not of a logical but of an
empirical nature, because SMPC grossly offends natural linguistic intui-
tions—much more than traditional ABPC. Parsons’ description of the math-
ematical logicians’ typical, not altogether forthcoming, reaction is true to life
(Parsons 2006: 3):
The common defense of this is usually that this is a logical notation devised for purposes of logic, and it does not claim to capture every nuance of the natural language forms that the symbols resemble. So perhaps ‘∀x(Sx → Px)’ does fail to do complete justice to ordinary usage of ‘Every S is P’, but this is not a problem with the logic. If you think that ‘Every S is P’ requires for its truth that there be Ss, then you can have that result simply and easily: just represent the recalcitrant uses of ‘Every S is P’ in symbolic notation by adding an extra conjunct to the symbolization, like this: ∀x(Sx → Px) & ∃xSx. This defense leaves logic intact and also meets the objection, which is not a logical objection, but merely a reservation about the representation of natural language.
This, however, is mere palliative therapy. It amounts to saying that all one
should do to appease speakers’ logical conscience is define ALL F is G not just as
[[F]] ⊆ [[G]] but as [[F]] ⊆ [[G]] and [[F]] ≠ ∅, adding the clause ‘and [[F]] ≠ ∅’, and all is well. But all is not well, because if one does that, De Morgan’s laws make NOT ALL F is G come out as meaning ‘either [[F]] ⊄ [[G]] or [[F]] = ∅’, which again violates natural intuitions: SOME F is NOT-G clearly implies intuitively that NOT ALL F is G (in fact, both SMPC and ABPC take the two to be equivalent). But SOME F is NOT-G stipulates the existence of at least one F and thus rules out the possibility that [[F]] = ∅, which disqualifies the disjunct ‘or [[F]] = ∅’. The addition of the clause ‘and [[F]] ≠ ∅’ to the definition of the
universal quantifier thus merely makes the clash of intuitions rear its head
elsewhere. Therefore, whichever way one takes it, SMPC does not sit at all well
with the facts of language. (See Section 5.2.4 for further comment.)
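The clash just described can be made concrete with a small set-theoretic sketch (Python, illustrative only; the function names are mine, not the author's or Parsons'). With the amended universal, ALL F is G := [[F]] ⊆ [[G]] and [[F]] ≠ ∅, its negation becomes true via the disjunct [[F]] = ∅, while SOME F is NOT-G remains false, breaking the intuitive equivalence:

```python
# Amended universal, per the defense quoted above:
#   ALL F is G  :=  F ⊆ G and F ≠ ∅
def all_amended(F, G):
    return F <= G and F != set()

def some_not(F, G):            # SOME F is NOT-G: (F − G) ≠ ∅
    return bool(F - G)

F, G = set(), {1, 2}           # [[F]] = ∅
# NOT ALL F is G comes out true (via the disjunct [[F]] = ∅) ...
assert not all_amended(F, G)
# ... yet SOME F is NOT-G is false, so the intuitive equivalence breaks down:
assert not some_not(F, G)
```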
In overall perspective, the following steps are taken. First, in accordance
with the arguments set out in Chapter 2 of Volume I, we give up the
but they may be true when the matrix predicate does not pose that require-
ment. As far as the language system is concerned, existential import is thus
properly regulated by the lexicon, not by the machinery or the axioms of logic.
This allows language to profit from the maximal logical power of ABPC
within the confines of those situations where all presuppositions are fulfilled
and thus within the confines of the default conditions of linguistic interac-
tion. Chapter 10 contains a detailed description of this logic, which allows for
intensional entities.
Thus protected, ABPC can be saved for language and cognition, though
perhaps not for general mathematics and the physical sciences. The question
is whether the protective measures are justified and how they actually work.
The arguments for intensional objects were given in Chapter 2 of Volume I, so
that we consider that question settled. The presuppositional machinery and
the concomitant logic are defined in Chapter 10. The history and precise
analysis of ABPC is sketched in Chapter 5, while its surprising functionality is
demonstrated in Chapter 6. What remains to be done is to show the logical
power of ABPC and of propositional calculus (which, unlike predicate calcu-
lus, has not been superseded by a modern variant). This is what is undertaken
in the following section.
FIGURE 4.1 Propositional calculus represented as a natural square formed by two logically isomorphic natural triangles
FIGURE 4.2 Hexagonal graph for propositional calculus
Figure 4.1 can easily be deduced from the latter. All Figure 4.2 does is make
explicit some logical relations that are not shown in Figure 4.1. But if that is
what we want, we should present a model that shows all logical relations
between all possible vertices.
A complete representation requires an octagonal model, as in Figure 4.3,
where the natural triangles are again printed in heavy lines. To facilitate the
checking of the logical relations specified in the octagon, the VS of each vertex
is indicated, according to the VS-model of Figure 4.4 and listed in (4.5) below.
This octagon, with its dense network of metalogical relations, may look
forbidding, yet it should be remembered that it simply follows from the
combination of the subaltern entailment (AND ⊢ OR) with standard bivalent
negation and the De Morgan equivalences.
In the octagon of Figure 4.3 all edges between vertices represent some
logical relation, giving a complete octagonal graph. No edge between two
vertices would mean that the corresponding L-propositional types are logi-
cally independent: their VSs have a nonnull intersection but neither includes
the other and their union does not equal U. The absence of logically indepen-
dent pairs makes the graph COMPLETE in the standard terminology of graph
theory. This puts the logical power of this system at the maximum of 28,
according to the rough metric introduced earlier.
Figure 4.4 shows propositional calculus in terms of a valuation space (VS)
model. It subdivides U in terms of the eight L-propositional types AND, OR,
AND*, OR*, and their negations. Space 1 is reserved for cases where all com-
ponent L-propositions are true, space 2 for those where one or more are true
and one or more are false, and space 3 for cases where all are false. For example,
FIGURE 4.3 Octagon as a complete graph for propositional calculus
given the two logically independent sentences The earth is round and Venus is
a planet, U is subdivided for the eight sentences The earth is round AND Venus is a
planet, The earth is round OR Venus is a planet, The earth is NOT round AND Venus
is NOT a planet, The earth is NOT round OR Venus is NOT a planet, and their
FIGURE 4.4 The VS-model of propositional calculus
external negations. When the two (or more) component L-propositions are not logically independent because, say, there is a contrary pair among them, then /AND/ = ∅, since their conjunction is never true. When the component L-propositions exhaust U, then /OR/ = U, because their disjunction is always true. (The reader may try to work out for himself or herself what the VS-model will look like when the component L-propositions of AND and OR are equivalent: P ∧ P, P ∨ P, ¬(P ∧ P), ¬(P ∨ P), etc.)
The simple diagram of Figure 4.4, again, represents the whole of standard propositional calculus. The VSs of the various L-propositional types are specified as follows (the numbers stand for the spaces as they are numbered in Figure 4.4):
(4.5) /AND/ = {1} /¬AND/ = {2,3}
/OR/ = {1,2} /¬OR/ = {3}
/AND*/ = {3} /¬AND*/ = {1,2}
/OR*/ = {2,3} /¬OR*/ = {1}
Since {1} ⊆ {1,2}, the entailment AND ⊢ OR holds. Likewise for the entailment AND* ⊢ OR*, since {3} ⊆ {2,3}. Moreover, AND and ¬OR* are equivalent since their VSs coincide. Likewise for OR and ¬AND*. Then, AND and AND* are contraries because their VSs do not intersect. And OR and OR* are subcontraries because the union of their VSs equals U: {1,2} ∪ {2,3} = {1,2,3} = U.
It is thus easily seen that the VS representation of Figure 4.4, when written as
an octagon, returns Figure 4.3.
FIGURE 4.5 The Square, the VS-model, and the complete octagonal graph of ABPC
The VS-modelling of ABPC given in Figure 4.5b shows again the fault of undue existential import: since there is no space where both I-type and I*-type L-propositions are false, there is no space for those situations where [[F]] = ∅. Therefore, a fourth space is required containing those situations where [[F]] = ∅. The logical power of ABPC is again 28, as for propositional calculus.
b. [[∃]] = { <Y,X> | Y ∩ X ≠ ∅ }
(the extension of the predicate ∃ is the set of all pairs of sets Y, X, such that the intersection of Y and X is nonnull)
ALL F is G is considered true just in case [[F]] ⊆ [[G]], and SOME F is G is considered true just in case [[F]] ∩ [[G]] ≠ ∅. When [[F]] = ∅, ALL F is G is automatically true because, in set theory, ∅ is a subset of any set (for all sets X and Y, X ⊆ Y iff X ∩ Y = X and X ∪ Y = Y—a condition always fulfilled when X = ∅).
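These SMPC truth conditions translate directly into set operations. The sketch below (Python, illustrative only; finite sets stand in for predicate extensions) shows the vacuous truth of a universal over a null extension, with no subaltern entailment to its existential counterpart:

```python
# SMPC truth conditions (sketch):
smpc_all  = lambda F, G: F <= G        # ALL F is G   iff [[F]] ⊆ [[G]]
smpc_some = lambda F, G: bool(F & G)   # SOME F is G  iff [[F]] ∩ [[G]] ≠ ∅

dodoes, healthy = set(), {'a', 'b'}    # [[dodo]] = ∅ in the actual world
assert smpc_all(dodoes, healthy)       # (4.1a) counts as true in SMPC
assert not smpc_some(dodoes, healthy)  # (4.2a) is false: A ⊬ I here
```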
Figure 4.6 (repeated from Figure 3.11) does for SMPC what Figure 4.5 does for ABPC. In Figure 4.6a, space 1 represents cases where [[F]] ⊆ [[G]] and [[F]] ≠ ∅; space 2 cases where either [[F]] and [[G]] properly intersect without mutual inclusion or [[G]] ⊂ [[F]], with [[F]] ≠ ∅; space 3 cases where [[F]] ∩ [[G]] = ∅ and [[F]] ≠ ∅; and space 4 cases where [[F]] = ∅—the class of situations absent in ABPC. In space 4, A-type and A*-type L-propositions both count as true and I-type and I*-type L-propositions as false.
Figure 4.6a parallels Figures 4.5b and 4.4, but whereas Figure 4.4 is in no need
of a fourth space, Figure 4.6a must have one, since, without it, it fails to cater for
situations where [[F]] lacks the required supply of extensional objects. Standard
propositional calculus does not need a counterpart to extensionally null pre-
dicates: a set may be null, but a proposition has nothing to reciprocate with.
Under a strictly extensional ontology, the system of Figure 4.6 is logically
sound: there is no undue existential import and the system is fully consis-
tent—even if it clashes with natural intuitions. But look what has happened to
its logical power, which now amounts to a mere 12. The extra space 4, with
truth for A-type sentences, destroys most of the beautiful, rich logic of Figure
4.5. The logically complete and maximally powerful octagonal graph of Figure
FIGURE 4.6 The VS-model of SMPC and the poor remnants of its octagonal graph
4.5c has been largely dismantled and the traditional Square has vanished altogether. In effect, the octagon has disappeared and given way to two isomorphic trapezoids, shown in Figure 4.6b, whose isomorphism reflects the elementary fact that the internal negation may be added without consequences for the semantics of the vertices (note that A** ≡ A and I** ≡ I). The entire logical power of the trapezoids derives from the equivalence ¬I ≡ A* (or ¬I* ≡ A)—that is, from the Conversions. Almost all connections between vertices have been lost, which means that the vertex pairs in question represent the vacuous relation of logical independence.
The striking news about SMPC is, of course, its heavy loss in logical power when compared with ABPC, whose 28 logical relations have dwindled to a paltry twelve: two quadruples <A, I*, ¬I*, ¬A> and <A*, I, ¬I, ¬A*>, which are logically isomorphic owing to the Modulo*-Principle. This loss of logical power is solely due to the stipulation made in SMPC that A- and A*-sentences are true in space 4, where [[F]] = ∅. But alas, ABPC is logically faulty, as it fails to cover situations where G requires nonnull membership of [[F]] yet [[F]] fails to oblige.
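The dwindling to twelve can be counted mechanically from the four-space VS-model. The Python sketch below (illustrative only; the VSs over U = {1, 2, 3, 4} are read off Figure 4.6) counts the vertex pairs that stand in any logical relation at all:

```python
from itertools import combinations

U = {1, 2, 3, 4}
VS = {                       # SMPC valuation spaces, as read off Figure 4.6
    'A':  {1, 4}, '¬A':  {2, 3},
    'I':  {1, 2}, '¬I':  {3, 4},
    'A*': {3, 4}, '¬A*': {1, 2},
    'I*': {2, 3}, '¬I*': {1, 4},
}

def independent(a, b):
    """Logical independence: overlapping VSs, no inclusion, union short of U."""
    return bool(a & b) and not (a <= b) and not (b <= a) and (a | b) != U

count = sum(1 for x, y in combinations(VS, 2)
            if not independent(VS[x], VS[y]))
assert count == 12           # down from 28 in ABPC
```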
To make things worse, SMPC also clashes badly with natural intuitions
about truth and falsity. In fact, for natural language SMPC is an unmitigated
disaster. Consider, for example, sentence (4.8), said by a mechanic to justify
an exorbitant bill for the servicing of a car with a diesel engine, which, as one
knows, has no spark plugs:
(4.8) All spark plugs have been changed.
For SMPC this sentence is true, but any judge presiding over the case brought
by the car owner against the mechanic will consider the latter a liar.
Or take the sentences (4.9a,b), which, in the SMPC book, should both be
counted as true in any situation, common enough in the actual world, where
there are children but no real baby dinosaurs:
(4.9) a. Some children played with all baby dinosaurs.
b. Some children didn’t play with any baby dinosaur.
In fact, given the absence of baby dinosaurs, SMPC makes both sentences
equivalent to the statement that there was at least one child. Yet ordinary
people will consider (4.9a) false in a situation with at least one child and no
baby dinosaurs. And (4.9b) may have to count as true in such a situation, but
only in a trivial and uninformative way.
Pragmatic principles make both sentences equally inappropriate in the
actual world, where everyone knows that there are no living baby dinosaurs.
To utter such sentences will thus violate a number of Gricean maxims. Yet
these maxims fail to explain why (4.9a) is felt to be false while (4.9b) is
considered true by unsuspecting speakers. More examples of this nature are
easily thought up. In practically all cases the conclusion is that ABPC fits
natural intuitions much better than SMPC, even though the latter reigns
supreme in the world of modern logic and the former is still far from full
empirical adequacy.
2. In Seuren (2002) a solution is proposed for undue existential import that is identical to Abelard’s. I named it Revised Aristotelian predicate calculus or RAPC. At the time, I was not aware of the fact that Abelard had proposed an identical solution nine hundred years earlier. To give Abelard his due, I have renamed the system in question Aristotelian-Abelardian predicate calculus.
FIGURE 4.7 The square, the VS-model, and the octagon for AAPC
12, but less well than ABPC, which comes to 28. Moreover, intuitions are
better respected. AAPC has thus freed itself from all the restrictions of natural
set theory, whether basic or strict, but has been able to reduce the price for this
freedom considerably by adopting a definition of the universal quantifier ∀
that differs from the definition adopted in SMPC.
It thus appears that, from a strictly logical point of view, ABPC represents a
retrograde development with regard to AAPC, which preceded it in time (if
one grants Aristotle the honour of being the originator of AAPC). Not only
did ABPC introduce the logical defect of undue existential import, it also
failed to eliminate the principles of natural set theory. Yet this logical blunder,
if that is what one wishes to call it, was offset by a great advance as regards the
functionality of predicate logic in real life situations. As is shown in Chapter 6,
ABPC has the advantage of cutting out the informational redundancy of
SMPC and it has greater logical power.
It is, therefore, questionable whether SMPC does indeed deserve the exalted
status of inviolate doctrine it enjoys. For nonmentalistic, purely extensional
applications, for which SMPC is fully valid, AAPC is likewise fully valid, but it
appears to be a better, because more powerful, logic. Nowhere do the found-
ing fathers of modern logic or their followers provide reasons for keeping the
Conversions and giving up the subalterns rather than the opposite. In fact,
they appear to have opted for the most immediately obvious, but not neces-
sarily the most useful, application of Boolean algebra to predicate calculus.
This, however, is a question that falls outside the scope of the present work.
What we can show—and do show in the following section—is that it is not
too hard to provide a simple and logically interesting mathematical charac-
terization of the AAPC quantifiers.
Yet no matter how interesting, perhaps even revolutionary, the Abelardian
solution to the problem of undue existential import may be, it does insuffi-
cient justice to the requirements of cognitive realism (Section 1.3.1 in
Volume I), as it fails to take into account the fact that natural language
quantifies over intensional objects as easily as it does over extensional ones,
as well as the fact that certain predicates yield truth when applied to inten-
sional objects, as is shown in Chapters 2 and 5 of Volume I. For that reason
AAPC cannot be considered a viable candidate for the post of logic of
language, even though it is a highly interesting and intriguing pointer.
Therefore, if it proves possible to resolve or circumvent the problem of
undue existential import, ABPC, also known as the Square of Opposition, will
turn out, after all, to be the preferred predicate-logic system for natural
language and cognition, pace our mathematical friends.
existential quantification and disjunction. As was said in Section 4.1, the universal
quantifier yields truth just in case the conjunction G(a1)∧G(a2)∧ . . . ∧G(an) is
true, where a1, a2, . . . , an denote the elements in the extension [[F]] of F.
Analogously, the existential quantifier yields truth just in case the disjunction
G(a1)∨G(a2) ∨. . .∨ G(an) is true, where, again, a1, a2, . . . , an denote the
elements in [[F]]. The reason why this correspondence is not at all popular
in modern standard logic lies in the fact that the correspondence is lost when
[[F]] = Ø, because then there is no conjunction and no disjunction. In the present
perspective, however, this is not a weakness but, rather, an aspect that can be
turned to the advantage of AAPC.
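The correspondence, and its breakdown at [[F]] = Ø, can be sketched as follows (Python; the decision to count both quantifiers false over an empty restrictor follows the AAPC line described above and is not part of SMPC):

```python
def forall_F(F_ext, G):
    # ∀F: true just in case the conjunction G(a1) ∧ ... ∧ G(an) over [[F]] is true.
    # With [[F]] = Ø there is no conjunction; here that is counted as falsity,
    # as in AAPC, rather than as vacuous truth, as in SMPC.
    return len(F_ext) > 0 and all(G(a) for a in F_ext)

def exists_F(F_ext, G):
    # ∃F: true just in case the disjunction G(a1) ∨ ... ∨ G(an) is true;
    # an empty [[F]] yields no disjunction, hence falsity.
    return any(G(a) for a in F_ext)

green = {"f1", "f2"}.__contains__
print(forall_F(["f1", "f2"], green))  # True
print(forall_F([], green))            # False: no conjunction exists
print(exists_F([], green))            # False: no disjunction exists
```

The only point where this evaluator differs from the standard one is the `len(F_ext) > 0` conjunct in `forall_F`; dropping it restores SMPC's vacuous truth over the empty restrictor set.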
Just like the Russellian quantifiers (see (2.9) in Section 2.3.5.1), the distributive
quantifiers are defined as unary higher-order predicates over sets. Unlike
the Russellian quantifiers, however, and more like the generalized quantifiers,
the distributive quantifiers are defined with respect to a designated predicate
(F) and require that the members of [[F]] satisfy the condition of the quantifier
with respect to the extension of the matrix predicate G. Given the predicates
F and G, the condition for ∀F is, in simple terms, that for all x ∈ [[F]], G(x) is
true, whereas ∃F requires merely that for at least one x ∈ [[F]], G(x) is true.
The quantifiers are thus, in fact, predicates over restrictor-predicate exten-
sions with respect to a matrix predicate G. When it is said that All flags are
green, or Some flags are green, then these sentences are interpreted as state-
ments about the set of flags. Provided [[Flag]] ≠ Ø, the statement is that, for
∀Flag, truth is achieved only if Green(a1) ∧ Green(a2) ∧ . . . ∧ Green(an) is
true, and, for ∃Flag, only if Green(a1) ∨ Green(a2) ∨ . . . ∨ Green(an) is
true, where a1, a2, . . . , an denote the elements in [[Flag]].
On the basis of this we say that, for ∀F, the set of objects [[F]] must satisfy the
predicate G under conjunction and that, for ∃F, the set of objects [[F]] must satisfy
the predicate G under disjunction, for truth to come about. The notion of a set of
objects satisfying a predicate under a propositional operator is defined as follows:
The set of objects [[F]] SATISFIES THE PREDICATE G UNDER CONJUNCTION just in
case either F(a)∧G(a) is true, where a is the only element in the extension
of F (= [[F]]), or there is a true conjunction [F(a1) ∧ G(a1)] ∧ . . . ∧
[F(an) ∧ G(an)], where a1, . . . , an form the extension of F.
The set of objects [[F]] SATISFIES THE PREDICATE G UNDER DISJUNCTION just in case
either F(a)∧G(a) is true, where a is the only element in the extension
of F (= [[F]]), or there is a true disjunction [F(a1) ∧ G(a1)] ∨ . . . ∨ [F(an)
∧ G(an)], where a1, . . . , an form the extension of F.
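The two satisfaction notions just defined translate directly into finite checks over the extension of F; a minimal sketch (Python; the flag predicates are invented for illustration):

```python
def satisfies_under_conjunction(ext_F, F, G):
    # [[F]] satisfies G under conjunction: the conjunction
    # [F(a1) ∧ G(a1)] ∧ ... ∧ [F(an) ∧ G(an)] over the extension of F is true
    # (for a singleton extension this reduces to F(a) ∧ G(a)).
    return len(ext_F) > 0 and all(F(a) and G(a) for a in ext_F)

def satisfies_under_disjunction(ext_F, F, G):
    # [[F]] satisfies G under disjunction: at least one disjunct
    # F(ai) ∧ G(ai) is true.
    return len(ext_F) > 0 and any(F(a) and G(a) for a in ext_F)

flags = ["f1", "f2", "f3"]
is_flag = lambda x: x in flags
is_green = lambda x: x in {"f1", "f2"}

print(satisfies_under_conjunction(flags, is_flag, is_green))  # False: f3 is not green
print(satisfies_under_disjunction(flags, is_flag, is_green))  # True
```

Since every ai is by hypothesis an element of the extension of F, the F(ai) conjuncts are trivially true; they are kept in the code only to mirror the definitions verbatim.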
Val: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
Fa + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + -
Fb + + - - + + - - + + - - + + - - + + - - + + - - + + - - + + - -
Fc + + + + - - - - + + + + - - - - + + + + - - - - + + + + - - - -
Ga + + + + + + + + - - - - - - - - + + + + + + + + - - - - - - - -
Gb + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - - -
Gc + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Fa ∧ Ga + - + - + - + - - - - - + - - - + - + - + - + - - - - - - - - -
Fb ∧ Gb + + - - + + - - + + - - + + - - - - - - - - - - - - - - - - - -
Fc ∧ Gc + + + + - - - - + + + + - - - - + + + + - - - - + + + + - - - -
Val: 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
Fa + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + -
Fb + + - - + + - - + + - - + + - - + + - - + + - - + + - - + + - -
Fc + + + + - - - - + + + + - - - - + + + + - - - - + + + + - - - -
Ga + + + + + + + + - - - - - - - - + + + + + + + + - - - - - - - -
Gb + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - - -
Gc - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Fa ∧ Ga + - + - + - + - - - - - - - - - + - + - + - + - - - - - - - - -
Fb ∧ Gb + + - - + + - - + + - - + + - - - - - - - - - - - - - - - - - -
Fc ∧ Gc - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FIGURE 4.8 Mtoy for two predicates F and G and three objects called a, b, and c
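The 64 valuations of Figure 4.8 can be generated mechanically. In the sketch below (Python), the column ordering is an assumption read off the figure: Fa alternates fastest, Gc slowest.

```python
def valuation(i):
    # Column i+1 of Figure 4.8 (0-based i): Fa flips every column,
    # Fb every 2, Fc every 4, Ga every 8, Gb every 16, Gc every 32.
    return {
        "Fa": i % 2 == 0,
        "Fb": i % 4 < 2,
        "Fc": i % 8 < 4,
        "Ga": i % 16 < 8,
        "Gb": i % 32 < 16,
        "Gc": i % 64 < 32,
    }

table = [valuation(i) for i in range(64)]
conjunctions = [(v["Fa"] and v["Ga"], v["Fb"] and v["Gb"], v["Fc"] and v["Gc"])
                for v in table]

print(conjunctions[0])   # (True, True, True): in Val 1 every atom is true
# Each conjunct Fx ∧ Gx holds in exactly a quarter of the valuations:
print(sum(c[0] for c in conjunctions))  # 16
```

Since each atom and its co-atom are assigned values independently, every conjunction Fx ∧ Gx is true in exactly 16 of the 64 valuations.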
Given the fact that the quantifiers are now restrictor-bound predicates over
matrix-predicate extensions, their L-propositional syntax had better be
changed as well. Sentences like (4.12a,b) are now better rendered syntactically
as the L-propositions (4.13a,b), where the restrictor predicate is attached to
the quantifier. The syntax of the quantifier now corresponds to (2.10a,b) of
Section 2.3.5.2:3
(4.12) a. All farmers grumble.
b. Some farmers grumble.
(4.13) a. 8x[Farmer(x)](Grumble(x))
(for all objects x such that x is a farmer, x grumbles)
b. ∃x[Farmer(x)](Grumble(x))
(for at least one object x such that x is a farmer, x grumbles)
The corresponding L-propositional tree structure is now different from that
assumed for generalized quantifiers. On this analysis, a sentence like Some
farmers do not grumble is assigned the L-propositional tree structure of Figure
4.9. This configuration makes it possible to use the same variable x for both
the predicate farmer and the predicate grumble. In practice, this means that
the whole structure S1 is true just in case the disjunction ¬Grumble(a1) ∨
¬Grumble(a2) ∨ . . . ∨ ¬Grumble(an) is true, where a1, a2, . . . , an range over
[[Farmer(x)]], or, in the terminology defined above, the set of farmers satisfies
the predicate not-Grumble under disjunction.
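Under this analysis, the truth of *Some farmers do not grumble* reduces to an ordinary disjunction over the set of farmers; a small sketch (Python, with invented extensions for *Farmer* and *Grumble*):

```python
farmers = {"a1", "a2", "a3"}   # [[Farmer]] (illustrative)
grumblers = {"a1", "a2"}       # [[Grumble]]: a3 does not grumble

def not_grumble(x):
    return x not in grumblers

# S1 is true just in case ¬Grumble(a1) ∨ ... ∨ ¬Grumble(an) is true,
# i.e. the set of farmers satisfies not-Grumble under disjunction:
s1 = any(not_grumble(a) for a in farmers)
print(s1)  # True, thanks to a3

# With no farmers at all there is no disjunction, hence no truth:
print(any(not_grumble(a) for a in set()))  # False
```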
[S1 [Pred [Pred SOME x] [S2 [Pred Farmer] [NP x]]] [S3 [Pred NOT] [S4 [Pred Grumble] [NP x]]]]
FIGURE 4.9 L-propositional tree structure of Some farmers do not grumble
3 Compared with (2.11a,b) in Chapter 2, Figure 4.9 shows the result of the transformational
syntactic rule of OBJECT INCORPORATION or OI—a rule frequently found in the syntax of natural
languages. Examples are English expressions like take care of or pay attention to, where the object
terms care and attention have been incorporated into the predicate, as is seen from the passives She has
been taken care of or She has been paid attention to. Full lexicalizations where an object term has been
incorporated are, for example, English bearhunting, seafaring, globetrotting, golddigging, brew (‘make
beer’), price (‘put a price on’), pencil (‘use a pencil to write’). Such formations are frequent in many
other languages as well. See also note 6 in Chapter 2.
[S1 [Pred [Pred MOST x] [S2 [Pred Farmer] [NP x]]] [S3 [Pred Grumble] [NP x]]]
FIGURE 4.10 L-propositional tree structure of Most farmers grumble
its inability to warrant the truth of quantified statements under the operators
ALL, SOME, or NO as long as no full situational knowledge has been achieved—a
defect that will hardly have been felt for most of the period in which natural
language came into being and was the vehicle of linguistic interaction
in illiterate, tribal communities. Presumably, BNPC’s failure to be knowl-
edge-independent began to be important only a mere five thousand years
ago when literate civilizations began to spring up all over the world, in South
and Central America, China, Mongolia, Northern India, Mesopotamia, and
Egypt.
5 Aristotle, the commentators, and Abelard
A: All F is G / No F is not-G        C: contraries
I: Some F is G / Not all F is not-G  CD: contradictories
E: No F is G / All F is not-G        SC: subcontraries
O: Some F is not-G / Not all F is G  ≡: equivalents; >: entails
FIGURE 5.1 ABPC represented as the Boethian Square of Opposition and as the strict
natural square consisting of two isomorphic triangles
the roles of external and internal negation and because it fails to express
the Modulo*-Principle. For these reasons, we decided, in Section 2.2, to
replace the symbols E and O with A* and I*, respectively, so as better
to be able to express the fact that E-sentences are in fact A-sentences (that
is, universals) and that O-sentences are in fact I-sentences (particulars), in
both cases with an internal negation on the G-predicate, represented by
the asterisk. We also adopted the sign ¬ for standard (external) negation
so as to express the fact that contradictoriness is systematically caused by
negation.
FIGURE 5.2 Propositional calculus and ABPC as isomorphic complete octagonal graphs
It has been shown, in the preceding chapters, that, apart from the notation
used for the vertices, ABPC is better represented not as a square with
two crossing diagonals but as a combination of two triangles connected
at two of their vertices by equivalence relations, as in Figure 5.1b, just as
was done for propositional calculus in Figure 4.1. In Chapter 4, however,
it was shown that the optimal representation for both propositional and
predicate calculus is maximalist: it takes the form of an octagonal graph, as
in Figure 5.2, repeated from Figures 4.3 and 4.5.
Section 4.1 has made it sufficiently clear that, under a strictly extensional
ontology, ABPC suffers from UNDUE EXISTENTIAL IMPORT (UEI). What interests
us here is the historical development of ABPC and of its unknown variant
Aristotelian-Abelardian predicate calculus or AAPC.
1 The most detailed and authoritative study on Aristotle’s On Interpretation (Perì Hermēneías) is
Weidemann (1994), where one finds a translation that is better than most, along with a discussion of
the chronology and authorship of the work, a complete survey of the manuscript tradition, the
tradition of interpretation in Antiquity, the Arab world and the Middle Ages, and the translation
tradition. What it does not have is an analysis of the actual logic involved.
logic. Such lapses are understandable, given the high degree of difficulty of
predicate calculus and given the fact that Aristotle had to create logic out of
nothing, without any existing terminology and without any formalization
techniques to build on. Many things that were still opaque to him are clearer
to us now.
In the matter of existential import, however, Aristotle appears to have seen
the problem more sharply than most later logicians and many modern
historians of logic, who tend to gloss over it too lightly or even fail to see it.
When one reads Aristotle’s text literally, one sees that he hedges precisely
on the point where existential import becomes relevant. He rejects the
classic Conversions and thus saves his logic from the blemish of undue
existential import, as will be clear in a moment. That Aristotle may well
have seen the danger of undue existential import looming at the horizon is
seldom taken into consideration, presumably, one is inclined to think, be-
cause his commentators, followers, and critics did not, on the whole, discern
it as clearly as they could have. It was not until the late nineteenth century
that the logical problem of existential import began to receive full attention.
Until that time, awareness of this problem seems to have been desultory
and incomplete. Since we have no inclination to underestimate Aristotle,
who is undoubtedly one of the greatest intellectual giants in Western history,
we will have a closer look at the issue.
What we see, when we read the text of On Interpretation closely, is that
Aristotle stops short of stating the Conversions—that is, the equivalence of NO
F is G and ALL F is NOT-G and of SOME F is G and NOT ALL F is NOT-G. The
Kneales noticed this (Kneale and Kneale 1962: 57):
Aristotle [ . . . ] allowed that Every man is not white could be said to entail No man is
white, but rejected the converse entailment.
Yet these authors failed to see the relevance of this rejection. Further down in
their book, when discussing Abelard’s rejection of the Conversions and his
appeal to Aristotle who, like Abelard, considered NOT EVERY human is white,
and not SOME human is NOT-white, to be the contradictory of EVERY human
is white, their comment is (Kneale and Kneale 1962: 210):
It is true, of course, that Aristotle wrote Greek words corresponding to Non omnis
homo est albus [Not every human is white], but it seems clear that he did not intend to
convey by these words anything different from the doctrine later attributed to him by
Boethius.
This not only contradicts what they wrote on p. 57, but it is also, one must
fear, just wrong. Why should Aristotle not be taken at his actual words?
These passages are crucial. For if, as Aristotle actually does here, the
Conversions are given up for one-way entailments (A* ⊢ ¬I, and therefore
also A ⊢ ¬I*, because if A* ⊢ ¬I then also A** ⊢ ¬I*, and A** ≡ A) but not
vice versa, the logic is sound: it then no longer suffers from its central logical
defect, undue existential import. This fact is too important for it to pass
unnoticed and it seems less than fair to Aristotle to ascribe this crucial
avoidance of a basic logical error to mere good luck on his part.
Yet it is true that Aristotle fails to be explicit on several points where one
would have liked him to be a little less sparing of his words. For example, he
gives no evidence of an explicit awareness of the subaltern entailments. Yet they
follow directly from what he does present explicitly. He does say clearly and
repeatedly that the truth of ALL F is G requires the falsity of its contrary NO F
is G. He also says explicitly and repeatedly that the contradictory of NO F is G
is its nonnegative counterpart SOME F is G. Given this, it is hard to imagine
2 This last sentence is problematic owing to the extreme density of Aristotle’s style at this point. The
literal translation of the Greek Anángkē gàr eînaí tina is ‘For there must be some’. Most existing
translations leave the opacity of this sentence unclarified. I have followed Weidemann’s translation
(Weidemann 1994: 20): ‘denn notwendigerweise ist denn ja irgendeiner gerecht’, which makes perfect
sense: if there is one just man, then it cannot be the case that all men are unjust.
that he failed to see that, therefore, the truth of ALL F is G requires the truth of
SOME F is G—that is, the positive subaltern entailment. And analogously for
NO F is G, which, by contraposition, entails NOT ALL F is G—that is, the
negative subaltern entailment for the negation of ALL F is G (though not
necessarily for its supposed equivalent SOME F is NOT-G). Moreover, he implies
the validity of the subaltern entailments at Prior Analytics (25a8–14):
The positive sentence does convert [in the Aristotelian sense; PAMS], though not as a
universal but as a particular, for example, if every pleasure is a good, then some good
must be a pleasure. Of particular sentences the positive does convert (since if some
pleasure is good, then some good will also be a pleasure), but the negative particular
does not convert, because if some animal is not human, it does not follow that some
human is not an animal.
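Aristotle’s conversion facts, and the subaltern entailments that follow from them, can be verified by brute force over all small extensional models, reading the quantified forms the Abelardian way, with A and I both requiring a nonempty F-class. A sketch (Python; the two-element domain is an arbitrary choice sufficient for the counterexample):

```python
def subsets(xs):
    out = [set()]
    for x in xs:
        out += [s | {x} for s in out]
    return out

DOMAIN = ["x", "y"]
MODELS = [(F, G) for F in subsets(DOMAIN) for G in subsets(DOMAIN)]

def A(F, G):       # ALL F is G, with existential import (AAPC reading)
    return bool(F) and F <= G

def I(F, G):       # SOME F is G
    return bool(F & G)

def I_star(F, G):  # SOME F is NOT-G
    return bool(F - G)

def entails(p, q):
    # p entails q iff q holds in every model where p holds
    return all(q(F, G) for F, G in MODELS if p(F, G))

print(entails(A, I))                               # True: subaltern entailment
print(entails(I, lambda F, G: I(G, F)))            # True: particular affirmative converts
print(entails(I_star, lambda F, G: I_star(G, F)))  # False: particular negative does not
```

The failing case is exactly Aristotle’s: with F = {x, y} and G = {x}, SOME F is NOT-G is true but SOME G is NOT-F is false.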
3 See also Kneale and Kneale (1962: 58), where these authors express essentially the same conclusion.
Square; PAMS] and follow from them.’ (Busse 1897: 92). (One may surmise
that diagrams were commonly used in classroom teaching, or else it is hard to
understand how Ammonius could use the expression ‘below the contraries’;
see also Section 5.2.3.)
At Int 17b25 and 20a17–18 (quoted above) Aristotle says that the contradicto-
ries of contraries may be simultaneously true. But he does not, or not
explicitly, draw the further consequence that the falsity of the one excludes
the falsity of the other: they cannot be simultaneously false. Had he done
so, he would have established the relation of subcontrariety between I-type
and ¬A-type sentences. We shall be generous and assume that Aristotle did
know about subcontrariety, even though he did not give it a name. Aristotle
may not have seen all the details, but to deny him the insight that if
two sentences cannot be simultaneously true, their contradictories cannot be
simultaneously false, would seem to do him an injustice. In this we are
supported by the Kneales, who take it that Aristotle was, in fact, aware of
the relation of subcontrariety even though he had no term for it (Kneale and
Kneale 1962: 56):
The two particular statements [i.e. I and I*; PAMS] have been said by later logicians to
be subaltern to the universal statements under which they occur in the figure and sub-
contrary to each other. Although he does not use these expressions, Aristotle is
interested in the relations so described, and assumes that sub-contraries cannot
both be false though they may both be true. This is shown by his description of
them as contradictories of contraries.
CD: contradictories; C: contraries; SC: subcontraries; >: entailment
FIGURE 5.3 Aristotle’s predicate logic as presented in his texts
4 The same goes for propositional calculus, where it took Łukasiewicz (1934) to show that this
calculus did not originate with Aristotle, as had been widely thought till then, but with the Stoics.
Significantly, Łukasiewicz wrote (Borkowski 1970: 198): ‘The history of logic must be written anew, and
by an historian who has fully mastered mathematical logic.’ Admittedly, the situation has changed to
some extent during the seventy-odd years that have passed since, but not enough to make
Łukasiewicz’s words irrelevant. It is still so that historians of logic can do with some additional
formal training. Just as technical translators need knowledge not only of the languages concerned
but also of the subject matter of the text, historians of a technical subject need not only historical,
linguistic, and textual knowledge but also the expertise required for a proper grasp of the subject at
hand. Unfortunately, this latter requirement is too often not met.
5 With the exception of the three authors mentioned, all the ancient authors known to have written
commentaries on Aristotelian logic restricted themselves either to basic semantic notions (such as the
relation between thought, word, and thing) or to the theory of the syllogism. This goes, for example,
for Alexander of Aphrodisias (±205 CE) and also for the Neo-Platonist Porphyry (±234–±305 CE)
mentioned by Ebbesen (1990a). The immensely influential ancient authority on medical science, Galen
of Pergamon (±129–199 CE), is thought to have written a commentary on Aristotelian predicate logic.
Unfortunately, however, the relevant parts of this commentary have not, as far as is known, survived
the wear of time. Sorabji (2004), a source book of Aristotelian commentaries on logic and
metaphysics, has nothing at all on predicate logic.
title) makes it clear that Apuleius saw that a rectangular arrangement would
capture Aristotelian predicate logic as set out in his On Interpretation and
modified through a few centuries of teaching practice.
Apuleius describes in detail what the ‘quadrata formula’ (Sullivan 1967: 64)
should look like. It is made up of the four vertices A (‘omnis voluptas bonum
est’ or ‘every pleasure is a good’), A* (‘omnis voluptas bonum non est’
or ‘every pleasure is not a good’), I (‘quaedam voluptas bonum est’ or
‘some pleasure is a good’) and I* (‘quaedam voluptas bonum non est’
or ‘some pleasure is not a good’)—though, of course, Apuleius did not use
these symbols. The positive and negative universals (A and A*) are to be
placed ‘in superiore linea’ and the positive and negative particulars (I and I*)
‘in inferiore linea’ (Sullivan 1967: 110). This terminology strongly suggests a
teaching practice where it was customary to actually draw the diagram.
One may assume that something like a square representation, whether or
not actually drawn, was used in whatever teaching in predicate logic took
place during subsequent centuries. It was left to the Roman Boethius, who
probably fell back on his slightly older contemporary the Greek Ammonius,
to complete Aristotle’s logic and cast it into the mould of the classic Square of
Opposition, which subsequently became part of the stock-in-trade of predi-
cate logic.
[Boethius’ Square diagram: the Latin labels CONTRARIAE, SVBALTERNAE, SVBCONTRARIAE,
and CONTRADICTORIAE connect the four corner sentences; the lower corners read
Adfirmatio particularis ‘Quidam homo iustus est’ and Negatio particularis
‘Quidam homo iustus non est’.]
For Boethius’ text in this regard I consulted Meiser (1880). There one finds
that Boethius did actually draw the Square diagram, prefacing it by the words
(Meiser 1880: 152):
Superioris autem disputationis integrum descriptionis subdidimus exemplar, quatenus
quod animo cogitationique conceptum est oculis expositum memoriae tenacius
infigatur.
[We have provided an integral descriptive diagram of the above discussion, since what
is understood by the mind and by thought is more enduringly fixed in memory when
shown to the eye.]
Logically speaking, one might be tempted to think that letting the I* and
A*-corners ‘leak’ has a great deal going for it because it looks as if the classic
Square is preserved in full glory when A* and I* no longer require that the
F-class should be nonnull, as becomes clear when one sets up a VS-model and
an octagonal representation along these lines, as in Figure 5.5 below. In virtue
of considering space 4 a truth-maker for A* and I* but not for A and I, the
resulting octagon turns out identical to that of the Square (Figure 5.2b). This
may well give rise to the idea that the Square can be saved in full if the truth
assignments are made in this way. But, as is shown below, this advantage is
bought at the price of having to admit an ambiguity in the semantic definition
of the existential quantifier.
Horn (1989: 23–30) is at pains to convince his readers of the historical
legitimacy of the view that ‘existential import is determined by the quality of
the proposition; affirmative (A and I) propositions entail existence, while
negative ones (E and O) do not’ (Horn 1989: 24). He assigns an impressive
pedigree to this view, tracing it back to Aristotle in so far as Aristotle denies
the truth of a singular, nonquantified sentence like Socrates’ son is ill when
Socrates has no son. Just as Socrates’ son is not ill is true when Socrates has no
son, Not all Socrates’ children are ill should likewise be considered true, and
thus All Socrates’ children are ill should be considered false, when the good
man has no children. Yet, although this parallel is sometimes drawn (e.g.
Thompson 1953: 257), it is false, since Aristotle clearly distinguishes between
‘singular sentences’ (‘occasion sentences’ in Quine’s terminology) and quan-
tified sentences or categoricals (Quine’s ‘eternal sentences’). Moreover, nei-
ther Aristotle nor any other ancient author has been found prepared to
maintain that Some of Socrates’ children are not ill is true when Socrates is
childless.
Moody (1953) proposes a more modest pedigree. Trying to defend the
logical soundness of the Square, he claims (1953: 50–3), followed by Klima
(1988: 18–19), that the medievals generally held that only the affirmatives in
the Square have existential import while the negatives do not. Buckner,
however, clearly showed in his conference paper (2007) that this is at best
tendentious and at worst just wrong, since this interpretation of the intended
meaning of I*-type propositions is overwhelmingly belied by the medieval
philosophical literature. It appears to be closer to the truth to say that during
the fourteenth century and only then this view was discussed but far from
generally accepted.
In Moody’s defence it must be said that he recognizes the fact that, even
though withholding existential import from the O-corner saves the Square
from UEI, it clashes with natural intuitions (Moody 1953: 51–2):
Yet he fails to draw the conclusion that, therefore, this interpretation of the
Square is artificial and unhelpful for an understanding of the natural logic of
human language and cognition.
In Horn (1997) one finds a shift towards what Thompson, in the quote
given above, calls ‘logical analysis itself ’ and away from natural usage. Here,
Horn defends the view that only affirmative propositions in the Square of
Opposition have existential import, while the negatives do not, so that Some F
is not a G should be taken to be automatically true ‘in a state of F-lessness’.
This view, which is, again, called ‘traditional’ (Horn 1997: 157), is attributed to
Aristotle’s commentators Apuleius and Boethius, to the twelfth-century
French philosopher Abelard, and to a few modern authors, in particular
Carroll (1896), Strawson (1952), and Kneale and Kneale (1962).
These references, however, do not stand up to scrutiny. As has been shown
in Section 5.2.2, the commentators certainly did not subscribe to the analysis
proposed by Horn and Parsons. Abelard did not even have an O-corner, as is
shown in Section 5.3, because, like Aristotle, he distinguished I*, which does
have existential import, from ¬A, which does not. The eccentric but well-
informed Oxford mathematician-logician-writer-photographer Charles
Dodgson, better known under his nom de plume Lewis Carroll, wrote a
logic textbook for children (Carroll 1896). This author clearly assigns existen-
tial import to I*, as is shown by his reduction of Some apples are not ripe to:
Some | existing Things | are | not-ripe apples (Carroll 1896: 77).
Horn’s statement (Horn 1997: 157) that Carroll defended the position that the
O-corner lacks existential import is thus clearly erroneous.
The same applies to his reference to Strawson’s logic textbook. There we
read, for example (Strawson 1952: 166):
It is already agreed that the I and O forms are to be regarded as having existential
import.
And as regards the Kneales’s famous book on the history of logic, we read
(Kneale and Kneale 1962: 58):
. . . the assertion of the existence of a man who is white, or not-white, as the case may
be, already involves an assertion of the existence of a man.
Parsons (2006), which, mainly through its publication on the internet,6 has
had a considerable influence, repeats Horn’s claim to an ancient pedigree.
Apparently, Parsons considers this historical claim important, as it occurs in
the opening paragraph of his article (Parsons 2006: 1):
For most of this history [of the Square; PAMS], logicians assumed that negative
particular propositions (‘Some S is not P’) are vacuously true if their subjects are
empty.
6 Parsons (1997) is a larger version of Parsons (2006), published in the normal, old-fashioned way.
7 I owe this information to Edward Buckner (email correspondence).
principle. Since the Church backed the realists (Ockham was excommuni-
cated in 1328), their view prevailed and the nominalist view, which errone-
ously implied that the O-corner has no existential import, was marginalized.
But one should note that this debate was not triggered by the question of UEI,
which, apparently, did not figure at all prominently in the minds of the
philosophers involved, but by the great metaphysical debate on universals.
It might well have been superfluous if both parties had taken the trouble to
have a closer look at the Aristotelian-Abelardian version of predicate logic.
Moody states (1953: 51):
Since existential import was considered to belong only to affirmative sentences, it is
sufficient, for the falsity of an affirmative and hence for the truth of the contradictory
negative, that one of the terms stands for nothing.
This statement, however, is not backed up with any crucial quotation. The
closest he comes is a quote from Buridan’s Sophismata, Ch. 2, Concl. 14
(Moody 1953: 51):
Omnis particularis negativa vera, ex eo est vera ex quo universalis affirmativa sibi
contradictoria est falsa.
[Every true particular negative is true on the grounds that the universal affirmative
which is its contradictory is false.]
But Buridan does not go so far as to state that particular negatives are
vacuously true when the subject class is empty. Moody’s statement that
‘existential import was considered to belong only to affirmative sentences’
is, therefore, based on his interpretation, not on textual evidence. One may
wonder why the medieval authors did not put into words what Moody takes
them to imply. Did they baulk at the idea of having to call a sentence like Some
unicorns are not animals true despite the absence of unicorns in this world or
is this simply not what they implied?
Abelard, as shown in Section 5.3, solved the predicament, in the early
twelfth century, by giving up the Conversions and letting the O-corner keep
its existential import. In this he was followed by Walter Burleigh in his De
Puritate Artis Logicae, written around 1328 in reply to Ockham’s Summa
Logicae, which was written around 1323. For Burleigh, as is shown in Section
9.4.2, an A-type sentence like Every man who has a son loves him and its
corresponding I*-type Some man who has a son does not love him are not
contradictories but only contraries.
In any case, Moody’s skewed interpretation of medieval logic is now widely
taken to reflect historical reality, especially in North America. Many or most
American authors now take it for granted that it was ‘the general medieval
view that affirmative sentences are false if their subjects are empty, whereas
negative sentences are true if their subjects are empty’ (King 2005: 266). Yet
never is one presented with an actual reference to that effect: it very much
looks as if Moody has been believed on the strength of his authority, not on
the strength of actual evidence.
Nor, it seems, on the strength of actual insight. When providing the truth
conditions for the four Aristotelian sentence types, King, in his otherwise
excellent article, unwittingly assigns the same conditions to ¬I as he does to
I*, which suggests a less than full grasp of the issue. Yet the issue seems to
weigh on his mind, or else it is hard to see why he should start his enumera-
tion of truth conditions for the four types with an I*-sentence whose subject
term is empty (King 2005: 266):
Truth conditions for assertoric present-tense categorical sentences are straight-
forward. For instance, the particular negative sentence ‘Some vampires are-not
friendly’ is true just in case what ‘friendly’ personally supposits for, namely people
who are friendly, does not include anything—note the negative copula—for which
‘vampire’ personally supposits. Universal affirmatives (‘Every S is P’) are true when
everything their subjects supposit for their predicates also supposit for; particular
affirmatives (‘Some S is P’) when their predicates supposit for at least one thing their
subjects supposit for; universal negatives (‘No S is P’) when the predicate does not
supposit for anything the subject supposits for.
Parsons (2008: 5) does come up with something that looks like evidence,
quoting a passage from Ockham, who lived in the fourteenth century. This
passage, however, occurs in a wider context which I quote here more fully,
both in Latin and in my English translation (Ockham, Summa Logicae II.3).
The text opens Chapter II.3 of Ockham’s Summa Logicae, entitled ‘What is
required for the truth of propositions that are both indefinite and particular’
(I have italicized the parts that are quoted by Parsons):8
Viso quid sufficit ad veritatem propositionis singularis, videndum est quid requiritur
ad veritatem propositionis indefinitae et particularis.
Et est primo sciendum quod si non vocetur propositio indefinita nec particularis nisi
quando terminus subiectus supponit personaliter, tunc semper indefinita et particularis
convertuntur, sicut istae convertuntur ‘Homo currit’, ‘Aliquis homo currit’; ‘Animal est
homo’, ‘Aliquod animal est homo’; ‘Animal non est homo’, ‘Aliquod animal non est
homo’. Et ad veritatem talium sufficit quod subiectum et praedicatum supponant pro aliquo
eodem, si sit propositio affirmativa et non addatur signum universale a parte praedicati;
quod dico propter tales ‘Aliquod animal est omnis homo’, ‘Aliquis angelus est omnis
angelus’. Sed si talis sit negativa, requiritur quod subiectum et praedicatum non
supponant pro omni eodem, immo requiritur quod subiectum pro nullo supponat, uel
quod supponat pro aliquo pro quo praedicatum non supponit. Et hoc quia ad veritatem
talium sufficit veritas cuiuscumque singularis. Sicut ad veritatem istius ‘Aliquod animal
est homo’ sufficit veritas istius ‘Hoc animal est homo’ vel ‘Illud animal est homo’;
similiter ad veritatem istius ‘Animal non est homo’ sufficit veritas istius ‘Hoc animal
non est homo’, quocumque demonstrato. [ . . . ] Et ideo si nullus homo nec aliquod animal
sit nisi asinus, haec consequentia non valet ‘Homo non est asinus, igitur aliquod animal non
est asinus’. Similiter non sequitur ‘Homo albus non est animal, igitur homo non est
animal’ nisi ista propositio sit vera ‘Homo albus non est homo’. Tamen affirmative bene
sequitur [ . . . ] quia semper, sive homo sit animal sive non, bene sequitur ‘Homo currit,
igitur animal currit’, similiter bene sequitur ‘Homo albus est animal, igitur homo est
animal’, sive homo sit albus sive non. Sic igitur patet quomodo indefinita vel particularis
est vera si subiectum supponat pro aliquo pro quo non supponit praedicatum. Hoc
tamen non semper requiritur, sed quandoque sufficit quod subiectum indefinitae vel
particularis negativae pro nullo supponat. Sicut si nullus homo sit albus, haec est vera
‘Homo albus non est homo’, et tamen subiectum pro nullo supponit quia nec pro
substantia nec pro accidente.
[Now that we have seen what suffices for the truth of singular propositions, let us see
what is required for the truth of propositions that are both indefinite and particular.
First we note that, if the label ‘indefinite’ or ‘particular’ is assigned to a proposition
only when the subject term has individual reference, then indefinites and particulars
are always interchangeable, as in ‘A man runs’ and ‘Some man runs’; ‘An animal is a
man’, ‘Some animal is a man’; ‘An animal is not a man’, ‘Some animal is not a man’.
And it is sufficient for the truth of such propositions that the subject term and the
predicate refer to some same thing, if the proposition is affirmative and no universal
sign is added to the predicate; I am saying this because of examples like ‘Some animal
is every man’, ‘Some angel is every angel’. But if such a proposition is negative, it is
required <for its truth> that the subject and the predicate do not refer to all
identicals—that is, it is required either that the subject refers to nothing or that it
refers to something to which the predicate does not refer. And this is because it suffices
for the truth of such propositions that at least one, no matter which, singular
proposition is true. Just as it suffices for the truth of ‘Some animal is a man’ that
‘This animal is a man’ or ‘That animal is a man’ is true, in like manner it suffices for
the truth of ‘An animal is not a man’ that there be some, no matter which, animal that
can be actually pointed at, of which ‘This animal is not a man’ be true. [ . . . ] And
therefore, if there are no men and no animals except a donkey, the following argument is
not valid: ‘A man is not a donkey; therefore, some animal is not a donkey’. In the same
way, the following is not valid ‘A white man is not an animal; therefore, a man is not an
animal’, unless it is also true that ‘A white man is a man’. Yet these arguments are valid
in the affirmative form [ . . . ] because, whether a man is an animal or not, the
argument ‘A man runs; therefore, an animal runs’ is always valid. Or, to take another
example, ‘A white man is an animal; therefore, a man is an animal’ is a valid argument,
whether a man is white or not. Thus it is clear why an indefinite or particular
proposition is true in case the subject refers to something to which the predicate
does not. But this is not always required, for sometimes it suffices for the truth of a
negative indefinite or particular proposition that the subject refers to nothing. For
example, when there are no white men, ‘A white man is not a man’ is true even though
the subject refers to nothing because it refers neither to a substance nor to an
accident.]
I apologize for this very long quote, but it is necessary because it places the
quote given by Parsons in an entirely different light. In this passage, Ockham
appears to be making an attempt at incorporating a logic of indefinites into
that of the Square, as one sees from the opening sentence, where he does not
speak simply of particulars but of propositions that are both indefinite—that
is, ‘generic’, without a quantifier—and particular. Ockham systematically
distinguishes indefinites (generics) from particulars in that he uses the bare
noun (adorned with the indefinite article in the translation) in indefinites but
the noun preceded by aliquod or aliquis (‘some’) in particulars. In other
words, Parsons misquotes Ockham when he translates ‘ad veritatem talium
sufficit’ as ‘it is sufficient for the truth of [a particular] proposition’ because
talium (such) does not simply stand for particular propositions but for
propositions that are both indefinite and particular.
We remember from Section 5.2.1 that Aristotle spends some time over
indefinites (adióristoi) in the first chapter of On Interpretation, without
coming anywhere near a logic for them. Apparently, Ockham is now at
pains to achieve what Aristotle had failed to do. It is clear from the text that
Ockham withholds existential import from internally negative indefinites, as
he considers A man is not a donkey true in cases where there are no men but
there is a donkey, which in itself is quite reasonable (A unicorn is not a donkey
is reasonably called true in the actual world). In fact, all his examples where
internal negation yields truth when the subject class is empty are examples of
indefinites, not of particulars.
But it is also clear that he is pushing for an identification of indefinites and
particulars: ‘indefinites and particulars are always interchangeable’. And the
last-but-one sentence of the quote reads: ‘But this is not always required, for
sometimes it suffices for the truth of a negative indefinite or particular
proposition that the subject refers to nothing.’ Apparently, Ockham has
some qualms about this, because he continues: ‘For example, when there
are no white men, “A white man is not a man” is true even though the subject
refers to nothing’, giving again an example of an indefinite, and not a
9 King (2005: 243) writes ‘Ockham’s Summa Logicae (The Logic Handbook), written ca. 1323, is a
manifesto masquerading as a textbook’, implying that the book was meant to promote Ockham’s
grand nominalist philosophical conception. Be that as it may, it is clear that the Summa Logicae does
not represent standard views that were taught in the Arts faculties of medieval universities.
10 The text, which is itself a translation from the German original, reads ‘empirically valid and
universally valid judgements’, which makes no sense; I have restored the text according to the only
possible intended meaning.
Again, one sees that this philosopher rejects the Horn–Parsons analysis,
because it clashes not only with ordinary usage but also with syllogistic
reasoning.
This, more or less random, selection of texts rather bodes ill for Parsons’
(and Horn’s) contention that ‘For most of this history, logicians assumed that
negative particular propositions (“Some S is not P ”) are vacuously true if
their subjects are empty’.
But, history aside, does it make semantic or logical sense to deny the O-
corner existential import? At first sight one might think it does, because if the
A-corner has existential import, one would expect it to follow that ¬A, and
thus its alleged equivalent I*, do not. Yet a little reflection will show that this
makes the semantics of the natural-language existential and universal quan-
tifiers SOME and ALL inconsistent in a strictly extensional system. According to
Horn and Parsons, existential import is induced by the universal and existen-
tial quantifiers only when they combine with a nonnegative predicate G but
not when they combine with a negative predicate not-G. This would make
their satisfaction conditions dependent on the lexical choice of the main
predicate. But in extensional predicate logic the main predicate is represented
by a lexical variable—in this case, G or not-G—and should, therefore, have
no bearing on the logical properties of the operators that define the logic. The
Horn–Parsons position has the extraordinary implication that, for example,
Some men are bachelors has existential import, but Some men are not married
hasn’t, or that All John’s views are erroneous has existential import, but All
John’s views are not right hasn’t!
Logically speaking, Parsons’ proposal amounts to a VS-model and an
octagon as shown in Figure 5.5. Here one sees that, in a state of F-lessness,
A and I are taken to be false but A* and I* true. This has the surprising effect
that the corresponding octagon remains identical with that holding for
traditional ABPC, as is easily checked when one compares Figure 5.5b with
Figure 5.2b. It may well have been this fact that has motivated Parsons and
[Figure: (a) the VS-model for [[F]] = Ø; (b) the corresponding octagon of logical relations]
FIGURE 5.5 The predicate logic proposed by Parsons
those he follows to propose the view that the Square can be saved by assuming
that the affirmatives have existential import while the negatives do not. Yet
there is a big problem, in that this interpretation of ABPC makes it impossible
to provide consistent definitions for the two quantifiers ∀ and ∃, or, in
traditional terms, for the words ALL and SOME.
This is easily shown. If, as Parsons does, one takes A, or ALL F is G, to be
true just in case [[F]] ≠ Ø and [[F]] ∩ [[not-G]] = Ø, then A*, or ALL F is NOT-G,
must be taken to be true just in case [[F]] ≠ Ø and [[F]] ∩ [[G]] = Ø, which
gives both A and A* existential import. Likewise, if I, or SOME F is G, is taken
to be true just in case [[F]] ∩ [[G]] ≠ Ø, then I*, or SOME F is NOT-G, must be
taken to be true just in case [[F]] ∩ [[not-G]] ≠ Ø, which gives both I and I*
existential import. The reason for this is that ∀ (ALL) and ∃ (SOME) are binary
higher-order predicates expressing relations between two sets X and Y, no
matter which. Therefore, it makes no difference whether X or Y is character-
ized by positive or by negative satisfaction conditions. Yet Parsons defines A
as requiring that [[F]] ≠ Ø and [[F]] ∩ [[not-G]] = Ø, as above, but A* as requiring
that [[F]] = Ø or [[F]] ∩ [[G]] = Ø, which precludes one single definition for ∀
(ALL). Likewise, Parsons defines I as requiring that [[F]] ∩ [[G]] ≠ Ø, as above,
but I* as requiring that [[F]] = Ø or [[F]] ∩ [[not-G]] ≠ Ø, which again precludes
one single definition for ∃ (SOME). In other words, ALL and SOME have become
ambiguous between those cases where the G-predicate is positive and those
where it is negative. In the latter case it (arbitrarily) loses its natural existential
import. Parsons and company thus buy the logic of the Square of Opposition
but have to live with ambiguous quantifiers.
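The ambiguity can be made concrete with a small sketch of my own (not anything from Parsons), modelling the extensions [[F]] and [[G]] as Python sets. Under the Square-preserving truth conditions, I* cannot be obtained by applying one and the same set-theoretic relation SOME to the subject set and the complement of the predicate set:

```python
def some_uniform(X, Y):
    """One single classical definition of SOME: the two sets overlap."""
    return bool(X & Y)

def i_parsons(F, G, domain):
    """Square-preserving condition for I (SOME F is G): F and G overlap."""
    return bool(F & G)

def i_star_parsons(F, G, domain):
    """Square-preserving condition for I* (SOME F is NOT-G):
    vacuously true when F is empty, else F overlaps the complement of G."""
    return not F or bool(F & (domain - G))

domain = {1, 2, 3}
F, G = set(), {1, 2}            # empty subject class, [[F]] = Ø

# A uniform SOME, applied to F and the complement of G, is false here,
# but the Square-preserving I* is true: SOME is not one single relation.
print(some_uniform(F, domain - G))    # False
print(i_star_parsons(F, G, domain))   # True
```

With a nonempty subject class the two conditions coincide; the divergence appears exactly when [[F]] = Ø, which is the point made above.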
This forces Parsons to give up the classic notion of quantifier and thus to
reject both the traditional Aristotelian syntactic template (NOT) ALL/SOME F is
(NOT-)G and the modern notation, whether in the Russellian or in the
generalized-quantifier format. And this is exactly what he does. Parsons
(2006, 2008) falls back on the device of ‘quantification of the predicate’
discussed in Section 3.4.2 in connection with Hamilton’s predicate logic.
(This device was also known to Ockham, as one sees from the long quote
given above from Ockham’s Summa Logicae.) Like Moody and Thompson,
however, he dismisses naturalness as a criterion (Parsons 2008: 5):
What is important is that the logical notation be coherent and useful. If it does not
perfectly match the usage of ordinary language, that is not on its own important for a
system of logic. Indeed, if you are sure that ordinary language universal affirmatives
should be false when their subject term is empty, then you may represent that fact by
translating them into modern logical notation adding a conjunct. Instead of
symbolizing ‘Every A is B’ by ‘∀x(Ax → Bx)’, symbolize it as ‘∃xAx & ∀x(Ax → Bx)’.
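That the added conjunct does yield existential import is easy to verify mechanically. The following is my own check over a finite domain, not anything from Parsons:

```python
def every_material(domain, A, B):
    """'Every A is B' as ∀x(Ax → Bx): vacuously true when A is empty."""
    return all(x not in A or x in B for x in domain)

def every_with_import(domain, A, B):
    """'Every A is B' as ∃xAx & ∀x(Ax → Bx): the extra conjunct
    makes the sentence false when the subject term is empty."""
    return any(x in A for x in domain) and every_material(domain, A, B)

domain = {1, 2, 3}
A, B = set(), {1}                      # empty subject term

print(every_material(domain, A, B))    # True: vacuously true
print(every_with_import(domain, A, B)) # False: existential import fails
```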
Yet he does not mention the disastrous consequences this has for the natural-
ness of the logic when this proposal is combined with the requirement that
the Conversions stay intact. Horn and like-minded pragmaticists should
realize that this does not help them at all.
In this context, Parsons devises an entirely new logical language which is
severely at odds with natural language, both syntactically and semantically,
and where SOME and ALL are defined not as words but syncategorematically,
and in terms not just of what may be the case ‘in the model’ but also of ‘truth
under an assignment’. Following Klima (1988), Parsons then introduces an
artificial ‘zero element’ which, arbitrarily, makes ALL F is G and SOME F is G
false but ALL F is NOT-G and SOME F is NOT-G true. I will not here comment on
the logical merits of this proposal, since it is irrelevant for the natural logic of
language and cognition. All I wish to note here is that proposals to the effect
that only the Aristotelian affirmatives have existential import while the
negatives do not are doomed to take leave from the world of natural intui-
tions and to disappear into abstract logical space.
During the Middle Ages, one witnesses not only a genuine professional
interest in logic, both syllogistic and predicate calculus (as shaped by
Boethius), but also a dramatic increase in mystic, kabbalistic and other occult
exercises, all of a well-defined formal nature, connected with letters of the
alphabet, geometrical figures, esoteric symbols, heavenly bodies, and num-
bers. One thinks, for example, of the mysticism woven around the Fibonacci
numbers and the golden ratio, also known as the golden section or the divine
proportion—a mysticism that has been of all times but was particularly
strong during the Middle Ages.
Syllogistic and predicate logic, with their nice geometrical designs, likewise
offered a wonderful opportunity for mystics and kabbalists to look for hidden
messages sent down from the clouds of eternity. A good example is the
Catalan Ramon Llull (1232–1316), who developed a system based on a
nonagon (reminiscent of Aristotle’s nine categories) listing the attributes of
God: goodness, greatness, eternity, power, wisdom, will, virtue, truth, and
glory. From these he meant to derive deductively all important eternal truths,
especially the Christian dogmas. The nonagon was realized in four different
‘figures’, the first of which was designed to generate the Aristotelian syllo-
gisms. The other three had different functions, all of a ‘cosmic’ nature (Eco
1995: 56–64; Wikipedia s.v. Ramon Llull).
Tellingly, the Arabic word for logic is manṭiq, from the Greek word mantikē
(prophecy, fortune-telling). For a long time, logic and kabbalistic mysticism
were closely linked—a fact which, paradoxically, strongly contributed to the
increased popularity and prestige of logic. Once the political shackles of the
Roman Empire had been shed and Christianity had gained the ascendancy,
esoteric formal symbolisms began to cast their spell not only on the common
folk, as in late Antiquity, but also on the educated classes and the lay rulers.
And most of these were fascinated more by the mystical depths attributed to
the formal systems than by their purely intellectual content (to the extent that
there was any).
In modern times, we have, by and large, managed to banish occultism from
mathematics, logic, and science and to a large extent also from the prevailing
social norm system, if not quite from actual practice.11 But the irony is that
11 One should realize that even reputable academics have, at times, indulged in kabbalistic and
similar occult exercises. Mark Alford writes the following about Isaac Newton (Alford 1995):
In fact, Newton was deeply opposed to the mechanistic conception of the world. A secretive alchemist and heretical
theologian, he performed countless experiments with crucibles and furnaces in his Cambridge chambers, analyzing
the results in unmistakably alchemical terms. His written work on the subject ran to more than a million words, far
more than he ever produced on calculus or mechanics. Obsessively religious, he spent years correlating biblical
the ‘scientific attitude’, which began to be manifest during the late Middle
Ages and has dominated Western culture ever since, may well owe its growth
and its enormous influence in large part to the very craving for hidden truths
and for contact with the supernatural that it so successfully managed to get
rid of.
on the mystery of the Trinity. A century after his death, under the influence of
Thomas Aquinas, the Church reversed its position on the legitimacy of
theology and in particular the study of the Trinity, but Abelard was never
duly acknowledged, let alone rehabilitated.12
Apart from all this, however, Abelard was also a consummate logician, who
was probably the first, after Aristotle, to be aware of the problem of undue
existential import in ABPC. He proposed a solution which is not only as
sound as it is simple but also reflects Aristotle’s original intention: dissolve the
Conversions into one-way entailments from A to ¬I* and from A* to ¬I. Yet,
perhaps because the Church was keen to erase Abelard’s heritage from history,
this solution never came to the surface. It has played virtually no role in the
tradition of logic, where, to the extent that its existence was acknowledged, it
was not understood and hence misrepresented.13
One sometimes finds, as, for example, in Horn (1997) and Parsons (2006)
discussed in Section 5.2.4, Abelard discredited with the view that I*-sentences
have no existential import, which would mean that the existential quantifier is
taken to induce existential import when followed by a positive, but not when
followed by a negative matrix predicate. If Abelard had indeed held this view,
he would be subject to the same criticism as has been voiced in Section 5.2.4
with regard to Horn’s and Parsons’ description of ABPC. For Abelard, how-
ever, what is at issue is not whether I*-sentences have existential import
(which he was sure they have) but whether ¬A-sentences have it. One should
realize that in SMPC ¬A-sentences do have existential import, while in ABPC
there is a clash of truth values when ¬A-sentences are subjected to this
question. It is the denial of existential import to ¬A-sentences, and the
attribution of existential import to I*-sentences, that makes Aristotle’s and,
after him, Abelard’s logic worthwhile and saves it from inconsistency. It has,
apparently, been hard for logicians to see that the Conversions are far from
unassailable.
Kneale and Kneale (1962) also mention Abelard’s solution and, apparently,
likewise fail to see his point. They do, however, acknowledge Abelard’s
importance for the history of logic, as appears from the following passage
(Kneale and Kneale 1962: 204; translation between square brackets mine):
12 One may consult McLeod 1971, Gilson 1978, or Clanchy 1997 for authoritative and highly readable
accounts of this dramatic episode in human history.
13 In Seuren (2002) a solution is proposed for undue existential import that is identical to
Abelard’s. I named it Revised Aristotelian predicate calculus or RAPC. At the time, I was not aware of
the fact that Abelard had proposed an identical solution nine hundred years earlier. To give Abelard his
due, I have renamed the system in question Aristotelian-Abelardian predicate calculus.
Abelard’s mind was the keenest (though not in all respects the most admirable)14 that
had been devoted to the subject for more than a thousand years, and he approached
his task with the belief that it was still possible to make discoveries: ‘Non enim tanta
fuit antiquorum scriptorum perfectio ut non et nostro doctrina indigeat studio, nec
tantum in nobis mortalibus scientia potest crescere ut non ultra possit augmentum
recipere’ (De Rijk 1956: 535). [For the perfection of the ancient writers was not such
that their doctrine could not profit from our investigations, nor is it possible for
science to grow to such an extent in us mortals that it can no longer be improved.]
What Abelard proposed was that, for cases where [[F]] = Ø, A- and A*-
sentences, as well as I- and I*-sentences, should be considered false, while
their negations should be considered true (Kneale and Kneale 1962: 210–11):
while he [¼Abelard] admits that Nullus homo est albus [No human is white] can be
regarded as the contradictory of Quidam homo est albus [Some human is white] because
nullus is merely an abbreviation of non ullus, he now refuses to allow that Quidam homo
non est albus [Some human is not white] is the contradictory of Omnis homo est albus
[All humans are white], as Boethius had maintained, and says that Aristotle dealt with
the question more subtly when he offered Non omnis homo est albus [Not all humans are
white] as the contradictory. It is true, of course, that Aristotle wrote Greek words
corresponding to Non omnis homo est albus, but it seems clear that he did not intend
to convey by these words anything different from the doctrine later attributed to him by
Boethius. Abelard, on the other hand, thinks that Non omnis homo est albus is
something distinct in meaning from the particular negative proposition Quidam
homo non est albus, and therefore outside the usual scheme of four categorical forms.
His reason for introducing this complication is that he assumes existential import for
Omnis homo est albus, though apparently not for Nullus homo est albus. The assumption
seems curious after his explicit statement that the word est occurring as pure copula
involves no assertion of existence; but there can be no doubt of his doctrine on this
point, since he insists that even the seeming tautology Omnis homo est homo would be
false if there were no men: ‘Cum autem Quidam homo non est homo semper falsa sit
atque Omnis homo est homo homine non existente, patet simul easdem falsas esse: unde
nec recte dividentes dici poterunt’ (De Rijk 1956: 176) [But since Some human is not
human is always false and All humans are humans is false when no humans exist, it
follows that these two can be false at the same time, so that it must be incorrect to call
them contradictory opposites]. We must therefore suppose that in his view it is the
word omnis which introduces existential import.15
14 One wonders about the relevance, or indeed the stringency, of the Kneales’s moral reservations.
This remark about Abelard’s allegedly not so admirable ‘mind’ is best taken as a late reflection of the
biographical and other historical facts mentioned at the outset of the present section. The Kneales
ought to have known better.
15 On Abelard’s doctrine of the existential value of the copula verb esse, see Rosier-Catach (2003 a).
Since Rosier-Catach’s meticulous analysis shows that Kneale and Kneale appear to be right on this
This is a curiously interesting passage, since it shows not only that Kneale
and Kneale, with all their barely hidden irony, failed to see Abelard’s point—a
failure they share with the entire logical tradition—but also, more important-
ly, that Abelard’s analysis is perfectly coherent and, as was shown in the
preceding section, in accordance with what can be gathered from Aristotle’s
own text, despite Kneale and Kneale’s assurance to the contrary. Since Abe-
lard’s proposal is in full agreement with Aristotle’s text, it is legitimate to
speak of ARISTOTELIAN-ABELARDIAN PREDICATE CALCULUS, or AAPC. But let us turn
to Abelard’s own texts so as to unravel what he himself actually proposed.
In his Dialectica, his main work on logic (full edition De Rijk 1956), Abelard
is at pains to distinguish as clearly as possible between the external (‘pre-
posed’) negation, which always creates contradiction, and the internal (‘inter-
posed’) negation, which creates contrariety under the universal quantifier. We
read, for example (De Rijk 1956: 177; translation mine):
The preposed negation thus has a different logical power from the interposed negation. A
sentence that says Every human is not white is not equivalent with Not every human is white,
and Some human is not white says something different from Not any human is white.16
In this context Abelard argues, in accordance with the Stoics, that the only
real guarantee for pure contradiction is the preposing of the negation and
not the insertion of an internal negation combined with a change from universal
to existential quantifier or vice versa (De Rijk 1956: 176; translation mine):
Similarly for categorical propositions, where the only real truth-value-inverting
(dividens) contradiction of any arbitrary positive proposition appears to be the one
complex issue, we subscribe to their conclusion that, for Abelard, ‘it is the word omnis [and not the
suppletive copula verb esse ; PAMS] which introduces existential import’. One will note that, according
to the analysis presented in the present book, existential import is not induced by any quantifier either
but by the extensionality of argument-term positions under any given predicate (see Section 10.7 and
Section 3.5.1 in Volume I).
16 It is not clear to what extent Abelard was aware of the distinction between sentences and their
underlying L-propositions (logical form), though valuable details for a reconstruction are presented in
Rosier-Catach (1999, 2003b, 2003c). Had he used a modern European language, he would have noticed
that in sentences with a definite, non-quantified subject term the external (sentential) negation is not
‘preposed’ but woven into the surface sentence (in many languages in construction with the finite verb
form). But he used Latin, where the logically external negation can always be literally preposed, that is,
placed at the beginning of the surface sentence, although most of the times it does not actually occupy
that position. His insistence on the syntactic distinction between preposed (external) and interposed
(internal) negation suggests that he was thinking in terms of a logical language with both
propositional and predicate variables, combining Aristotelian and Stoic logical analyses.
that has the negation preposed to it so that all its entailments are lost (totam eius
sententiam destruit). For example, the contradictory of Every human is human is Not
every human is human, and not Some human is not human, since there are situations
where the first and the third are simultaneously false. For when not a single human exists,
both of these two propositions are false: Every human is human and Some human is not
human. [ . . . ] The proposition Some human is not human [ . . . ] is always false. For what
it says is totally impossible: it cannot be the case and nature can offer no instance of it.
[ . . . ] In no situation can the same thing be both human and not human at the same
time. For it is a well-nigh eternal law that what is not included under negation is excluded
under it. [ . . . ] But since Some human is not human is always false and All humans are
humans is false when no humans exist, it follows that these two can be false at the same
time, so that it must be incorrect to call them contradictory opposites.
[Figure 5.6 (diagrams not reproduced): panels (a) and (b) plot the sentence types A, ¬A, I, ¬I, A*, ¬A*, I*, ¬I* over the spaces 1–4, with space 4 reserved for the situations where [[F]] = Ø.]
[Diagrams not reproduced: the octagon has the vertices A {1}, I {1,2}, I* {2,3}, A* {3}, ¬A {2,3,4}, ¬I {3,4}, ¬I* {1,4} and ¬A* {1,2,4}, linked by C (contraries), SC (subcontraries) and CD (contradictories).]
FIGURE 5.7 The octagonal (a) and the square (b) representation for AAPC
Aristotelian sentence types are made false in AAPC by the situations in space 4. (One remembers from Section 4.2.4 that space 1 covers the situations where [[F]] ⊆ [[G]], space 2 covers those where [[F]] and [[G]] intersect partially or [[G]] ⊂ [[F]], and space 3 those where [[F]] and [[G]] are mutually exclusive.)
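The division of U into these four spaces, and the AAPC truth conditions that go with them, can be replayed mechanically. The following sketch is not part of Seuren's apparatus: it encodes predicate extensions as Python frozensets over a three-element OBJ, classifies every pair ⟨[[F]], [[G]]⟩ into one of the four spaces, and recomputes the valuation spaces of Figure 5.6b:

```python
from itertools import combinations

def subsets(s):
    # all subsets of s as frozensets
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def space(OBJ, F, G):
    # the four mutually exclusive spaces of the VS-model
    if not F:
        return 4                   # [[F]] = Ø
    if F <= G:
        return 1                   # [[F]] included in [[G]]
    if F & G:
        return 2                   # partial intersection, or [[G]] inside [[F]]
    return 3                       # [[F]] and [[G]] mutually exclusive

# AAPC truth conditions: the universal forms carry existential import,
# so A and A* come out false whenever [[F]] = Ø
def A(OBJ, F, G):     return bool(F) and F <= G           # ALL F is G
def I(OBJ, F, G):     return bool(F & G)                  # SOME F is G
def Astar(OBJ, F, G): return bool(F) and F <= OBJ - G     # ALL F is NOT-G
def Istar(OBJ, F, G): return bool(F & (OBJ - G))          # SOME F is NOT-G

OBJ = frozenset({1, 2, 3})
VS = {name: set() for name in ('A', 'I', 'I*', 'A*')}
for F in subsets(OBJ):
    for G in subsets(OBJ):
        sp = space(OBJ, F, G)
        for name, sent in (('A', A), ('I', I), ('I*', Istar), ('A*', Astar)):
            if sent(OBJ, F, G):
                VS[name].add(sp)

# exactly the valuation spaces read off Figure 5.6b
assert VS == {'A': {1}, 'I': {1, 2}, 'I*': {2, 3}, 'A*': {3}}
```

A three-element OBJ already suffices to populate all four spaces; larger universes change nothing.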
Given the logical relations as defined in (4) of Section 2.3.3, one reads
from Figure 5.6b that the Conversions have been replaced with one-way
entailments from A to ¬I* (or from I* to ¬A) and from A* to ¬I (or from I
to ¬A*), since /A/ ⊂ /¬I*/ and /A*/ ⊂ /¬I/, but not vice versa.17 Then, the
subaltern entailments from A and A* have been preserved, since /A/ ⊂ /I/
and /A*/ ⊂ /I*/.18 The subcontrariety between I and I* has been lost, but I and
¬A are still subcontraries, since /I/ ∪ /¬A/ = U (and, of course, analogously
for I* and ¬A*).19 In addition, the number of contrary pairs has increased.
17 /A/ = {1} and /¬I*/ = {1,4}; therefore /A/ ⊂ /¬I*/; therefore A ⊢ ¬I* but not vice versa. Then, /A*/
= {3} and /¬I/ = {3,4}; therefore /A*/ ⊂ /¬I/; therefore A* ⊢ ¬I but not vice versa.
18 /A/ = {1} and /I/ = {1,2}; therefore /A/ ⊂ /I/; therefore A ⊢ I but not vice versa. Then, /A*/ = {3} and
/I*/ = {2,3}; therefore /A*/ ⊂ /I*/; therefore A* ⊢ I* but not vice versa.
19 /I/ = {1,2} and /¬A/ = {2,3,4}; so /I/ ∪ /¬A/ = U; so I and ¬A are subcontraries. Then, /I*/ = {2,3}
and /¬A*/ = {1,2,4}; so /I*/ ∪ /¬A*/ = U; so I* and ¬A* are subcontraries. Note that I and I* are not
subcontraries: {1,2} ∪ {2,3} ≠ U. This means that I and I* are logically independent in this system: they
can be both true, both false, singly true, and singly false.
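The subset and union checks of footnotes 17–19 lend themselves to direct verification. A minimal sketch (the encoding and helper names are mine, not Seuren's):

```python
U = {1, 2, 3, 4}
VS = {'A': {1}, 'I': {1, 2}, 'I*': {2, 3}, 'A*': {3},
      '¬A': {2, 3, 4}, '¬I': {3, 4}, '¬I*': {1, 4}, '¬A*': {1, 2, 4}}

entails       = lambda p, q: VS[p] <= VS[q]       # /p/ ⊆ /q/
subcontraries = lambda p, q: VS[p] | VS[q] == U   # /p/ ∪ /q/ = U
contraries    = lambda p, q: not (VS[p] & VS[q])  # /p/ ∩ /q/ = Ø

# one-way entailments replacing the Conversions (footnote 17)
assert entails('A', '¬I*') and not entails('¬I*', 'A')
assert entails('A*', '¬I') and not entails('¬I', 'A*')
# subaltern entailments preserved (footnote 18)
assert entails('A', 'I') and entails('A*', 'I*')
# I/¬A and I*/¬A* are subcontrary pairs; I and I* no longer are (footnote 19)
assert subcontraries('I', '¬A') and subcontraries('I*', '¬A*')
assert not subcontraries('I', 'I*')
# and the number of contrary pairs has increased, for example:
assert contraries('A', 'A*') and contraries('A', 'I*')
```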
[Diagrams not reproduced: panel (a) is the two-space VS-model (space 1: [[F]] ≠ Ø; space 2: [[F]] = Ø); panel (b) the corresponding octagon, with /A/ = /I/ = {1}, /¬A/ = /¬I/ = {2}, /¬I*/ = /¬A*/ = {1,2} and /A*/ = /I*/ = Ø, linked by C and CD relations.]
FIGURE 5.8 VS-model and octagon for ALL F IS F and so on
The corresponding octagonal graph is shown in Figure 5.7a, where the valua-
tion spaces of Figure 5.6b are given for each vertex. A comparison with the
graph-theoretically complete octagonal graph for ABPC of Figure 5.2b above
shows that AAPC is less powerful than ABPC but a great deal more powerful
than SMPC, whose poverty is shown in Figure 4.6b.
Abelard devotes some space to sentences where the F- and G-predicates are
identical, as in his examples EVERY human is human, SOME human is NOT-human, and so on. In such sentences, AAPC remains intact, as it should, but
acquires a number of extra entailment relations, many of which are counter-
intuitive. Since the extensions of the F- and G-predicates coincide, the spaces
1 and 2 of Figure 5.6b collapse into one space. Space 3 disappears, as there can
be no difference between two identical predicate extensions, and space 4
remains intact, leaving just two spaces, one for [[F]] ≠ Ø and one for [[F]] = Ø, as shown in Figure 5.8a, which returns the octagon of Figure 5.8b. In Figure
5.8a, A* and I* are not represented, since in AAPC sentences of the form ALL F
is NOT-F and SOME F is NOT-F are necessarily false, as noticed by Abelard. (It is
an interesting exercise to read Abelard’s text quoted above with the Figures 5.7
and 5.8 at hand.)
The octagonal graph of Figure 5.8b is graph-theoretically complete, in
that there is at least one, sometimes two, logical relations between every two
vertices, but the important point is that all logical relations of AAPC, as
Aristotle, the commentators, and Abelard 179
shown in Figures 5.5b and 5.6a, are preserved and in some cases reinforced.
Figure 5.8b just has a much richer supply of logical relations than ordinary
AAPC, though the relations it has in excess of those of ordinary AAPC are
largely counterintuitive, owing to the fact that in AAPC with two identical
predicates sentences of the types A* and I* are necessarily false and thus
have a null VS. That being so, they mathematically entail any arbitrary
sentence—a form of entailment that counts as nonnatural and counterintu-
itive, as is argued in Chapter 3. At the same time, they are contrary with
regard to any arbitrary sentence, since they will never be true together with
any sentence, including themselves. The combination of entailment and
contrariety is, of course, highly counterintuitive. But then, predicate calcu-
lus with two identical predicates is something of a logician’s prank and is not
part of ordinary language or cognition. One remembers that one of the
principles of natural set theory proposed in Chapter 3 specifically rules out
the identity of sets that have been introduced as sets in their own right and
hence the identity of predicate extensions that have been given different
names.
What results from all this is that giving up the Conversions for one-way
entailments and declaring A-type and A*-type sentences false for situations
where [[F]] = Ø results in a better logical and linguistic deal than keeping the
Conversions and declaring A-type and A*-type sentences true in such situa-
tions, as SMPC does.
Why Abelard’s solution has never been incorporated into the logical tradi-
tion is hard to say. In the absence of a specialized study on the Abelard
tradition, one can only guess at the reasons for this historical anomaly.20 In
the context of the present work, Abelard’s contribution is highly significant, as
it brings us one step closer to the isolation of the situations where [[F]] = Ø by
declaring all four Aristotelian sentence types false in those situations.
Yet, even with Abelard’s solution, we are still far removed from an adequate
treatment of the logic of quantification in natural language. Somehow, space 4
must be ‘put on hold’ and treated as being hors concours, but this cannot be
done with the means at our disposal at this point in the exposition. It is done
in Chapter 10, where the principle of strict bivalence is sacrificed in the
20 One can think of a few possible reasons, given his superior intellect, combined with his gift for
debating and ridicule, his insistence on rational as opposed to mystic thinking, his frequent clashes
with official theology, and his defiant attitude with regard to the stifling moral prescriptions imposed
and enforced by the Church, which twice condemned him for heresy.
Mauritius as it was before the year 1700, when there were still dodoes alive. It
must, therefore, be considered to be a rightful member of the class of A-
sentences even if the F-class, the class of dodoes, happens not to be instantiated
in the actual world as it is now. No logic can afford to leave such sentences out
of account: U is the set of ALL possible situations, not just of those that happen
to have a nonnull F-class. Nor can the predicate variable F be restricted to
those predicates that happen to have a nonnull extension. Space 4, therefore,
reserved for situations where [[F]] = Ø, cannot be eliminated from the system.
It can, however, be isolated and ‘put on hold’. For one thing, space 4 is special in
that it is only needed for the purpose of catering for situations where [[F]] = Ø.
This is shown as follows. For a situation sit to belong to space 1 or 2, it is necessary
that [[F]] ≠ Ø and [[G]] ≠ Ø, since both spaces make I-sentences true, which
require a nonnull intersection of [[F]] and [[G]]. For sit to belong to space 3, which
houses the situations where SOME F is NOT-G is true, [[F]] and [[Ḡ]] must be
nonnull, but [[G]] is free: either [[G]] = Ø or [[G]] ≠ Ø, since all that is required
is a nonnull intersection of [[F]] and the complement of [[G]]. Only in space 4 is
it allowed, and necessary, that [[F]] = Ø, while [[G]] may or may not be null.
This insight is not revolutionary. It is widely known that SMPC is the result of
adding situations where [[F]] ¼ to the set of situations in which ABPC is valid.
Yet it is useful to look at this fact a little more closely. Let us speak of FACT 1:
FACT 1
If the set of situations where [[F]] = Ø is disregarded, SMPC is
transformed into traditional Aristotelian-Boethian predicate calculus
(ABPC or the Square) and becomes logically isomorphic with
standard propositional calculus. This provides a wealth of logical
relations that are absent in SMPC.
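FACT 1 can be checked by brute force. In the sketch below (an encoding of mine, not Seuren's), SMPC's universal quantifier lacks existential import, and the subaltern entailment A ⊢ I fails in SMPC precisely because of the situations with [[F]] = Ø:

```python
from itertools import combinations

def subsets(s):
    # all subsets of s as frozensets
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

OBJ = frozenset({1, 2, 3})
situations = [(F, G) for F in subsets(OBJ) for G in subsets(OBJ)]

def A(F, G): return F <= G          # SMPC ALL F is G: no existential import
def I(F, G): return bool(F & G)     # SMPC SOME F is G

# in SMPC the subaltern entailment fails: [[F]] = Ø is a countermodel
assert any(A(F, G) and not I(F, G) for F, G in situations)
# once the situations with [[F]] = Ø are disregarded, A entails I again,
# and with it the Square's wealth of logical relations is restored
assert all(I(F, G) for F, G in situations if F and A(F, G))
```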
FACT 1 is important because there is more. It is not hard to see that, if the set
of situations where [[F]] = Ø can somehow be put on hold, it is only the
VS-extension of A-type or A*-type sentences that is affected. The reason is
simple: space 4 in the VS-model of SMPC (Figure 3.11), which is reserved for the
situations where [[F]] = Ø, only makes sentences of type A and A* true; the I-type
or I*-type sentences still remain restricted to spaces 1, 2, and 3. This gives us FACT 2,
which is likewise not revolutionary, yet much less present in the minds of the
professionals than FACT 1, despite its relevance in the context of the logic of
language:
FACT 2
The reduction of SMPC to ABPC only affects A-type or A*-type sentences and has no effect on the VS-extension of I-type or I*-type sentences.
The functionality of the Square and of BNPC 183
The point we wish to make is that when [[F]] or [[G]] or both have an
extreme value, then the set-theoretic relation between the two sets involved is
fully determined. Hence, the truth value, in SMPC, of quantified L-proposi-
tions in the language of predicate calculus (LPredC) involving the predicates
F and G, with or without the external or internal negation, follows automati-
cally: their truth or falsity is determined by Boolean computation alone,
no further inspection of the world being needed. But when the situation
is such that neither [[F]] nor [[G]] has an extreme value, the truth value of
an L-proposition describing that situation depends on the contingent set-
theoretic relation between [[F]] and [[G]]. The class of situations characterized
by the condition that neither [[F]] nor [[G]] has an extreme value we call the
CLASS OF CONTINGENT (OR MUNDANE) SITUATIONS. The sentences that describe a
contingent situation we call CONTINGENT (OR MUNDANE) SENTENCES.
Some comment is in order. What is said is that in all cases where [[F]] = Ø
or [[G]] = Ø or [[F]] = OBJ or [[G]] = OBJ, the truth value of any L-proposition
in LPredC involving the predicates F and G is fully determined
by Boolean computation. This is so because:
(a) The extreme values Ø and OBJ are instances of Boolean 0 and 1,
respectively.
(b) The quantifiers in SMPC are exclusively defined in terms of the set-
theoretical functions ∩, ∪, and Complement.
(c) The set-theoretical functions ∩, ∪, and Complement correspond to
the Boolean functions of multiplication, addition, and complement,
respectively.
(d) These Boolean functions are defined in terms of the constants 1, 0 and
at most one variable symbol (see Section 2.3.2).
As regards (d), one should note that, as soon as more than one variable
symbol is involved whose value is not Boolean 1 or 0, the values of the
Boolean functions are no longer determined a priori but depend on any
values the variable symbols may have in any application of Boolean algebra
to some domain.
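The correspondence stated in (b) and (c) can be illustrated on indicator vectors; the encoding below is a sketch of mine, not part of the text:

```python
OBJ = [1, 2, 3, 4]                      # a toy universe, in a fixed order
F, G = {2, 3}, {3, 4}

def bits(S):
    # indicator (Boolean) vector of S relative to OBJ
    return [1 if x in S else 0 for x in OBJ]

f, g = bits(F), bits(G)
# intersection = Boolean multiplication, union = Boolean addition (OR),
# complement = Boolean complement, all computed position-wise
assert [a * b for a, b in zip(f, g)] == bits(F & G)
assert [a | b for a, b in zip(f, g)] == bits(F | G)
assert [1 - a for a in f] == bits(set(OBJ) - F)

# with the extreme value [[F]] = Ø (Boolean 0), the outcome is fixed a
# priori, whatever [[G]] happens to be
zero = bits(set())
assert [a * b for a, b in zip(zero, g)] == zero      # Ø ∩ [[G]] = Ø
assert [a | b for a, b in zip(zero, g)] == g         # Ø ∪ [[G]] = [[G]]
```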
From (a)–(d) it follows that, in all cases where [[F]] or [[G]] or both have an
extreme value, the truth value of any quantified L-proposition in LPredC
involving the predicates F and G is computable from these values, while this
is not so when both [[F]] and [[G]] are natural sets in the sense of Chapter 3 and
thus avoid extreme values. The inverse does not hold. For example, when ∀x[Gx,Fx] is true (in SMPC), both [[F]] and [[G]] may be null or nonnull or equal
to OBJ, though not in every combination. The existential quantifier is more
restrictive: when ∃x[Gx,Fx] is true, neither [[F]] nor [[G]] may be null, though
either may be equal to OBJ (provided OBJ ≠ Ø). We thus formulate FACT 3:
FACT 3
When [[F]] = Ø or [[G]] = Ø or [[F]] = OBJ or [[G]] = OBJ, the truth
value of any L-proposition in LPredC involving the predicates F and G is
fully determined by Boolean computation.
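FACT 3 can be tested mechanically: whenever [[F]] or [[G]] has an extreme value, the pair of statuses alone fixes the truth values of ∀x[Gx,Fx] and ∃x[Gx,Fx], with no further inspection of the particular sets. The following sketch (encoding and helper names mine) checks this over a small universe:

```python
from itertools import combinations

def subsets(s):
    # all subsets of s as frozensets
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

OBJ = frozenset({1, 2, 3})
forall = lambda F, G: F <= G        # SMPC reading of ∀x[Gx,Fx]
exists = lambda F, G: bool(F & G)   # SMPC reading of ∃x[Gx,Fx]
status = lambda S: 'null' if not S else ('OBJ' if S == OBJ else 'natural')

# wherever [[F]] or [[G]] has an extreme value, the pair of statuses alone
# already determines both truth values
outcomes = {}
for F in subsets(OBJ):
    for G in subsets(OBJ):
        if status(F) == status(G) == 'natural':
            continue                 # skip the contingent (mundane) situations
        key = (status(F), status(G))
        val = (forall(F, G), exists(F, G))
        assert outcomes.setdefault(key, val) == val

# in the mundane situations, by contrast, the truth values genuinely vary
mundane = {(forall(F, G), exists(F, G))
           for F in subsets(OBJ) for G in subsets(OBJ)
           if status(F) == status(G) == 'natural'}
assert len(mundane) > 1
```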
The reason this is relevant is that the use of an L-proposition of LPredC in
natural language is now seen to be informative only for situations belonging
to the class of contingent situations, that is, for situations where [[F]] ≠ Ø, [[F]] ≠ OBJ and [[G]] ≠ Ø, [[G]] ≠ OBJ. In the other cases, where the truth value simply
follows from an extreme value, it is more informative to specify the status of
[[F]] and [[G]], which gives the set-theoretic relation between them as an
automatic consequence. The relevance of FACT 3, therefore, lies not so much
in the logic itself as in more pragmatic considerations regarding the function-
ality of human language.
FIGURE 6.1 (layout reconstructed; in the original the column-i slot numbers 1, 11, 21, 31, 41, 51, 61, 71 are printed in boldface and larger font). The columns i–x specify the status of [[F]] and [[G]] (‘≠ ev’: no extreme value; ‘Ø’: null; ‘O+’: equal to nonnull OBJ; ‘Oø’: equal to null OBJ); the rows 1–8 specify a set-theoretic relation (⊃⊂: partial intersection; F̄: the complement of F). Each slot is marked ‘?’ (contingent), ‘+’ (necessarily so) or ‘–’ (impossible):

              i      ii     iii    iv     v      vi     vii    viii   ix     x
              F≠ev   F=Ø    F=O+   F≠ev   F=Ø    F=O+   F≠ev   F=Ø    F=O+   F=Oø
              G≠ev   G≠ev   G≠ev   G=Ø    G=Ø    G=Ø    G=O+   G=O+   G=O+   G=Oø
1 F ⊂ G       1 ?    2 +    3 –    4 –    5 –    6 –    7 +    8 +    9 –   10 –
2 G ⊂ F      11 ?   12 –   13 +   14 +   15 –   16 +   17 –   18 –   19 –   20 –
3 F = G      21 ?   22 –   23 –   24 –   25 +   26 –   27 –   28 –   29 +   30 +
4 F ⊃⊂ G     31 ?   32 –   33 –   34 –   35 –   36 –   37 –   38 –   39 –   40 –
5 F ∩ G = Ø  41 ?   42 +   43 –   44 +   45 +   46 +   47 –   48 +   49 –   50 +
6 F̄ ⊂ G      51 ?   52 –   53 +   54 –   55 –   56 –   57 +   58 –   59 +   60 –
7 F̄ = G      61 ?   62 –   63 –   64 –   65 –   66 +   67 –   68 +   69 –   70 +
8 F̄ ⊃ G      71 ?   72 +   73 –   74 +   75 +   76 –   77 –   78 –   79 –   80 –
natural set of objects and thus a proper part of OBJ—the relation of proper
inclusion of [[F]] in [[G]] (F ⊂ G) cannot possibly hold, so that slot 3 must be
marked ‘–’ and thus represents an impossible situation. Or take slot 7, the top
slot in column vii. Here it is specified that [[F]] is a natural set (‘F ≠ ev’) and
[[G]] equals nonnull OBJ (‘G = O+’). That being so, it simply follows that [[F]]
is properly included in [[G]]. One notes that the condition of row 4 ([[F]] and
[[G]] are both natural sets and intersect partially) is compatible only with the
condition of column i and incompatible with those of all other columns.
What does Figure 6.1 tell us? First, it is clear that column i represents the
class of contingent situations as defined above, since it is only in column i that
both [[F]] and [[G]] avoid the extreme values Ø and OBJ. This illustrates the
fact that when either [[F]] or [[G]] has an extreme value, the truth value of a
set-relational statement is predicted on a priori grounds. This is perhaps
undramatic, but it does no harm to see it visually illustrated.
The next point has a little more drama to it. The reduction of the predicate-
logical constants to set theory allows us to set up valuation spaces for the
various L-proposition types of LPredC. The procedure specified in (6.2) allows
us to list the slots that yield truth for any L-proposition of SMPC and thus
compose a VS-model:
(6.2) Check for each slot n whether formula is true given R and C, where:
formula stands for any predicate-logic L-proposition,
R for the set-theoretic relation specified for the row and
C for the status of [[F]] and [[G]] specified for the column.
When formula is true, add the slot number n to the list.
For example, given the predicate-logical L-proposition ALL F is G or A,
formally written ∀x[Gx,Fx], we list, for row 1, the following slots: 1, 2, 7, and 8.
Row 2 yields no slot, since ∀x[Gx,Fx] cannot be true when [[G]] ⊂ [[F]]. Row 3
yields the slots 21, 25, 29, and 30, because ∀x[Gx,Fx] is always true when [[F]]
= [[G]]. Row 4 again yields no slot because ∀x[Gx,Fx] is false when [[F]] ⊃⊂
[[G]] and all other slots in row 4 represent impossible situations. It is a bit
tedious to do this for all eight L-proposition types of SMPC and for all rows
and columns, but the result is given in (6.5).
One should note that the procedure defined in (6.2) automatically selects
only those matrix slots that are marked ‘+’. This is because the slots marked ‘–’
represent impossible situations, given their combined values in R and C. These
slots, therefore, cannot play a part in a valuation-space model listing all
possible situations. It is useful to have the universe U of all possible situations
at hand:
(6.3) The universe U of all possible situations (all slots marked ‘?’ or ‘+’):
{ 1,2,7,8,11,13,14,16,21,25,29,30,31,41,42,44,45,46,48,50,51,53,57,59,61,66,
68,70,71,72,74,75}
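The universe (6.3) can be recomputed by brute force: a slot is possible precisely if some concrete choice of OBJ, [[F]] and [[G]] jointly satisfies its row and column conditions. The sketch below is my own encoding, not Seuren's; the relations of rows 6–8 are taken to be F̄ ⊂ G, F̄ = G and F̄ ⊃ G, with F̄ the complement of [[F]], as the later discussion of row 7 confirms for that row:

```python
from itertools import combinations

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def status(S, OBJ):
    if S == OBJ:
        return 'Oø' if not OBJ else 'O+'
    return 'Ø' if not S else '≠ev'

COLS = [('≠ev', '≠ev'), ('Ø', '≠ev'), ('O+', '≠ev'),
        ('≠ev', 'Ø'),  ('Ø', 'Ø'),  ('O+', 'Ø'),
        ('≠ev', 'O+'), ('Ø', 'O+'), ('O+', 'O+'),
        ('Oø', 'Oø')]

ROWS = [lambda F, G, O: F < G,                                      # 1  F ⊂ G
        lambda F, G, O: G < F,                                      # 2  G ⊂ F
        lambda F, G, O: F == G,                                     # 3  F = G
        lambda F, G, O: bool(F & G) and not F <= G and not G <= F,  # 4  partial
        lambda F, G, O: not F & G,                                  # 5  F ∩ G = Ø
        lambda F, G, O: O - F < G,                                  # 6  F̄ ⊂ G
        lambda F, G, O: O - F == G,                                 # 7  F̄ = G
        lambda F, G, O: G < O - F]                                  # 8  F̄ ⊃ G

possible = set()
for size in range(4):              # universes OBJ of size 0..3 suffice here
    OBJ = frozenset(range(size))
    for F in subsets(OBJ):
        for G in subsets(OBJ):
            c = COLS.index((status(F, OBJ), status(G, OBJ)))
            for r, rel in enumerate(ROWS):
                if rel(F, G, OBJ):
                    possible.add(10 * r + c + 1)   # slot numbering of Figure 6.1

U_63 = {1, 2, 7, 8, 11, 13, 14, 16, 21, 25, 29, 30, 31,
        41, 42, 44, 45, 46, 48, 50, 51, 53, 57, 59,
        61, 66, 68, 70, 71, 72, 74, 75}
assert possible == U_63
```

With OBJ capped at three elements every possible slot already finds a witness; larger universes add no new slots.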
Some situations have been printed in boldface and larger font. This has
been done to make them stand out as the members of the class of contingent
situations, listed separately in (6.4). These, one remembers, are the situations
where, given the status of [[F]] and [[G]], their set-theoretic relation does not
follow automatically.
(6.4) {1,11,21,31,41,51,61,71}
In all other situations the set-theoretic relation stated for [[F]] and [[G]] in a
row follows from their status specified for them in their column. Take, for
example, situation (slot) 2: given the column condition [[F]] = Ø, [[G]] ≠ ev,
it is necessarily so that [[F]] ⊂ [[G]]. Or take situation 44 with the column
condition [[F]] ≠ ev and [[G]] = Ø. Here it is necessarily so that [[F]] ∩ [[G]] =
Ø. Analogously for all situations whose numbers are not printed in bold.
Here, then, is the valuation-space table for all eight basic expressions in
SMPC:
This yields AAPC. AAPC also holds for the mundane situations plus those
where [[F]] ≠ Ø, but it stops being valid when the situations where [[F]] = Ø
are taken into account. Within these constraints, however, AAPC is not
maximally powerful: ABPC, which operates within the same constraints, is
more powerful. But AAPC has the advantage of lacking UEI.
We thus see in clear detail what we had suspected all along: if the elimination of the class of situations where [[F]] = Ø is feasible, that will be sufficient
to boost the predicate logic from the impoverished SMPC to the maximally
powerful ABPC. What has also been shown is that the elimination of situa-
tions with a null F-class does not affect the valuation spaces of I-type and
I*-type sentences: their valuation spaces in (6.8) are identical to those in (6.5).
It is therefore only the A-type and A*-type sentences and, of course, the
complements of the I-type and I*-type sentences, that are affected by the
shrinking from U to UR. This illustrates what was stated above in FACT 2.
(6.9) a. /A/ = {1}
b. /¬A/ = {11, 31, 41, 51, 61, 71}
c. /A*/ = {41, 61, 71}
d. /¬A*/ = {1, 11, 31, 51}
e. /I/ = {11, 31, 51}
f. /¬I/ = {1, 41, 61, 71}
g. /I*/ = {11, 31, 51}
h. /¬I*/ = {1, 41, 61, 71}
i. /N/ = {41, 61, 71}
j. /¬N/ = {1, 11, 31, 51}
k. /N*/ = {1}
l. /¬N*/ = {11, 31, 41, 51, 61, 71}
The following logical relations are now seen to hold, precisely as specified
in Figure 3.7 of Chapter 3:
(6.10) A ≡ N*              ¬A ≡ ¬N*
       A* ≡ N              ¬A* ≡ ¬N
       I ≡ I*              ¬I ≡ ¬I*
       A ⊢ ¬A*             N* ⊢ ¬A*
       A ⊢ ¬I / ¬I*        N* ⊢ ¬I / ¬I*
       A ⊢ ¬N              N* ⊢ ¬N
       A* ⊢ ¬A / ¬N*       N ⊢ ¬A / ¬N*
       A* ⊢ ¬I / ¬I*       N ⊢ ¬I / ¬I*
       I ⊢ ¬A / ¬N*        I* ⊢ ¬A / ¬N*
       I ⊢ ¬A* / ¬N        I* ⊢ ¬A* / ¬N
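The equivalences and entailments of (6.10) reduce to identity and subset checks on the valuation spaces of (6.9). A small verification sketch (mine, not part of the text):

```python
VS = {'A':   {1},                       'N*':  {1},
      'A*':  {41, 61, 71},              'N':   {41, 61, 71},
      'I':   {11, 31, 51},              'I*':  {11, 31, 51},
      '¬A':  {11, 31, 41, 51, 61, 71},  '¬N*': {11, 31, 41, 51, 61, 71},
      '¬A*': {1, 11, 31, 51},           '¬N':  {1, 11, 31, 51},
      '¬I':  {1, 41, 61, 71},           '¬I*': {1, 41, 61, 71}}

entails = lambda p, q: VS[p] <= VS[q]   # /p/ ⊆ /q/

# the equivalences of (6.10): identical valuation spaces
assert VS['A'] == VS['N*'] and VS['A*'] == VS['N'] and VS['I'] == VS['I*']
# a sample of the entailments of (6.10)
assert entails('A', '¬A*') and entails('A', '¬I') and entails('A', '¬N')
assert entails('A*', '¬A') and entails('A*', '¬I*') and entails('A*', '¬N*')
assert entails('I', '¬A') and entails('I', '¬N') and entails('I*', '¬N*')
```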
We may also keep the BNPC-definitions of the three quantifiers ∀, ∃ and
N but disregard the basic-natural restrictions of Section 3.2.2, thereby
reading BNPC as an unrestricted logic. When we do that, BNPC is extended
to cover all situation classes of Figure 6.2. The VSs thus specified are listed
in (6.11):
(6.11) The VSs of the basic sentence types of BNPC as an unrestricted logic
a. /A/ = {1}
b. /¬A/ = {2,7,8,11,13,14,16,21,25,29,30,31,41,42,44,45,46,48,50,
51,53,57,59,61,66,68,70,71,72,74,75}
c. /A*/ = {41,61,71}
d. /¬A*/ = {1,2,7,8,11,13,14,16,21,25,29,30,31,42,44,45,46,48,
50,51,53,57,59,66,68,70,72,74,75}
e. /I/ = {11,31,51}
f. /¬I/ = {1,2,7,8,13,14,16,21,25,29,30,41,42,44,45,46,48,
50,53,57,59,61,66,68,70,71,72,74,75}
g. /I*/ = {11,31,51}
h. /¬I*/ = {1,2,7,8,13,14,16,21,25,29,30,41,42,44,45,46,48,50,
53,57,59,61,66,68,70,71,72,74,75}
i. /N/ = {41,61,71}
j. /¬N/ = {1,2,7,8,11,13,14,16,21,25,29,30,31,42,44,45,46,48,50,
51,53,57,59,66,68,70,72,74,75}
k. /N*/ = {1}
l. /¬N*/ = {2,7,8,11,13,14,16,21,25,29,30,31,41,42,44,45,46,48,50,
51,53,57,59,61,66,68,70,71,72,74,75}
Here one sees that even when extended to cover all situations, the six
nonnegated basic sentence types of BNPC, A, A*, I, I*, N, and N*, are still
restricted to the mundane situations represented in column i. This remark-
able property is not shared by ABPC, as is shown by (6.8) above.
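That the six nonnegated types stay within the column-i situations, and that each negated VS in (6.11) is simply the complement of its positive counterpart with respect to U, can be checked directly (sketch mine):

```python
U = {1, 2, 7, 8, 11, 13, 14, 16, 21, 25, 29, 30, 31, 41, 42, 44, 45, 46,
     48, 50, 51, 53, 57, 59, 61, 66, 68, 70, 71, 72, 74, 75}
mundane = {1, 11, 21, 31, 41, 51, 61, 71}     # the column-i slots

positive = {'A': {1}, 'A*': {41, 61, 71}, 'I': {11, 31, 51},
            'I*': {11, 31, 51}, 'N': {41, 61, 71}, 'N*': {1}}

# the six nonnegated BNPC sentence types stay inside the mundane situations ...
assert all(vs <= mundane for vs in positive.values())
# ... and each negated VS of (6.11) is the complement of its positive
# counterpart relative to U; for example /¬A/ of (6.11b):
assert U - positive['A'] == {2, 7, 8, 11, 13, 14, 16, 21, 25, 29, 30, 31,
                             41, 42, 44, 45, 46, 48, 50, 51, 53, 57, 59,
                             61, 66, 68, 70, 71, 72, 74, 75}
```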
It thus appears that BNPC, though invariably dismissed as unimportant by
professional logicians, is not only a superbly powerful logic but is also highly
functional in that it automatically focuses its nonnegated basic expressions on
the mundane situations. Yet it has the fatal drawback of not allowing for
existential statements in the absence of complete knowledge of the domain.
BNPC also has a second, less serious drawback, to do with expressive power.
As a result of the incorporation of PNST–2 (‘natural sets are distinct’) into the
semantics of the BNPC quantifiers, BNPC is unable to produce a true
quantified sentence when [[F]] ¼ [[G]]: the slots 21, 25, 29 and 30, filling the
row characterized by the relation [[F]] ¼ [[G]], do not occur in the VS of any
of the six nonnegated basic sentence types of BNPC.1 The same does not hold
for the slots in row 7, defined by the relation [[F̄]] = [[G]] (or, equivalently, [[F]] = [[Ḡ]]): slot 61 figures in both /A*/ and /N/ of (6.11), which, as one can
see, are identical. This is because in row 7 the sets [[F]] and [[G]] are both
natural sets and distinct from each other. Hamilton and Jespersen can thus
rest in peace: their logic has finally found the recognition it deserves, even
though it has its limitations.
1 This lack of expressive power does not occur in Hamilton’s notation as presented in Hamilton (1866). As is shown in Section 3.4.2, Hamilton writes tF = tG when [[F]] = [[G]].
6.5 Conclusion
It is time to summarize our conclusions. The first conclusion is that our
analysis of predicate calculus has shown that if there is a way to restrict
predicate calculus to those situations where [[F]] ≠ Ø, predicate calculus
will have the same maximal logical power as propositional calculus, as both
will then conform to the logical entailment system described by the complete
octagonal graphs of Figure 4.5c.
The second conclusion concerns a fact that has so far either been unknown
or been allowed to lie unexploited. We have been able to single out a core class
of contingent situations characterized by the condition that both [[F]]
and [[G]] avoid extreme values, that is, the condition that [[F]] ≠ Ø, [[F]] ≠ OBJ and [[G]] ≠ Ø, [[G]] ≠ OBJ. This class is significant (a) because the quantifying
L-propositions of LPredC are maximally informative when they describe such
situations (in all other cases the truth or falsity of an LPredC L-proposition
follows from the status of [[F]] or [[G]]) and (b) because a predicate logic
restricted to this class of contingent situations is maximally powerful (given
its eight basic expressions), while the general system of SMPC has only weak
logical power. Therefore, functionality will be boosted if a system of predicate
logic can be developed that sustains the restriction to the class of contingent
situations, so that quantifying sentences describing these situations can
benefit from a maximally powerful logic. In addition, we have found that
such a system will apply not only to the situations of the contingent class but
also to those other noncontingent situations where the F-class is nonnull. The
loss to the general SMPC system will thus remain limited, while the gain will
be maximal.
Finally, we have seen that BNPC is not only an extremely powerful predi-
cate calculus but is also an extremely functional one, in that, when all
restrictions are lifted, its nonnegated basic expressions automatically restrict
their valuation spaces to the mundane situations of column i.
The question we are facing now is the following: how will the predicate
logic of language cater for the situations that are missing in the restricted UR?
This question is answered in the following chapters.
7

The context-sensitivity of speech and language
1 Appellations (name-callings) may seem to form a special case. They hardly ever extend over a
whole discourse domain, though marvellous extended invectives are known in world literature.
2 The notion of commitment domain was first introduced in Hamblin (1970), who speaks of
commitment background.
The context-sensitivity of speech and language 199
4 For example (with thanks to Barbara Partee), a person may very well believe in evolution theory
and at the same time believe that of necessity every human individual must have had two human
parents.
the same object? If so, SSV is ensured and the operator may does not create an
intensional subdomain. But perhaps the speaker does not know that the
morning star and the evening star are the same planet and perhaps his
evidence shows that the evening star cannot be inhabited, though the morn-
ing star just may be, given the conditions under which the planet has been
observed. In that case, SSV is effectively blocked, which would make at
least the operator may intensional. But then again, an interlocutor might
say that if (7.6a) is true, then so must (7.8a) be, because morning star and
evening star are just different names for the same object. And the speaker
who produced (7.6a) will then have no choice but to agree. And analogously
for (7.6b) and (7.8b):
(7.8) a. The evening star may be inhabited.
b. The evening star must be inhabited.
The issue seems to turn on the meaning of the word relevant: what does one
take the ‘relevant knowledge state K’ to amount to? It doesn’t look as if this
question can be resolved here and now. We will, therefore, allow it to rest until
new insights arise. It may be added that the question of free SSV, even though
it triggered essential developments in semantics during the twentieth century,
is less important for an adequate understanding of natural language than it is
for an adequate philosophical notion of what ‘truth’ amounts to.
By contrast, if (7.7a) or (7.7b) is false, SSV appears to be freely applicable.
For in that case either K, in so far as it is relevant, is incorrect, in which case
substitution of the one term for the other does not affect the truth value, or K,
in so far as it is relevant, is correct but there is no compatibility or necessary
consequence, in which case SSV again fails to affect the truth value.
In actual fact, the widespread belief that epistemic modal subdomains are
intensional in the sense of disallowing SSV simply is a consequence of the
wildly unrealistic philosophical construct which reduces the extension of
S-terms embedded under modal predicates to sets of possible worlds. Since
the set of possible worlds in which the morning star is inhabited is different
from the set of possible worlds in which the evening star is inhabited, SSV
must, according to this analysis, be blocked. But this analysis is typically
generated by philosophical and formal a prioris. It lacks any psychological
plausibility, let alone any empirical support.5
5 The tangle was confounded by W. V. O. Quine (1953: 143–4), who, in an effort to show that SSV is
blocked in modal contexts, confused the value-assigning predicate beᵥ with the predicate be of identification (see Section 5.3.2 in Volume I). Quine wrongly regards a sentence like (i) as an
identity statement and not as a statement assigning a value to the parameter ‘the number of
planets’, which is what it is. Following up on this mistake, he shows that a replacement of the term
nine in (ii) with the number of planets leads to the obvious falsity (iii), which is why he holds that the
modal necessity operator does not allow for free SSV and is, therefore, intensional:
(i) The number of planets is nine.
(ii) Nine is necessarily greater than seven.
(iii) The number of planets is necessarily greater than seven.
Needless to say, this argument, influential though it may have been, comes to nothing.
6 The L-propositional form of epistemic and agentive modal statements is taken to be identical.
In many languages, including most languages of Europe, the modal predicate forms part of the
Auxiliary System and takes an embedded subject-S-term. The modal predicate is lowered into the
subject-S-term, where it may end up as a finite verb form (as in English), or as a morphological
element (as, for example, in Turkish). See Seuren (1996: 79–84, 111–16, 159–60, 221–2) for extensive
discussions and analyses in different languages.
7 Dutch and Low German allow for an agentive prepositional phrase with agentive modals, under
the preposition van (von). Sentences like (i) are normal standard Dutch:
(i) Hij moet van zijn leraar de sommen afmaken.
he must of his teacher the sums finish
His teacher has told him to finish his sums.
Dutch van was, until a few centuries ago, the standard preposition for passive agent phrases (standard
modern Dutch has door); German von still has that function. This suggests that, at least for these
languages, the agentive modals may represent underlying passive predicates approximately of the
form ‘it has been made possible/necessary (by A) for B to do C’. We will, however, not pursue this
issue here.
(7.10) Ann fears that the Eiffel Tower has been hit.
It is not necessary for the subdomain under fear to be construed as requiring
the existential introduction of an address for the Eiffel Tower into the sub-
domain itself before (7.10) can be processed, as in (7.11), even though such an
interpretation is not excluded:
(7.11) Ann fears that there is a thing called ‘Eiffel Tower’ and that it has been hit.
The normal reading of (7.10) is such that the Eiffel Tower is taken to be a really
existing object, which can be given an address in the commitment domain.
Yet the actual existence of the Eiffel Tower is not entailed by (7.10), as can be
seen from (7.12), whose first conjunct, Ann is under the illusion that there is a
thing called ‘Eiffel Tower’, entails that there is no such thing as the Eiffel Tower.
If the second conjunct, she fears that it has been hit, entailed that the Eiffel
Tower exists, the conjunction as a whole would be incoherent or inconsistent,
but it is not: (7.12) is a fully coherent piece of discourse:
(7.12) Ann is under the illusion that there is a thing called ‘Eiffel Tower’ and
she fears that it has been hit.
Transdominial referential transparency likewise holds between subdo-
mains. Consider the following sequence of sentences:
(7.13) Roy is thought to have a sister. One hopes that she is more honest
than him.
In (7.13), she stands for the intensional object ‘Roy’s reputed sister’, who may
not exist at all. This object is represented in the intensional subdomain of what
people think. Yet it recurs in the intensional subdomain of what people hope.
This is made possible by the principle of transdominial referential transparency.
When an inconsistency arises between any domains or subdomains where
mutual consistency is required, this inconsistency is not due to transdominial
reference but to a conflict between actual and virtual being (see Section 10.8).
Thus, sentence (7.14) does allow for transdominial reference, but suffers from
inconsistency between the subdomains of knowledge and hope:
(7.14) !! I know Roy has no sister, but I hope that she is more honest than
him.
The formal system, or, if one wishes, the logic of transdominial consistency
has so far not been given any attention in the literature. No attempt is
made here to develop such a logic. Further research will have to bring greater
clarity.
the metalinguistic character of the radical negation may well be a not fully
grammaticalized remnant of an original truly metalinguistic operator ‘Not
true (the utterance u)’.
Now there is nothing for the NP the fifty-year-old bachelor to link up with, so
that the listener or reader is left with no other option than to apply post hoc
suppletion of the presupposition carried by (7.24), namely that there appar-
ently was a fifty-year-old bachelor approaching people in the street. But no
such measure is needed for (7.23), because world knowledge easily allows for
an identification of the Swiss banker and the fifty-year-old bachelor men-
tioned in the news item. The principle of Minimal D-change now says that
because such an identification is possible, it is mandatory, unless blocked by
specific information provided in the discourse or in the situation at hand. The
principle of minimal D-change thus amounts to a restriction imposed on any
new incrementation process to minimize the number of subdomains and the
number of addresses for individual objects or sets of objects.
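The principle of Minimal D-change can be sketched programmatically. The dict-of-properties representation of addresses and the compatibility test below are my own illustrative simplifications, not part of the theory: identification with an existing address is mandatory when nothing blocks it, and a new address is opened only when identification fails.

```python
# A rough sketch of Minimal D-change (representation is an assumption,
# not the book's formalism): addresses are dicts of properties; a new
# description lands on an existing compatible address where possible.

def incorporate(domain, description, compatible):
    """Return the address the description lands on, reusing one if possible."""
    for address in domain:
        if compatible(address, description):
            address.update(description)     # identification is mandatory
            return address
    domain.append(dict(description))        # otherwise open a new address
    return domain[-1]

# Toy compatibility test: no property is asserted with a conflicting value.
def compatible(address, description):
    return all(address.get(k, v) == v for k, v in description.items())

D = [{"banker": True, "age": 50}]
addr = incorporate(D, {"bachelor": True, "age": 50}, compatible)
assert addr is D[0] and len(D) == 1          # identified: no new address
incorporate(D, {"age": 30}, compatible)
assert len(D) == 2                           # blocked by conflicting age
```

The second call opens a fresh address because the stated age conflicts with the existing one, which is exactly the kind of "blocking by specific information" the principle allows.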
7.3.1 Consistency
Any D must be internally consistent for the simple reason that an inconsistent
set of statements cannot be true for any situation at all. Speakers and listeners
have a profound awareness of this condition, as any discourse comes to a halt
when an inconsistency is detected.
Although it does not matter much whether a D in course is actually true or
not, it does matter whether it can be true. There is a CONSTRAINT OF POSSIBLE
TRUTH with regard to the propositional content of any commitment domain.
A commitment domain must, as a whole, be consistent, so that there is
something to the truth of which the speaker is committed, or whose truth
the speaker wants to be realized through the listener, who is requested,
216 The Logic of Language
Something was said about this latter aspect in Section 4.2 of Volume I, with
regard to a few examples from the literature showing a clash between speaker’s
overtly expressed commitments or appeals on the one hand and his or her
loyalty to the commitment or appeal on the other. The examples discussed
were Austin’s quote from Euripides’ Hippolytus: ‘my tongue took an oath, but
my mind remained unsworn’, Hamblin’s ‘I am obliged to order you to do D,
and I hereby do so; but my private advice to you is not to’, as well as the
famous Moore paradox ‘the cat is on the mat, but I don’t believe it’. And the
conclusion was that the existing literature is largely unclear with regard to
speech-act consistency. In general, the speech-act force of utterances has been
neglected in linguistic, semantic, philosophical, and pragmatic studies, where
all attention has been focused on propositional content. To remedy that
situation and bring greater clarity to the issues concerned is, however, a
research programme of such magnitude that we cannot possibly hope to do
much about it within the confines of the present study.
7.3.2 Informativity
A new increment to any D must be informative in the sense that it narrows
down the set U of possible situations in the direction of the target situation,
or else it must recapitulate what has been achieved so far and draw an
inference that has not so far been made explicit, as is typically the case in
sentences starting with therefore. In general, we speak of the PRINCIPLE OF
INFORMATIVITY or PI. The underlying rationale of PI seems to be a basic need
in linguistic interaction to home in on the target situation, which is to be
described up to the degree of precision needed for the purpose at hand. Given
our system of valuation-space modelling, this requirement can be cast into a
formal mould.
Formally speaking (and without taking into account the class of inference-
drawing increments), the principle of informativity (PI) is defined as follows:
PRINCIPLE OF INFORMATIVITY (PI)
Each successive increment in a discourse domain D must constitute a
further restriction of /D/, provided the restricted /D/ ≠ ∅.
Noncompliance with PI results in either an erratic or an incomprehensible
discourse (the latter in particular when /D/ is reduced to ∅).
Consider, in an abstract formal sense, the universe of all possible situations
U to be the disjunction (in the standard sense) of all possible propositions,
and hence the union of the VSs of all infinitely many possible L-propositions:
/P1/ ∪ /P2/ ∪ /P3/ ∪ . . . . As a general principle we say that before any discourse
has started /D/ = U. Although this makes the initial /D/ (= U) a nondescript
entity, and hence unfit for natural cognition, it helps to see how a discourse
can start with the implicit question, anticipated by the speaker: ‘Exactly what
proposition do you, speaker, want to increment for the purpose of the present
interaction?’ Any first L-proposition P presented for incrementation can be
regarded as an answer to that question. P then restricts the initial /D/ (= U)
to /P/, in the sense that the target situation is an element in the new, more
restricted, /D/.
We will speak of any given (old) D as Do, and of the resulting new D as Dn.
It is not required that, for an increment of P, /P/ be included in /Do/, which
would make P entail Do. But it is not forbidden either. When /Do/ contains
/Dan is human/ and /P/ is /Dan is a student/, or, presuppositionally, when
/Do/ contains /Dan was married before/ and /P/ is /Dan is divorced/, then P
entails Do and the discourse is still coherent and informative. In most cases,
however, P and Do will be semantically independent with regard to each
other, so that /Dn/ will consist of the nonnull intersection of /P/ with /Do/.
This process of incrementing P to Do is shown graphically in Figure 7.1.
Anticipating the analysis of presuppositional phenomena in Chapter 10, we
posit that, for normal or default negation, /NOT(P)/ is the complement of /P/
within /Do/, since an incrementation of NOT(P) is an answer to the (implicit
or explicit) question ‘P?’ within the interpretative limits of Do. In Figure 7.1,
/Do/ is marked by horizontal lines, and /Dn/ by vertical lines. Since by
definition /Dn/ is included in /Do/, /Dn/ is, in fact, marked by both horizon-
tal and vertical lines. This makes the standard incrementation procedure an
instance of AND-conjunction.
[FIGURE 7.1 (diagram not recoverable from the extraction): within U, /Do/ is marked by horizontal lines and /Dn/ = /P/ ∩ /Do/ by vertical lines; /NOT(P)/ is the remainder of /Do/ outside /P/.]
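The incrementation mechanism of Section 7.3.2 can be sketched in set-theoretic terms. Modelling situations as frozensets of atomic facts below is an illustrative assumption of mine, not the book's formalism; what the sketch shows is the core of the Principle of Informativity: each increment intersects /D/ with /P/, and increments that leave /D/ unchanged or empty are rejected.

```python
# A minimal sketch of valuation-space incrementation. /D/ starts as the
# universe U; incrementing P yields /Dn/ = /P/ ∩ /Do/; default negation
# takes the complement of /P/ within /Do/.

def increment(D_vs, P_vs):
    """Restrict /D/ by /P/; reject increments that violate PI."""
    new_vs = D_vs & P_vs
    if not new_vs:
        raise ValueError("inconsistent: restricted /D/ is empty")
    if new_vs == D_vs:
        raise ValueError("uninformative: /D/ not further restricted")
    return new_vs

def negate(P_vs, D_vs):
    """/NOT(P)/ as the complement of /P/ within /Do/ (default negation)."""
    return D_vs - P_vs

# Toy universe: four situations, identified by which atomic facts hold.
U = {frozenset(s) for s in [("rain",), ("rain", "cold"), ("cold",), ()]}
P = {s for s in U if "rain" in s}       # /It is raining/

Dn = increment(U, P)                    # /Dn/ = /P/ ∩ /Do/
assert Dn == P
assert negate(P, U) == U - P            # complement within /Do/
```

Incrementing a proposition whose VS is disjoint from the current /D/ raises the "inconsistent" error, which models the observation that discourse comes to a halt when an inconsistency is detected.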
quite rich. Since this sister has always been mentioned in connection with a
daughter, Dan believes that, if Philip has a sister, this sister has a daughter.
Dan knows that Philip’s parents are in their mid-forties, which makes it
practically impossible for them to have a grown-up granddaughter. Therefore,
Dan knows that, if Philip’s sister has a daughter, this daughter must still be a
child. Perhaps a starting point for a suspense thriller!
Subdomain hierarchies do not infringe upon the principle of transdominial
denotational transparency discussed in Section 7.2.2.1, as one might be in-
clined to think. This is because that principle is about interpretational acces-
sibility of addresses in the commitment domain or any subdomains, whereas
subdomain hierarchies are about coherence. Although a sequence of sentences
like (7.28) may be deemed incoherent on account of its transgressing the
boundaries set by subdomain hierarchies, there is no problem as regards the
denotations of the various definite noun phrases: she in the second sentence
clearly denotes the sister-address set up in the first sentence, even though a
subdomain of belief does not fit well into a subdomain of hope.
An attempt at setting up a few subdomain hierarchies was made in Seuren
(1985: 417–22). One such scale, intended to apply to descending epistemic
strength, is shown in Figure 7.2. Such scales imply that the discourse may
proceed from the column marked 0 to any column marked by a higher
number, while predicates in the same column are not subject to a sequenti-
ality constraint.
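The sequentiality constraint imposed by such a scale can be sketched as a simple rank check. The ranks below follow the columns of Figure 7.2; the function name and representation are my own assumptions, not the book's.

```python
# A hedged illustration of the epistemic-strength scale: a discourse may
# proceed from a column-0 predicate to any higher-numbered column, but not
# back; predicates in the same column impose no ordering on each other.

EPISTEMIC_RANK = {
    "true": 0,
    "know": 1, "realize": 1, "regret": 1,
    "believe": 2, "think": 2,
    "hope": 3, "wish": 3, "want": 3, "try": 3,
}

def sequence_ok(predicates):
    """True iff successive intensional predicates never decrease in rank."""
    ranks = [EPISTEMIC_RANK[p] for p in predicates]
    return all(a <= b for a, b in zip(ranks, ranks[1:]))

assert sequence_ok(["know", "believe", "hope"])
assert not sequence_ok(["hope", "believe"])   # belief inside hope: incoherent
```

The failing case mirrors the judgement on (7.28): a subdomain of belief does not fit well into a subdomain of hope.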
The predicates of column 1 are all factive predicates, which means that the
truth of the object that-clause is presupposed. This makes it possible, though
perhaps only marginally so, to formulate a coherent sequence of conjuncts as
exemplified in (7.30):
(7.30) Molly regrets that her brother is in jail but (it is true that) the man is
dangerous.
This is not a counterexample to the epistemic-strength scale of Figure 7.2,
because factive predicates induce a presupposition of truth for their that-
clauses, so that whatever is said in the that-clause is retrievable from the
higher truth domain.
0 → 1 → 2 → 3
0: true
1: know, realize, regret, …
2: believe, think, …
3: hope, wish, want, try, …
FIGURE 7.2 Intensional scale for epistemic strength
0 → 1 → 2 → 3
0: true
1: must, necessary, …
2: probable, likely, …
3: may, possible, …
FIGURE 7.3 Intensional scale for epistemic inference
discourse domain is compatible with Sandra having a son and this son having
been away.
Subdomains created by the predicate hope are thus subsidiary subdomains
and they have to fall back on a recipient subdomain of what the subject-term
referent of hope believes, to cater for nonprojected presuppositions.
To the best of my knowledge, this particular phenomenon has not so far
been discussed in the literature. If this is so, it is surprising because the
phenomenon is of considerable importance for a proper understanding of
the processes of utterance interpretation. It seems, incidentally, that the class
of predicates that create subsidiary subdomains is identical with the class of
emotive factive predicates that do not allow for substitution salva veritate of
topic–comment modulation—a phenomenon hitherto unknown and dis-
cussed in Sections 3.2 and 6.2.3.2 of Volume I. We revert to this important
topic in Section 10.5.2 and in Chapter 11.
Right now, we leave these and related questions open and merely point out
that these aspects of discourse incrementation have been neglected in the
literature, despite their obvious relevance.
situational knowledge prove that they have been wrong in insisting that
the truth conditions of sentences are compositionally derivable from the
satisfaction conditions of the predicates occurring in them and the structural
positions they occupy. Let us consider a few examples, some of which have
been discussed earlier, especially in Sections 9.6 and 9.7 of Volume I, though
in a slightly different context.
Gradable adjectives provide a prime example. These are adjectives that
allow for grade modifiers such as rather, very, or a little. They also allow
for comparatives and superlatives.8 Typical examples are expensive, old,
large, wide, smart, popular, rich, safe, fast, and many others, as opposed to,
for example, closed, empty, rectangular, frontal, dead, postprandial, which
are, in principle, nongradable. The applicability of gradable adjectives
(when used absolutely, that is, not in a construction that implies a form
of comparison) depends on, usually socially recognized, standards, such
as standards of cost, age, size, monetary value, etc., for the objects denoted
by their subject terms. The description of such standards is not part of
the description of the language concerned but of (socially shared) knowledge.
Thus, when I say:9
(7.31) He is an old man.
the truth of what I say depends on socially acknowledged norms for calling a
man old. How the norm is selected is still largely unknown. It is unclear, for
example, what norm is to be applied in a case like Apes are intelligent. Are they
meant to be intelligent with regard to humans, or compared to other animals?
There is a large amount of literature dealing with gradable adjectives, and
many issues have so far remained unresolved. But it is clear, across the board,
that no solution will be found unless cognitive factors are fully integrated into
the semantics of gradable adjectives. The point here is that the criteria for
truth or falsehood are not given in the linguistic description of the meanings
of these adjectives but in (socially shared) knowledge. Such adjectives thus
need an open parameter (sometimes also called ‘free variable’) in their
8 Interestingly, some adjectives are nongradable in literal use but become gradable when used
metaphorically. For example, the adjective self-contained is nongradable when applied to an apartment
but gradable when applied to a person’s character. Likewise for square, round, full, angular, pedestrian,
human, savage, and many other adjectives, which are nongradable when used literally (especially in a
technical context), but gradable when used nonliterally or less strictly.
9 One recalls from Section 9.3 in Volume I the proposal that a sentence like (7.31) is to be analysed as
‘He mans oldly’ (or ‘He olds his being a man’), in analogy with He is a good teacher and He teaches well.
10 Typically, in the case of gradable adjectives, the boundary between truth and falsehood forms
what is often called a ‘grey area’, in which gradable statements are neither clearly true nor clearly false,
letting in a ‘fuzzy’ logic with transitional truth values. Gradability thus goes hand in hand with fuzzy
truth values.
In (7.37a), the viewpoint is taken by John, and the box must be to his left as he
sees it. In (7.37b), this is not necessary: the box must be to John’s left as
I (speaker) see it, while for John it may be anywhere around him. That
predicates like left, right, in front of, behind, and so on, and also pairs of the
type come and go, are sensitive to viewpoint is no doubt due to the fact that
they involve ego-related localizations. (One thinks of the quasi-problem of
why mirrors invert left and right but not up and down.)
Moreover, FUNCTION is known to be a determining factor in lexical mean-
ings, in particular in the domain of artefacts. What makes a coat a coat is not
its size, shape, material, or what not, but its intended function—a criterion to
be satisfied not by the object itself but by the use to which it can be put
according to whatever, possibly very creative, cognitive criteria.
A further source of cognitive dependency lies in a semantic component of
evaluation. As was already pointed out by the Greek Sophists, the truth of a
sentence like There is a pleasant breeze depends primarily on what humans
perceive as ‘pleasant’, under varying conditions, and only in a secondary sense
on the physical properties of the object so predicated. This point has great
philosophical importance, as philosophers argue about the question of
whether predicates like good and just (the central concepts in ethics), and
beautiful (central in aesthetics), are to be defined in terms of world properties
alone, or in terms that co-involve personal evaluation. As is shown in Section
3.4 of Volume I, this question applies likewise to the predicate true.
One further source of vagueness lies in the fact that satisfaction conditions
of predicates often centre around prototypical ‘ideals’ (Rosch 1975). Some
objects are closer to the intended prototype than others. A sparrow, for
example, is closer to the prototype of ‘bird’ than an ostrich or a penguin.
The notion of prototypicality plays a role in lexical semantics, in that pre-
conditions (the class of satisfaction conditions that give rise to presupposi-
tions) often select prototypical circumstances. The preconditions of the
German predicate kahl (bald, bare), for example, include the condition that
the subject-term referent is prototypically a human being or his/her head,
prototypically covered with hair on the top of the head. The prototypicality
appears from the fact that subjects, when asked what they think of first on
hearing the word kahl, almost invariably answer that they think of a human
head. Yet the subject-term referent may also, nonprototypically, be another
kind of object, normally covered with other growth, such as feathers or leaves,
or with decorative artefacts.
The update condition (giving rise to standard entailments) is simply that
the growth or decoration which is normally there, is not there. This allows for
phrases like der kahle Kopf (the bald head), der kahle Mann (the bald man),
der kahle Baum (the bare tree), der kahle Vogel (the bald bird), die kahle
Landschaft (the bare landscape), die kahle Wand (the bare wall). One notes,
incidentally, that English has two predicates to cover this semantic field: bald
and bare. Yet, when asked what the English equivalent is of German kahl, most
people will reply bald, not bare. This is because of the prototype of kahl, which
centres on hair on the human head, so that the more marginal cases slide out
of focus. Prototypicality is thus an autonomous cognitive parameter that
plays a role in the satisfaction conditions of many predicates.
This is as far as we can go in the present context. But even this cursory
discussion shows that lexicographers are not all that wrong when they view
theoretical semantics with a fair amount of scepticism.
8
Discourse incrementation
1 (8.3a) may be rephrased as John has one girlfriend,—who is Australian, with a nonrestrictive
relative clause. Nonrestrictive relative clauses have the force of a subsequent conjunct, as appears from
the fact that they may be followed by a polar tag with speech-act force, as in: John has one girlfriend,—
who is Australian, isn’t she?
These two sentences differ radically in what they say. Sentence (8.3a) implies
that John has a girlfriend, who is Australian. (8.3b), by contrast, is compatible
with a sequel like and many others who are not. (8.3a) is a conjunction of two
sentences; (8.3b) is not. It is clear that this girlfriend in (8.3a) cannot represent
a variable bound by the existential quantifier, represented by the indefinite
article a, if only because one can say I believe that John has a girlfriend and
I hope that this girlfriend is Australian, where binding of this girlfriend under
one single quantifier is impossible. The principle of generality dictates that the
same must then apply to this girlfriend in (8.3a).
Nothing changes for (8.3a) when the definite description this girlfriend is
replaced with the anaphoric pronoun she which, again, cannot represent a
variable bound by the indefinite article (quantifier) a. But what does it
represent? We say that it represents an instance of PRIMARY ANAPHORA—that
is, anaphora where the antecedent is not itself a referring expression but an
address that has been set up existentially just before and is ‘visited’ by a
subsequent anaphoric expression for the first time.
The problem for (8.3a), with she for this girlfriend, is—and this is the basic
problem of primary anaphora, further discussed in Chapter 9—how to
account for the status of the anaphoric pronoun she as a referring pronoun
not bound by the existential quantifier represented by a, but somehow
recovering its antecedent from the preceding existentially quantified sentence.
In anticipation of the treatment of this problem in Chapter 9, we now
introduce the technique of ADDRESS CLOSURE, which ensures that a pronominal
reference used in primary anaphora is not bound by the preceding quantifier
but is represented as a definite term in D.
First look at (8.4), where address closure has taken place. We say that an
address that has not been closed is an OPEN ADDRESS. Thus, d–1 in (8.1) is an
open address, where a is an existential quantifier. An open address is closed
the moment a definite term retrieves its denotation from a preceding existen-
tially quantified sentence and thus becomes a referring term. Address closure
is represented as ‘//’. The incrementation of The cat ran away, uttered right
after There was a cat, thus looks like (8.4):
(8.4) d–1 [a | Cat(a) // Ran away(the a(Cat(a)))]
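The closure step just displayed can be sketched programmatically. The representation below (predicates as Python functions over a toy domain) is my own illustrative assumption; what it shows is the change of type: before closure the address behaves as an existential quantifier, a function from sets to truth values, and after closure as a reference function selecting an object from a set.

```python
# A sketch of address closure. 'existential' models the open address;
# 'close_address' models the '//' step, turning the quantifier into a
# reference function that delivers a specific element of the set.

def existential(restrictor, domain):
    """Open address: TRUE iff some object in the domain satisfies it."""
    return any(restrictor(x) for x in domain)

def close_address(restrictor, domain):
    """Address closure: return a referent satisfying the restrictor."""
    for x in domain:
        if restrictor(x):
            return x
    raise LookupError("no referent: closure presupposes a prior existential")

domain = ["dog1", "cat1", "cat2"]
is_cat = lambda x: x.startswith("cat")

assert existential(is_cat, domain) is True   # 'There was a cat'
the_cat = close_address(is_cat, domain)      # 'The cat ran away'
assert is_cat(the_cat)
```

Which element the reference function selects is, of course, exactly the point on which this toy version is too crude: the selection here is arbitrary, whereas reference in discourse is anchored to the address that was set up.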
Address closure is needed to establish reference. It changes a from being a
quantifier—that is, a function from sets to truth values—to being a reference
function selecting an object (the referent) from a set of objects. Reference is a
function that takes a set and delivers a specific element in that set. The
reference function has been a source of discomfort to modern semantics
Discourse incrementation 233
2 One might think of using the notation ‘Run away(a)’ for the part after closure. This, however,
would make it impossible to make cross-references, as in (8.8c,d) below.
[SA tree garbled in extraction: terms built on Obj(x) and Cat(x) under the quantifier AN]
d. d–1 [a | Cat(a)]
In (8.5b,c), AN is the existential quantifier, also known as ∃, but AN is
reserved for a single entity, as opposed to SOME, which is reserved for the
plural existential. AN is treated as a binary higher-order predicate over pairs of
sets, with, in this case, the subject term Obj(x) (see Section 5.6 in Volume I)
and the object term Cat(x). The index x in ANx binds the variables x in Obj(x)
and Cat(x). The operator AN requires for truth that there be at least one
element common to the sets denoted by the two terms.
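The truth condition just stated for AN can be rendered as a one-line set-theoretic check. The sketch below is mine, not the book's notation; AN is treated, as in the text, as a binary higher-order predicate over a pair of term extensions.

```python
# AN as a binary higher-order predicate over pairs of sets: TRUE iff the
# two term extensions share at least one element.

def AN(subject_ext, object_ext):
    """TRUE iff the subject and object term extensions intersect."""
    return len(subject_ext & object_ext) >= 1

objects = {"o1", "o2", "o3"}        # extension of Obj(x)
cats = {"o2"}                       # extension of Cat(x)
unicorns = set()

assert AN(objects, cats) is True    # 'There was a cat'
assert AN(objects, unicorns) is False
```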
There is a problem here regarding actual and virtual being. Following
the argument developed in Section 5.6 of Volume I, (8.5a) should not entail
the actual existence of a cat, since the predicate Obj(x) is intensional,
which would allow it to intensionalize the set of cats in virtue of Virtual
Object Attraction discussed in Section 5.4 of Volume I. That (8.5a) is felt to
entail actual existence might be attributed to the fact that a sentence like (8.5a)
is normally used under an operator of place, as in There were cats in the cellar,
where the operator in the cellar ensures an extensional interpretation. Since,
in natural speech, the verification domain is normally restricted to a given
situation known and accepted by speaker and listener to be actual and not
just virtual, the default interpretation of (8.5a) would then implicitly impose
that situation as a local restrictor turning the sentence into an extensional
statement. Yet intuitively, (8.5a) does entail actual existence. For example,
when asked Are there unicorns? I can reply in truth No, only in stories. We
revert to this question below.
The SA-structure (8.5c) is the tree-structure counterpart of (8.5b). NP1
is the subject term, NP2 the object term, and the caret symbol ‘^’ is a
set-denoting operator: ‘the set of things x such that . . . ’. The grammatical
3 Typing of terms and predicates is a commonly used device in formal semantics, introduced by the
Polish logician Ajdukiewicz during the 1930s. It is based on the typing of entities as e (entity) and truth
values as t (the symbols are due to Montague). It enables one to follow a compositional function
calculus from entities to truth values, as done in categorial grammar, where the finally resulting value
must be typed t. A set of entities, denoted by what is known as a first-order predicate, is typed (e,t),
that is, a function from individual entities to truth values. A Russellian quantifier is a unary second-
order predicate, typed as ((e,t),t), that is, a function from sets of individual entities to truth values. The
existential quantifier, for example, takes a set of individuals and assigns it the value TRUE just in case the
set is nonnull and otherwise the value FALSE.
d–1 [a | Cat(a) // So]
[tree for So garbled in extraction: Pred Catch over term 1 and NP2 = the x (Mouse(x))]
The procedure is repeated for NP2 in d–1, yielding the two parallel
increments:
d–1 [a | Cat(a) // So]   [tree for So garbled in extraction: Pred Catch over terms 1 and 2]
d–2 [a | Mouse(a) // So]   [tree for So garbled in extraction: Pred Catch over terms 1 and 2]
For practical reasons trees are written as bracketed strings, giving (8.8c) and
(8.8d), respectively.
A sentence like (8.9a), with SA (8.9b) is incremented as follows, with D
containing d–1 [a j Cat(a)]:
(8.9) a. The cat caught a mouse.
b. [SA tree (8.9b) garbled in extraction: So with Pred Catch, an object term built on Mouse(x), and NP3 = the y (Cat(y))]
d–2 [a | S2, S1]
[trees garbled in extraction: S2 with Pred Mouse over a; S1 with Pred Catch and NP3 = the y (Cat(y))]
bound by d–n (for open addresses) or the definite term n (for addresses after
closure).
Double existential quantification is treated analogously. (8.10a), with SA
(8.10b), yields the open address (8.10c):
(8.10) a. A cat caught a mouse.
b. [SA tree garbled in extraction: So with Pred Catch over x and y, and Pred Mouse over y]
This, however, requires for truth an actual specific Ferrari, claimed by John to
be owned by him. But the Ferrari in question may well be, and probably is,
a virtual Ferrari because John may well be bluffing about his racing monster.
So there we are: the Ferrari claimed by John to be his property may well
be a virtual vehicle but, to refer to it, it looks as if we need an open address
that asserts its actual existence.
The only solution available, given the machinery as developed so far, is to
apply the intensionalization operator #, as in (8.7) above, and establish an
address like (8.13) for ‘There is a Ferrari John claims he owns’:
(8.13) d–3 [a | #Ferrari(a), Claim(John, Own(John,a))]
This correctly speaks of a specific Ferrari, namely the one John claims he owns,
but, owing to the fact that the existential quantifier no longer entails actual
existence by itself, this specific Ferrari need not actually exist. The assignment
of the intensionalization operator # to the predicate [[Ferrari(a)]] is driven by
the fact that the open address originates in the intensional context created by
the predicate Claim, in whose scope the open address for the Ferrari was set
up. The still open address d–3 can now be selected by the phrase the Ferrari
John claims he owns to land at and close.
that, for any set X, Ppl(X) is P(X) minus ∅ and all singletons.4
say that Ppl(X) is the set of all natural subsets of X plus X itself, following the
definition of natural set given in Chapter 3.5 The very fact that plurality as a
linguistic category requires the notion of plural power set as just defined, as
opposed to the standard notion of power set, demonstrates, if not the validity,
certainly the reasonableness of the natural set theory hypothesis put forward in
Chapter 3.
To express this distinction formally, we can usefully employ the type-
raising distributive operator ‘::’, defined over predicates, for the language of
SAs and discourse addresses. Let [[P(x)]] denote, as before, the extension
of the predicate P(x)—the set of individuals x such that x satisfies P—then
[[::P(x)]] is defined as follows (x ranges over sets of individuals):
[[::P(x)]] =Def Ppl([[P(x)]])
The extension of ::P(x) is thus the set of sets of at least two individuals x
such that each x satisfies P. The expression ::Happy(the children) reads as
‘the set of children in question is an element in Ppl([[Happy(x)]]), the plural
power set of Happy’. In other words, the sentence The children are happy is
true just in case the set of children in question is a natural subset of the total
set of those individuals that are happy. This requires that more than one child
is happy, because singletons are excluded from plural power sets. When P is
transitive and both of its terms are definite and plural, :: distributes indis-
criminately over the subject and the object term referents.
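The plural power set and the distributive operator can be sketched directly in code. The representation below is my own illustration of the definitions just given: Ppl(X) is P(X) minus the empty set and all singletons, and the extension of ::P is Ppl of the extension of P. The cardinality claim of footnote 5 is checked at the end.

```python
# A sketch of the plural power set Ppl(X) and the distributive operator ::.

from itertools import chain, combinations

def Ppl(X):
    """All subsets of X with at least two members (including X itself)."""
    xs = list(X)
    subs = chain.from_iterable(combinations(xs, r) for r in range(2, len(xs) + 1))
    return {frozenset(s) for s in subs}

happy = {"ann", "bob", "carol", "dan"}     # [[Happy(x)]]
the_children = frozenset({"ann", "bob"})

# 'The children are happy' is true iff the set of children in question is
# an element in Ppl([[Happy(x)]]).
assert the_children in Ppl(happy)
assert frozenset({"ann"}) not in Ppl(happy)   # singletons excluded

n = len(happy)
assert len(Ppl(happy)) == 2**n - (n + 1)      # footnote 5's cardinality
```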
An open plural address is normally of the form (8.14c), representing (8.14a)
with SA (8.14b):
(8.14) a. There were (some) cats.
b. [SA tree garbled in extraction: plural existential over ::Obj(x̄) and ::Cat(x̄)]
c. d–4 [a | ::Cat(ā)]
4 For languages, such as classical Arabic and Ancient Greek, with a morphological category ‘dual’, in
addition to ‘singular’ and ‘plural’, special provisions must be made for sentences quantifying over two
elements. It is not clear, at this stage of the enquiry, whether an underlying numeral two will suffice for
the purpose.
5 Standardly, if X has cardinality n, P(X) has cardinality 2^n. For plural power sets, as one will easily
figure out for oneself, if X has cardinality n, Ppl(X) has cardinality 2^n − (n + 1).
Assuming ABPC to be the logic in charge (but SMPC will do as well in this
case), the plural existential quantifier SOME yields truth just in case there is a
nonnull intersection of at least one plural set of individuals of the two term
extensions concerned, which now are sets of sets of individuals. SOME is again
an instruction to set up a new address of the right type. In (8.14c), a
represents plural SOME and binds the variable. (8.14c) thus requires that
there be at least one set of at least two actually existing cats.
The distributive operator :: makes it possible to account for a sentence
like (8.15a), rendered by the SA (8.15b) and the corresponding open address
(8.15c):6
(8.15) a. There were cats (that were) running away.
b. [SA tree garbled in extraction: SOME x̄ over NP1 = ^x̄(::Obj(x̄), ::Cat(x̄)) and NP2 = ^x̄(::Run away(x̄))]
6 Sentence (8.15a) is meant to be taken in the purely existential way, and not as Some of the cats were
running away, which is perhaps better treated in a format analogous to that of All (of) the cats were
running away, as in (8.30) below. Perhaps all quantifiers should be open to a double treatment, one in
which they do and one in which they do not involve a definite restrictor term.
reading of They ran away, incremented as in (8.18b), lets the cats in question
run away individually.
Typical collective readings are found in sentences like:
(8.19) a. The mice have been at the cheese.
b. The Americans were the first to land on the moon.
In their common reading, these do not imply that all the mice have been at
the cheese, or that all Americans were the first to land on the moon, as they
are about the mice, or the Americans, as a group. (Embarrassingly, these
sentences are true even if there was one single mouse at the cheese or
one single American on the moon—a fact that our formalism is as yet unable
to account for.)
Now consider the at least three-way ambiguous (8.20a) with the SAs
(8.20b) (in two versions) and (8.20c):
(8.20) a. The men carried a bag.
b. [SA tree garbled in extraction: So with Pred Carry (optionally under ::), subject NP1 = the x̄ (::Man(x̄)), and an object term AN y over Bag(y)]
c. [SA tree garbled in extraction: So with lambda predicate ::S1 over subject NP1 = the x̄ (::Man(x̄)); within S1, AN ȳ binds ^ȳ-terms built on ::Carry(x̄, ȳ) and Bag(ȳ)]
The SA (8.20b) has two guises, one with and one without the distributive
operator :: over the predicate Carry. With ::, (8.20b) says that there was a bag
that the men carried individually, so that the same bag was carried as many
times as there were men. Without ::, we have the group reading, saying that
the men combined forces to carry one single bag. IP produces d–11 in (8.20d)
as the result of (8.20b), with or without the distributive operator :: over
Carry. In either reading there is just one single bag. Supposing D already
contains the open address
d–12 [a | ::Man(a)]
(‘there were men’), this address is now closed, analogously to d–1 in (8.11d),
resulting in (8.20e), again with or without the distributive operator :: over
Carry.
(8.20c), however, does not speak of one single bag but says that the set of
men in question is one of those sets of individuals such that each individual
had a bag to carry, so that there were at most as many bags as there were
individuals, and perhaps less, if two or more of the men carried a single bag.
This latter reading is incremented as (8.20f): the set of men referred to by d–12
was such that for each man there was a bag carried by him (alone or with one
or more others).
The main predicate of (8.20c) is the propositional function (= predicate)
S1. S1 is a tree-structure version of what is known in logic as a LAMBDA
PREDICATE. The lambda operator λ creates predicates, enabling one to incorporate
quantificational and other operators into a predicate. In this case, the
lambda predicate denotes the set of those sets of at least two individuals who
have a bag to carry, individually or collectively. A variable is needed to ensure
that this lambda predicate is a propositional function rather than an open
address with a truth value. This variable is of a different register from those
used so far in the address notation. For that reason we revert to the end of the
alphabet and use x, here type-raised to x̄, because the carriers are groups of at
least two individuals. The lambda predicate is incorporated as such into the
address notation.
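The denotation of this lambda predicate can be sketched as a membership test. The carrying relation and names below are invented for illustration; the function checks the distributive reading described above: the group has at least two members and each member carried some bag, alone or with others.

```python
# A sketch of the lambda predicate of (8.20c): the set of those sets of
# at least two individuals such that each had a bag he carried (alone or
# together with one or more others).

def each_carried_a_bag(group, carries):
    """Distributive reading: every member of the group carried some bag."""
    return len(group) >= 2 and all(
        any(man in carriers for bag, carriers in carries.items())
        for man in group
    )

# carries: bag -> the set of men who (jointly) carried it
carries = {"bag1": {"al", "bo"}, "bag2": {"cy"}}
the_men = {"al", "bo", "cy"}

assert each_carried_a_bag(the_men, carries)            # distributive: true
assert not each_carried_a_bag({"al", "dee"}, carries)  # dee carried no bag
```

Note that al and bo jointly carrying bag1 still counts, matching the observation that there may be fewer bags than men if two or more of them shared one.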
With this lambda predicate, the sentence says that the set of men in
question is one of those sets of at least two individuals who have a bag to
carry, individually or collectively. The collective or group reading need not be
represented, as it is already given in (8.20b) without :: over Carry, which may
be seen as an instance of lambda reduction. Therefore, (8.20c) only gives the
distributive reading, which is not captured by (8.20b). This reading requires
two occurrences of the distributive operator ::, one for the predicate ::λx̄ [b |
246 The Logic of Language
In the strictly existential sense, the quantifiers MANY and FEW are treated as
variants of the neutral existential quantifier SOME, regarded as a binary
second-order predicate over sets. Thus, a sentence like (8.22a), with the SA
(8.22b), comes out as (8.22c) (leaving aside, of course, the question of what
gradable MANY and FEW imply in any given context):
(8.22) a. Many cats were asleep.
b. [SA tree for (8.22a) not reproducible here]
c. d–14 [MANY a | ::Cat(ā), Asleep(ā)]
Like SOME, the quantifying predicates MANY and FEW are higher-order by
nature, requiring terms denoting sets of sets. (8.22b) says, in effect, that there is a
nonnull intersection between the set of plural cat sets and the set of plural
object sets of beings that were asleep, while the intersection of the set of cats
and the set of beings that are asleep has a high cardinality.
Formally, we define the semantics of the quantifiers MANY and FEW in the
following way (cf. (2.14b) in Section 2.3.5.2):
(8.23) For all sets X and Y:
a. [[MANY]] = { <Y,X> | Ppl(Y) ∩ Ppl(X) ≠ ∅, |Y ∩ X| is high }
(the extension of the predicate MANY is the set of all pairs of sets
Y, X, such that the intersection of Ppl(Y) and Ppl(X) is nonnull
and the cardinality of the intersection of Y and X is high)
b. [[FEW]] = { <Y,X> | Ppl(Y) ∩ Ppl(X) ≠ ∅, |Y ∩ X| is low }
(the extension of the predicate FEW is the set of all pairs of sets Y,
X, such that the intersection of Ppl(Y) and Ppl(X) is nonnull and
the cardinality of the intersection of Y and X is low)
One notes that this definition makes a sentence like Few cats were asleep false
when there was only one sleeping cat, because in such a case Ppl([[Asleep]]) ∩
Ppl([[Cat]]) = ∅, singletons being excluded from plural power sets. If one
finds that unsatisfactory, one may reduce the satisfaction conditions for
MANY and FEW to simply ‘|Y ∩ X| is high’ and ‘|Y ∩ X| is low’, respectively,
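The definitions in (8.23), including the singleton effect just noted, can be mimicked in a small computational sketch. The threshold parameters and all function names below are my own illustrative stand-ins; ‘high’ and ‘low’ are, of course, context-gradable in the text:

```python
from itertools import combinations

def ppl(s):
    """Plural power set: all subsets of s with at least two members."""
    s = set(s)
    return [set(c) for r in range(2, len(s) + 1) for c in combinations(s, r)]

def nonnull_plural_intersection(y, x):
    """Ppl(Y) ∩ Ppl(X) ≠ ∅: Y and X share at least one plural subset."""
    return any(a == b for a in ppl(y) for b in ppl(x))

def many(y, x, high=3):
    """[[MANY]] as in (8.23a); 'high' stands in for the contextual threshold."""
    return nonnull_plural_intersection(y, x) and len(set(y) & set(x)) >= high

def few(y, x, low=2):
    """[[FEW]] as in (8.23b); 'low' stands in for the contextual threshold."""
    return nonnull_plural_intersection(y, x) and len(set(y) & set(x)) <= low

# 'Few cats were asleep' comes out false with a single sleeping cat:
# the plural power sets share no member, as observed above.
print(few({'c1'}, {'c1', 'c2', 'c3'}))   # False
```

Dropping the `nonnull_plural_intersection` conjunct corresponds exactly to the reduction of the satisfaction conditions to ‘|Y ∩ X| is high/low’ mentioned above.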
[SA tree for (8.27) not reproducible here: predicate (::)Catch with quantified terms over Cat and Mouse]
c. d–17 [FEW a | ::Cat(ā), [MANY b̄ | ::Mouse(b̄), (::)Catch(ā,b̄)]]
The group reading says ‘a small group of cats caught a large group of mice’. In
this reading subsequent definite reference can be made to the large group of
mice, as in These mice had escaped from a laboratory, which requires the
inferentially added address:
(8.28) d–18 [MANY a | ::Mouse(ā), [FEW b̄ | ::Cat(b̄), Catch(b̄,ā)]]
But in the distributive reading, with ::Catch(b̄,ā), subsequent definite
reference is not possible, which means that inferential bridging of the kind
at issue must be blocked. The passive of (8.27a):
(8.29) Many mice were caught by few cats.
is equivalent to (8.27a) only in the group reading. In the distributive reading
scope differences destroy the equivalence.
7
It is possible to hide the discourse-dependency of the universal quantifier in a conditional (see
Section 8.2.4), as when one says If there is a set of farmers, then all farmers grumble, which seems to be a
sentence type involving the use of any as in Any doctor will tell you that smoking is bad. But here again,
the set of farmers has to be introduced first, albeit under the conjunction if. And, of course, saying If
there is a set of farmers, then all farmers grumble is not the same as saying All farmers grumble.
8
One notes that the condition [[F]] ∈ Ppl([[G]]) is equivalent with the condition [[F]] ⊆ [[G]], with
[[F]] and [[G]] as natural sets, just as [[F]] ∈ P([[G]]) is equivalent with [[F]] ⊆ [[G]]. The advantage of
the formulation [[F]] ∈ Ppl([[G]]) is that it provides a unified solution to the type problem caused by
standard analyses for sentences like All the farmers dispersed. The analysis ∀x(Farmer(x) →
Disperse(x)) or, in terms of generalized quantification, ∀x(Disperse(x),Farmer(x)) will not do because
Disperse(x) contains a type error. Interestingly, the standard analysis may be taken to explain why
the tenseless ‘eternal’ sentence All farmers disperse is infelicitous, but it fails to account for the
felicitousness of an occasion sentence like All the farmers dispersed. The analysis given here provides
Discourse incrementation 251
c. d–19 [a | ::Farmer(ā)]
d. d–19 [a | ::Farmer(ā) // ALL[::Grumble(19)]]
In this version of ALL, (8.30b) is read as ‘the individual grumbling of each
farmer was total as regards the set of farmers’. The quantifier ALL thus functions
as an adverbial modifier of the L-proposition Grumble(the farmers).9
For (8.30a) to be incremented, D must already contain d–19 [a | ::Farmer(ā)]
(‘there are farmers’), thus ensuring that the class of farmers is nonnull.
The address d–19, if still open, is closed, following the primary definite
reference to this particular set of farmers. [[::Grumble(x)]] denotes the set
the answer. Even so, ‘occasion’ ALL is not suitable for just any higher-order predicate. A sentence like All
farmers are numerous is incoherent even though numerous is an intrinsically higher-order predicate.
The reason seems to be that the natural language semantics of ALL requires that no member of the
restrictor set (the set of farmers, in this case) be left out—a condition that makes sense for predicates
like disperse or sit in a circle but not for a predicate like numerous.
9
This might help explain the hitherto unexplained fact that all is allowed to ‘float’, occupying an
adverbial modifier position, as in The farmers all grumble, whereas such ‘floating’ is impossible, or
anyway much less current, for existential quantifiers. Note that ‘floating’ quantifiers typically occupy
adverbial positions in the sentence, not only in English but in all languages I have so far checked.
10
What has remained unexplained in the account given above is the possibility of the negative
polarity item any occurring in restrictive relative clauses attached to the restrictor term of sentences
quantified with all and every, but not, at least according to many speakers, with each:
(i) Every student who had done any work passed.
(ii) All students who had done any work passed.
(iii) *Each student who had done any work passed.
Nor do I have an explanation for the fact that each and all are allowed to ‘float’, as in The students
each (all) went home, while this possibility does not exist for every: *The student every went home.
8.2 Instructions
The incrementation procedure IP is also able to follow INSTRUCTIONS con-
straining the further development of any given D. All standard operators of
propositional logic are, from a discourse-semantic point of view, instructions.
The logic of the propositional operators is seen as an emergent property of
basic-natural set theory combined with the discourse-semantic incrementa-
tion instructions. In the following subsections it is shown how this ‘emer-
gence’ can be traced in detail. This way, our combined theory of basic-natural
set theory and discourse-semantic incrementation is meant to provide an
alternative to the currently dominant pragmatic accounts.
8.2.1 Conjunction
The sentential functor AND is, in principle, nothing but an instruction to
increment the conjuncts in the order given. It is the basic discourse-incre-
mentation functor. In many cases, IP is iconic in that it follows the temporal,
causal, or motivational order of the events or situations described, as appears
from the difference between (8.35a) and (8.35b):
(8.35) a. She went to Spain and married. (A Spaniard?)
b. She married and went to Spain. (Alone or with her husband?)
This is not always so, as will become clear in a moment. When the incremen-
tation is iconic, we speak of an ORDERED INTERPRETATION.
Whether the difference between (8.35a) and (8.35b) is truth-conditional or
not is hard to say.11 If I hear (8.35a) and am then told that, in fact, she married
first and then went to Spain, I think I would feel cheated and I might be
prepared to say that what I was told was false. Standard logic is unable to
account for this, but it is consistent with a dynamic logic as proposed in
Groenendijk and Stokhof (1991).
But this is by no means always so. In sentence (8.36), the order of the
conjuncts is definitely relevant but not, it seems, truth-conditionally so:
(8.36) It’s raining and we’re out of booze.12
Here the ordered interpretation is motivated by the speaker’s wish to paint a
picture of utter misery. It wouldn’t be so bad if it were raining and speaker and
company still had a sufficient supply of spiritual refreshments, but being out
of booze while it’s raining is, one gathers, the ultimate agony for speaker and
company.
As is said in Section 7.2.2.3, the second conjunct normally restricts the
first and not vice versa, precisely because, as is said in Section 7.3, the
normal function of every new increment consists in homing in on the target
situation, restricting the number of possible situations in which D can be
true. This is, however, not an absolute rule, because sometimes the
new increment recapitulates what has been said or draws a conclusion, as
in (8.37), where (in terms of valuation spaces) the fact of John’s being in
his office is restricted by the fact that his light is on. Yet in (8.37) John’s light
is on precedes he is in his office, which is made possible by therefore, which
draws a conclusion:
(8.37) John’s light is on and, therefore, he is in his office.
The standard function of every new increment to restrict the set of possible
situations in which D is true is naturally expressed in terms of VS-modelling
(see also Figure 7.1 in Section 7.3.2). Take the two logically and semantically
independent sentences (where she is keyed to the same person):
11
According to Cohen (1971) the difference is truth-conditional, but according to Gazdar (1979:
69–71) it is a Gricean implicature, defeasible by, for example, the addition of in reverse order to a
sentence like (8.35a). Gazdar’s argument, however, fails to convince, because the sentence with that
addition is not the sentence without it: the one may have an entailment which the other lacks.
12
See Blakemore and Carston (2005) for related and highly interesting cases of and-conjunction
such as, for example, Paul can’t spell and he is a linguist.
[VS-model diagram omitted: valuation spaces /38/ and /39/ in two configurations, a and b]
Some politicians only do politics and forget about policy—a conjunction type
that linguists tend to stay away from.13
It won’t do to speak of multiply ambiguous and unless one is forced to. The
ideal solution would be to reduce all existing varieties of conjunction to
speech-act or propositional conjunction, but the literature is far from unani-
mous on the success of that enterprise.
The reduction of natural language and to full L-propositional conjunction
is complicated by the fact that the conjunctor AND does not conjoin full
L-propositions in surface structure but induces the grammatical process of
CONJUNCTION REDUCTION (CR). The grammars of all languages allow for
a variety of ways to shorten full L-propositional conjunction in the
corresponding surface structures, which then require a reconstruction of the
full L-propositional form by way of a syntactic parsing procedure.14 In this
regard, some notable success has been achieved in linguistics, except for
so-called PHRASAL CONJUNCTION, as in (8.40), which is, of course, not reducible to
‘John is a nice couple and Rose is a nice couple’:
(8.40) John and Rose are a nice couple.
Phrasal conjunction clearly requires a separate analysis, which may result in
the conclusion that here, too, one has to do with L-propositional conjunction,
albeit in an encapsulated form. But no well-motivated answer to this question
has so far come to light.15
CR is already visible in (8.35a,b), where the common subject of both L-
propositional conjuncts is eclipsed in the second conjunct. More drastic cases
of CR are shown in (8.41) (the commonly used labels for the types of CR have
been added):
(8.41) a. (Both) Bert and Alex bought a new car. (Left CR)
b. Bert bought a car for Mary and a horse for Sue. (Right CR)
c. Bert bought a car and Alex b̶o̶u̶g̶h̶t̶ a horse. (Gapping)
d. Bert likes and Alex hates the play. (Right-Node Raising)
What we now see is that some of these reduced forms are and some are not
open to an ordered interpretation. To begin with, the form of CR known as
13
But see Seuren (1996: 325–6) for the scope-sensitivity of AND with regard to quantifiers and the
ensuing restrictions on the syntactic processes of CONJUNCTION REDUCTION.
14
For more detailed discussions, see, for example, Van Oirsouw (1987), Seuren (1996: 323–38).
15
Nor do I have an account of the idiosyncratic use of and in texts like:
(i) Moving now to sports—and in London, Arsenal have qualified for the Europe cup.
As far as I am aware, this use of and is restricted to BBC radio English.
8.2.2 Negation
Negation looks simple but that appearance is deceptive. It has all the com-
plexities one should expect of natural language phenomena. Like AND, it
primarily takes propositions and propositional functions in its scope, but,
unlike AND, it cannot stand over a speech-act operator. It may restrict itself
to predicates, as in nonconformist, non-Catholic, but such cases can, in princi-
ple, be treated as negation over a propositional function because that is
what first-order predicates are, propositional functions that take one or
more individuals and deliver a truth value—that is, of type (e,t). The same
applies to the internal negation of predicate calculus, as in Some fishermen do
not swim. Here again, the negation is construed as applying to a propositional
function, in this case the function ¬ ::Swim(x).
Lexically incorporated negation, discussed in Section 8.6.4 of Volume I,
presents many problems, as already noted by Aristotle. The words polite and
impolite denote not contradictory but only contrary properties. Neither
immoral nor amoral is the contradictory of moral, though both are contrary
with it. This, however, is a matter of lexical semantics, not relevant in the
present context.
What is relevant is the phenomenon of metalinguistic usage, as in:
(8.47) a. Not Liz, you twit, but Queen Elizabeth has just been on TV.
b. The man isn’t intelligent, he’s a whopping genius.
c. In this house, we don’t eat grub, we eat food.
d. I did NOT only see Act One, Two, and Three, I only saw Act One.
Cases like these are discussed in Horn (1985, 1989) and, in their wake, in a
flurry of publications during and after the 1990s. The main trend, in this
literature, is to treat these cases as instances of a ‘pragmatic’ transfer from
the standard ‘descriptive’ negation to a metalinguistic, more or less
‘metaphorical’, negation, but how exactly this metamorphosis takes place has
never been made clear in a properly falsifiable way.
In Seuren (1988, 2000) I argue that if such pragmatic accounts predict
anything, it is this: when someone suggests that, say, some politician is a crook
by quoting the famous ironical line And Brutus is an honourable man from
Shakespeare’s Julius Caesar, a second person, who strongly believes that the
politician in question is entirely blameless, should, if the pragmatic account is
correct, be able to rebut this suggestion by saying And Brutus is NOT an
honourable man, which, of course, does not work. All such a person can
do is say something like To hell with your Brutus! It is also argued there that,
while cases like (8.47a,b,c) are instances of lexical-choice correction, cases
like (8.47d), though likewise of a ‘metalinguistic’ nature, belong to a separate
category, namely the category of presupposition denials. This latter point
is taken up in Section 10.4, and we will not touch on it here. Cases of lexical-
choice correction are discussed below.
Leaving aside, for the moment, distracting phenomena such as those
mentioned above, we start with ordinary sentential negation. For IP,
sentential negation, as in NOT-S, is an instruction banning the incrementa-
tion of S from D. The banning order is symbolized in D-representations as
‘ * ’. Ordinary (default) negation is presupposition-preserving, which means
that the non-negated proposition must be normally incrementable, or, if
you like, must ‘have the right papers’, for the D at hand. IP takes the
subject-S (the scope) of NOT in the SA-tree and processes it first without
negation (if it hadn’t been processed already, for it to be negated subse-
quently by a second speaker). Subsequently, NOT places an asterisk before
the increment.
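The banning instruction can be pictured in a minimal sketch. The class and method names below are mine, intended only to illustrate the asterisk convention, not to implement the author's IP:

```python
class D:
    """A toy discourse domain: an ordered list of (marker, increment)
    pairs, where the marker '*' is the banning asterisk placed by NOT."""
    def __init__(self):
        self.increments = []

    def increment(self, s):
        """Plain incrementation of a proposition."""
        self.increments.append(('', s))

    def negate(self, s):
        # IP processes the subject-S of NOT first, without negation,
        # then places the banning asterisk before the increment
        self.increments.append(('*', s))

d = D()
d.increment('a cat caught a mouse')   # the pre-existing open address
d.negate('the mouse escaped')         # NOT-S: incrementation of S is banned
```

On this picture, double negation would amount to banning the ban, re-establishing the increment by a roundabout route, in line with the discussion that follows.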
Double negation bans the banning, and thus establishes incrementation in
a roundabout way, based on inferential bridging. Treble negation, as is easily
seen, makes one dizzy and confused, in accordance with the principle of
natural set theory PNST–6, discussed in Section 3.2.2, which says that the
function COMPLEMENT is nonrecursive in basic-natural, and only once-recur-
sive in strict-natural set theory. Quadruple negation is, though formally well
defined, entirely unrealistic as a cognitive process—unless, of course, the
later negations are not logically functional but serve only to lend emphasis
to the first, original, negation, as in the New York Black-English sentence
(8.48a), taken from Labov (1972: 130), or the London Cockney (8.48b), taken
from the 1960s BBC TV play Cathy Come Home (see also note 23 in Chapter
8 of Volume I):
(8.48) a. It ain’t no cat can’t get in no coop.
b. ‘E’s an odd fella. ‘E ain’t never been no good to no woman, not
never.
Grammatically, both (8.48a) and (8.48b) have four negations (if we
disregard the rhetorical addition not never in (8.48b)) but only the first is
logically functional; the other three result from the well-known grammatical
process of NEGATION COPYING, found in a vast number of languages in some
form or other, and putting a copy of the ‘original’ negation on some or all
subsequent existential operators (see Seuren 1996: 269). (8.48a), where the
copying of the negation even penetrates into a relative clause, is equivalent
to standard English There isn’t any cat that can get into any coop. (8.48b)
reads as the standard English He is an odd fellow. He hasn’t ever been any
good to any woman, which, somehow, makes less of an impression than
(8.48b).
Following the argument in Chapter 2, we treat negation as an abstract
predicate in SA-structure, like the other propositional operators and the
quantifiers. Its subject-S is what is normally called its scope. Thus, (8.49a)
has the SA (8.49b). (8.49c) is the incremental result on the basis of the pre-
existing and therefore pre-recorded open address d–2 [a | Mouse(a), [b | Cat(b),
Catch(b,a)]] (‘A cat caught a mouse’) given above in (8.11).
(8.49) a. The mouse did not escape.
b. [SA tree not reproducible here: NOT takes as its subject-S the L-proposition Escape(the x: Mouse(x))]
(8.47) a. Not Liz, you twit, but Queen Elizabeth has just been on TV.
b. The man isn’t intelligent, he’s a whopping genius.
c. In this house, we don’t eat grub, we eat food.
These are cases of lexical-choice correction and, therefore, of a meta-
linguistic use of the negation. No grammar and no proper semantics has
been set up, as yet, for cases of this nature, probably because the dominant
attitude has been, over the past twenty years, to relegate such phenomena
to pragmatics.
It cannot be our purpose here to present a full formal account of such
sentences, as that would require a separate monograph. What we can do is
point out that sentences of this type have a common structure representable
as something like (8.55a), where the quotation marks signal the metalinguistic
nature of the sentences at issue. A more formal (SA) rendering of (8.55a)
would be (8.55b), where the caret quotes secure reference to the phonological
(or, if appropriate, the phonetic) form of the words in the range of the
variables a and b and where Bev is the value-assigning predicate discussed
in Section 3.2:
(8.55) a. The proper expression for x in ‘—x—’ is not ‘a’ but ‘b’.
b. NOT [Bev ^a^ (the proper expression for x in ^—x—^)];
Bev ^b^ (the proper expression for x in ^—x—^)
(8.47a) is then read as ‘The proper expression for who has just been on TV is
not ^Liz^ but ^Queen Elizabeth^’. (8.47b) is read as ‘The proper expression
for what this man is is not ^intelligent^ but ^a whopping genius^’, while
(8.47c) reads as ‘The proper expression for what we eat in this house is not
^grub^ but ^food^’.
In the analysis given in (8.55b), this type of sentence is a metalinguistic variety
of the cleft construction, expressing a topic–comment-modulated proposition.
This is confirmed by the fact that the isolation of x from ‘—x—’ is subject to
the normal isolation—that is, extraction or insertion—constraints that have
been found to be valid in grammars, as appears from, for example, (8.56a), where
Liz’s cannot be isolated from the position occupied by Queen Elizabeth’s
because the position involved is a modifier genitive position and modifiers
cannot be isolated from S-structures (we can say John whose wife died but
not John’s who wife died). Not so in (8.56b), where the position involved is
that of a full NP, not of a modifier, or in (8.56c,d), where no isolation has taken
place:
(8.56) a. *It isn’t Liz’s, you twit, that he praised hat but Queen Elizabeth’s.
b. It isn’t Liz, you twit, that he praised the hat of but Queen
Elizabeth.
c. He praised not Liz’s, you twit, but Queen Elizabeth’s hat.
d. He praised the hat not of Liz, you twit, but of Queen Elizabeth.
Similar observations can be made regarding (8.47b) and (8.47c):
(8.57) a. *It isn’t an intelligent man’s that he is son but a whopping
genius’s.
b. It isn’t an intelligent man that he’s the son of but a
whopping genius.
c. He is not an intelligent man’s but a whopping genius’s son.
d. He is the son not of an intelligent man but of a whopping genius.
(8.58) a. *In this place, it isn’t grub we have standards but food.
b. In this place, it isn’t grub we have standards for but food.
c. In this place, we don’t have grub but food standards.
d. In this place, we have standards not for grub but for food.
How exactly the grammar turns a (8.55b)-type structure into a
corresponding surface structure, or what parameters are required in a seman-
tic theory for a proper formal interpretation of cases like (8.47a–c) are, as has
been said, questions that have so far received no attention at all in the
linguistic literature. Given this unsatisfactory state of affairs, it would be futile
even to try to present an incrementation procedure for such cases.
8.2.3 Disjunction
The disjunctive operator OR is far from analogous to its conjunctive counter-
part AND. While AND can be used to conjoin speech acts, as in the sentence Go
home and nobody will know what has happened quoted above, it is not at all
clear that OR can be used in an analogous way. One might think of an example
like (8.59a), but it is unclear how it should be read in the systematic terms of
speech act operators and propositions. We revert to this issue below, when
discussing the tacit expansion of disjunctions to the form specified in (8.66).
(8.59) a. Don’t try, or you’ll get caught.
b. Do you want coffee or do you want tea?
(8.59b) is likewise not without problems. One may presume that it reads as
something like ‘I am asking which: what you want is coffee or what you want
is tea’. If that is correct, OR is used in combination with the specific-question
operator WHICH asking for a choice to be made among given alternatives. Little
is known about how such a combination works, owing to the primitive state
of research on the grammar of speech acts. One notes, incidentally, that
(8.59b) is unambiguous as regards the scope of OR, unlike (8.60), which is
ambiguous (unless intonation is taken into account):
(8.60) Do you want coffee or tea?16
Like NOT, OR can be used to correct lexical choices, as in:
(8.61) He was a sad, or rather pathetic, man.
AND can only be used this way when followed by NOT, as in (8.62a), and in such
cases it can be left out. One notes that the use of but instead of and, as
in (8.62b), takes away the metalinguistic reading, leaving only the object-
language reading:
(8.62) a. He was a sad, (and) not a pathetic, man.
b. He was a sad, but not a pathetic, man.
Like AND, OR has a form of phrasal coupling, as in the example John and Rose
are a nice couple quoted above as an instance of phrasal conjunction. The
disjunctive version would be something like:
(8.63) Aberdeen or Inverness is an impossible choice.
Yet phrasal disjunction, as in (8.63), has a metalinguistic flavour which
phrasal conjunction lacks. (8.63) reads as ‘“Aberdeen or Inverness” is an
impossible choice’.
One remembers from Section 7.2.2.3 the curious reduction of (8.64a) to
(8.64b):
(8.64) a. Paul may be at home and he may be in hospital.
b. Paul may be at home a̶n̶d̶/or in hospital.
On the whole, the relation between natural language AND and OR is far from
clear. It is certainly not exhausted by the purely logical account of these two
operators.
16
There is an anecdote about Bertrand Russell, said to have replied ‘Yes’ to a stewardess asking him
(8.60). According to the anecdote, Russell wanted to teach the poor lady that or is inclusive, not
exclusive, but, if that is so, he would have had to ignore the stewardess’s intonation, which, in all
likelihood, had a rising tone on coffee and a falling tone on tea.
It is clear that (8.70) and (8.71) express quite different speaker commitments,
just as Frege’s original The morning star is the evening star expresses a quite
different speaker commitment from The morning star is the morning star. But
this is due to the different truth conditions. The actual identity of the
morning star and the evening star ensures that, despite the different truth
conditions, the truth value stays unchanged.17
As regards D-structures, it seems to be generally so that subdomains
created in virtue of an instruction associated with a propositional operator
are always extensional in the sense defined. This holds for conjunction,
negation, disjunction, and also, as will become clear in a moment, for condi-
tionals and it seems to be due to the fact that the increments stored in these
subdomains must be well-anchored—that is, ready for direct attachment
to the current D (they must have ‘the right papers’ for the current D).
Being a subdomain set up under such an instruction is thus a sufficient
condition for the subdomain to be extensional, but it is not a necessary
condition. For there are extensional subdomains that are not the result of
such an instruction but are embedded under a lexical predicate, such as
the subdomain embedded in the object position of the verb Cause, as in
(8.72a), or the factive subdomain embedded in subject position under verbs
like Prove or Suggest, as in (8.72b):
(8.72) a. The arrival of the police caused the butler to flee.
b. That the butler had fled suggested that he was guilty.
Suppose the butler in question is identical with the person referred to by
the phrase the man with the white gloves; then any truth value of (8.72a) or
(8.72b) remains unchanged when the man with the white gloves is substituted
for the butler.
Semantically, it is clear why this should be so. Just as Frege said (see Section
7.2.1.2), a subdomain is intensional when it reflects the thought-contents of a
thought-predicate, usually in object position. Since Cause is not a thought-
predicate, it is listed in the lexicon as extensional with regard not only to its
subject but also to its (possibly sentential) object term, while Suggest is
lexically marked as extensional with regard to its subject term, whether
sentential or nominal, but intensional with regard to its object term because
Suggest is a thought-predicate and its object term contains the contents of
what is suggested.
17
I have wavered on this issue in the past but, on reflection, I think I must come down in favour of
the standard position and retract what I argued for in Seuren 1985: 396.
8.2.4 Conditionals
Let us now have a look at conditionals—that is, sentences of the type if P then
Q. Ever since the Stoic philosophers set up their propositional logic during the
third century BCE, natural language IF has been widely regarded, especially by
philosophers and logicians, as representing the truth-functional operator of
material implication—a view that immediately gave rise to public controversy,
as appears from a little epigram written by the Alexandrian librarian and poet
Callimachus in the third century BCE, saying ‘Even the crows on the rooftops
caw about the nature of conditionals’ (Kneale and Kneale 1962: 128). The
question is: were the Stoics, and with them the logical tradition till the present
day, right or wrong in considering IF a truth-functional propositional operator?
The answer to that question is not simple. Perhaps one should say that to a
considerable extent they were right, but to some extent they were not. Such an
answer, of course, calls for some comment.
Most semanticists assume that the natural language subordinating conjunc-
tion if represents the well-known truth-functional operator ‘!’ of propositional
logic, also known as ‘material implication’, which yields falsity only when the
antecedent clause (or if-clause, also called by the ancient Greek name prótasis) is
true and the consequent clause (also known as apódosis) is false, and truth in all
other cases. Yet the same semanticists accept, or admit, that this assumption is
tenable only with a heavy supply of pragmatic support. It seems incontrovertible
that the equation of natural language if with material implication falls far short
of accounting for the way if is naturally used by speakers, which gives rise to the
question of whether the deficit is to be made up for by pragmatics or by a
different kind of logic, for example basic-natural logic, combined with the
mechanics of discourse incrementation. The latter is preferable on methodolog-
ical grounds, as it makes for greater precision.18 It is also the view defended here.
18
Johnson-Laird (1986) agrees that context plays an important role in the interpretation of
conditionals, but he takes a dim view of the powers of logic to account for their vagaries. He writes
(1986: 73):
In searching for a logical, or in any case formal, basis we are helped a great
deal by what we have found regarding the semantics of OR. In standard logic,
of course, P!Q is equivalent with ¬P ∨ Q, and it is interesting to see to what
extent this parallelism can be sustained—an issue to which we return below.
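For standard logic, at least, the equivalence just mentioned is easily verified mechanically. This is a routine truth-table check, not an analysis of natural-language if:

```python
def material_implication(p, q):
    # '!' of propositional logic: false only when the antecedent is true
    # and the consequent is false, true in all other cases
    return not (p and not q)

# P → Q and ¬P ∨ Q agree on all four valuations
for p in (True, False):
    for q in (True, False):
        assert material_implication(p, q) == ((not p) or q)
print('P → Q ≡ ¬P ∨ Q holds in all four cases')
```

Whether this equivalence survives in the basic-natural setting is precisely the issue taken up below.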
Meanwhile we can say that the parallelism fails in some respects. For example,
it fails with regard to the number of propositional terms taken by IF: whereas
OR can take any number of arguments, IF is restricted to two. There is a parallel
in that speech-act conditionals are as unclear as regards their analysis as are
speech-act disjunctions. A sentence like (8.73), though perfectly normal from
the point of view of English usage, has not so far received a proper analysis:
(8.73) Shoot me, if you dare.
NP-conditionals, as a parallel to NP-disjunctions and NP-conjunctions, are
questionable. (8.74a) is quaint, though better with also; (8.74b) is a great deal
better:
(8.74) a. ?If Aberdeen then (also) Inverness is an impossible condition.
b. If not Aberdeen then Inverness is an impossible dilemma.
Finally, conditionals seem to be subject to different conditions of contex-
tual fit from disjunctions. For example, if someone asks ‘Why does Philip pay
so much income tax?’, (8.75a) may be an appropriate answer but (8.75b) can
hardly be:
(8.75) a. Either he doesn’t know the law or he is a bachelor.
b. If he knows the law, he is a bachelor.
There are, therefore, considerable differences between conditionals and dis-
junctions, despite the fact that they are logically interchangeable.
How do people interpret conditionals? They set up a mental model based on the meaning of the antecedent, and on
their beliefs and knowledge of the context. They then determine the nature and the degree of the relation between
antecedent and consequent. This process may lead to a recursive revision in the antecedent model. Finally, if need be,
they set up a scenario relating the model of the consequent to the antecedent model. The relation may be merely that
the consequent state of affairs is relevant to a protagonist in the antecedent model, or it may be a logical, temporal,
causal or deontic relation between the two models.
What are the logical properties of conditionals? They are many and various. Conditionals are not creatures of a
constant hue. Like chameleons, as I once put it, they take on the colour suggested by their surroundings. Their logical
properties depend on the relation between antecedent and consequent, and that in turn depends on beliefs.
Although I can agree with Johnson-Laird in certain respects, I do find him more than a little failing in
his duty to be precise. In particular, to say that the logic of conditionals is like a chameleon, taking its
properties from ‘the relation between antecedent and consequent’ is a little too vague to my taste.
Greater precision is possible and should, therefore, be striven for.
272 The Logic of Language
But let us now look at the similarities. The schema IF P then Q does
not oblige the listener to add either P or Q to D. What it says is that
there are two possibilities: either P is to be incremented, but then also Q, or
else NOT-P is incremented. The question is: what happens when NOT-P is
incremented? Strict-natural and constructed standard logic say that no com-
mitment is entered with regard to Q in case NOT-P is incremented: Q may be
true (incremented) or false (not incremented). But this is not what basic-
natural logic says. Basic-natural propositional logic follows the principle of
natural set theory PNST–3, formulated in Section 3.2.2, and repeated here:
When two (or more) sets A and B undergo union, A and B are natural sets and are, at
the level of basic, but not strict, naturalness, totally distinct, with no element in
common, so that |A ∪ B| = |A| + |B|.
19 In anticipation of Chapter 10, the negation operator is considered to be the presupposition-
preserving minimal negation and not standard bivalent ¬.
20 Comrie (1986: 87) mentions examples (i) and (ii), in German and English, respectively, where the
antecedent has the grammatical form of a question. Dutch has the same phenomenon, as shown
in (iii):
(i) Hätte er das getan, wäre ich glücklich gewesen.
(ii) Had he done that, I would have been happy.
(iii) Had hij dat gedaan, was ik gelukkig geweest.
Harris (1986: 276–7) points at the frequently found parallel between conditional clauses and
embedded or indirect questions: ‘We must now look briefly at one quite separate use of SI/si/se in
Romance, namely as the complementizer required when the embedded sentence was originally a
polar question.’ The same phenomenon is found in English, which uses if for both conditional
antecedents and embedded polar questions.
Turkish regularly uses questions, next to other constructions, to express conditionals: O geldi mi,
ben burada durmam (Did he come? I won’t stop here) (Lewis 1984: 267).
construction of the type If you were clever, you wouldn’t buy shares, counter-
factuals being analysed as IF P then Q, with the presupposition that P is false.
But apart from these discourse-related aspects of conditionals, there is a
great deal of explanatory profit to be gained from expanding, in the sense of
(8.76), IF P then Q to IF P then Q AND IF Q then P—that is, from reading the
material implication as the material bi-implication. It is widely known
among formal semanticists (though much less outside these circles) that
the standard logical material implication leads to strongly counterintuitive
results, often called ‘paradoxes’ when applied to natural language condi-
tionals.21 The following analysis shows that most of the ‘paradoxes’ resulting
from the application of the material implication to the semantics of natural
language conditionals simply vanish as soon as natural language if is read as
if and only if.
Just for the sake of clarity, let us set up a truth table for the propositions P,
Q, and R and four relevant compositions in the following way:
TABLE 8.1. Truth table for P, Q, and R and four compositions
valuations:                          1 2 3 4 5 6 7 8
P                                    T F T F T F T F   VS: {1,3,5,7}
Q                                    T T F F T T F F   VS: {1,2,5,6}
R                                    T T T T F F F F   VS: {1,2,3,4}
a. (¬P ∧ ¬Q) ∨ (P ∧ Q)  or: P ↔ Q    T F F T T F F T   VS: {1,4,5,8}
b. (¬R ∧ ¬P) ∨ (R ∧ P)  or: R ↔ P    T F T F F T F T   VS: {1,3,6,8}
c. (¬R ∧ ¬Q) ∨ (R ∧ Q)  or: R ↔ Q    T T F F F F T T   VS: {1,2,7,8}
d. (¬(P ∧ R) ∧ ¬Q) ∨ ((P ∧ R) ∧ Q)
   or: (P ∧ R) ↔ Q                   T F F T F F T T   VS: {1,4,7,8}
The translation of Table 8.1 into a standard VS-model for the logically and
semantically independent L-propositions P, Q, and R is shown in Figure 8.2.
For the three sentences P, Q, and R, there are eight valuations such that all
possible combinations of truth (T) and falsity (F) are represented. Given the
truth values for P, Q, and R in each column (valuation), the truth values of
their logical compositions follow automatically. I have selected four specific
compositions for reasons that will become clear in a moment.
Composition a: (¬P ∧ ¬Q) ∨ (P ∧ Q) or: P ↔ Q
representing NOT-P OR Q (or: IF P, then Q, or: IF Q, then P) as construed on
the principles of basic-natural logic, is true in the valuations 1, 4, 5, and 8,
21 An incisive study in this regard is Veltman (1985), to which we refer below.
FIGURE 8.2. VS-model for the logically and semantically independent L-propositions P, Q, and R: the eight valuations distributed over the valuation spaces /P/, /Q/, and /R/ within the universe U
giving the valuation space {1,4,5,8}: P and Q are either jointly true or jointly
false.
Composition b: (¬R ∧ ¬P) ∨ (R ∧ P) or: R ↔ P
representing basic-natural NOT-R OR P (or: IF R, then P, or: IF P, then R), is
true in the valuations 1, 3, 6, and 8, giving the valuation space {1,3,6,8}: R and
P are either jointly true or jointly false.
Composition c: (¬R ∧ ¬Q) ∨ (R ∧ Q) or: R ↔ Q
representing basic-natural NOT-R OR Q (or: IF R, then Q, or: IF Q, then R), is
true in the valuations 1, 2, 7, and 8, giving the valuation space {1,2,7,8}: R and
Q are either jointly true or jointly false.
Composition d: (¬(P ∧ R) ∧ ¬Q) ∨ ((P ∧ R) ∧ Q) or: (P ∧ R) ↔ Q
representing basic-natural IF P and R then Q (or: IF Q then P and R), is true
in the valuations 1, 4, 7, and 8, giving the valuation space {1,4,7,8}: either P, Q,
and R are jointly true, or P and R are not jointly true and Q is false.
It is now easy to read off what entails what. Thus, a ∧ b ⊨ c (implication is a
transitive relation), because {1,4,5,8} ∩ {1,3,6,8} = {1,8} and {1,8} ⊆ {1,2,7,8}.
But a ⊭ d, because {1,4,5,8} ⊈ {1,4,7,8}, which means that the embarrassing
standard-logic entailment from IF P then Q to IF P and R then Q (antecedent
strengthening) no longer holds. We will come back to this in a moment, when
we discuss example (8.84).
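The valuation-space calculus used here is mechanical enough to be checked by machine. The following sketch is my own illustration, not part of the book's apparatus; it rebuilds the valuations of Table 8.1 and verifies both the transitivity result and the failure of antecedent strengthening:

```python
# Eight valuations for P, Q, R, numbered 1-8 as in Table 8.1:
# P is true in 1,3,5,7; Q in 1,2,5,6; R in 1,2,3,4.
vals = {}
for n in range(1, 9):
    vals[n] = {
        'P': n % 2 == 1,
        'Q': n % 4 in (1, 2),
        'R': n <= 4,
    }

def vs(formula):
    """Valuation space: the set of valuations in which `formula` is true."""
    return {n for n, v in vals.items() if formula(v)}

# Biconditional reading of IF: (~P & ~Q) | (P & Q), i.e. sameness of truth value.
a = vs(lambda v: v['P'] == v['Q'])                 # P <-> Q
b = vs(lambda v: v['R'] == v['P'])                 # R <-> P
c = vs(lambda v: v['R'] == v['Q'])                 # R <-> Q
d = vs(lambda v: (v['P'] and v['R']) == v['Q'])    # (P & R) <-> Q

print(sorted(a))        # [1, 4, 5, 8]
print(sorted(d))        # [1, 4, 7, 8]
print((a & b) <= c)     # True: a AND b entails c (transitivity)
print(a <= d)           # False: antecedent strengthening fails
```

Entailment is simply valuation-space inclusion, so set intersection and the subset test do all the work.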
22 For a survey of the literature in this respect, see Van der Auwera (1998) and also Declerck and
Reed (2001). See also Johnson-Laird (1986: 59–60) for an illuminating discussion.
that truth is obtained for this formula by assigning falsity (F) to P and truth
(T) to Q.
(8.82)  ¬(¬P → ¬Q)
        with P = F and Q = T: ¬P = T and ¬Q = F; hence ¬P → ¬Q = F, and ¬(¬P → ¬Q) = T
According to the standard truth table for the material implication, the only
way for ¬P → ¬Q to be false is for ¬P to be true and for ¬Q to be false, and
hence for P to be false and Q to be true, as in (8.82). Only then can the falsity
of ¬P → ¬Q be turned into truth by the wide-scope negation. All other truth-
value assignments to P and Q will lead to falsity for the whole formula.
On this, standard, interpretation, (8.81), in fact, expresses the assertion that
God does not exist and that there are moral rules. But this is not the way (8.81)
is understood by normal speakers. No (normal) humanist philosopher will
accept that the truth of (8.81) amounts to saying that God does not exist but
moral rules do. What (8.81) says is, rather, that the nonexistence of God does
not license the claim that there are no moral rules.
Again, an appeal to a basic-natural-logic interpretation helps out. When
translated in terms of exclusive OR, (8.81) reads as follows:
(8.83) It is not so that either God exists and there are moral rules,
or God does not exist and there are no moral rules.
In formal notation, this amounts to: ¬((P ∧ Q) ∨ (¬P ∧ ¬Q)). Now it is no
longer so that the only way for (8.83) to be true is for P to be false and for Q to
be true, as in (8.82). To see this, just take composition a of Table 8.1,
representing IF P, then Q, and invert the truth values. This gives the valuation
space {2,3,6,7}, the complement of {1,4,5,8}. In the valuations 2, 3, 6, and 7, it is
not so that P is always false and Q is always true, as is the case for ¬(¬P → ¬Q).
On the contrary, in the valuations 2, 3, 6, and 7, P and Q never have the same
truth value: if the one is true, the other is false and vice versa. Therefore, all
that is needed for the truth of (8.83), and thus for the basic-natural truth of
(8.81), is that P and Q have different truth values. The ‘paradox’ of (8.81) thus
evaporates under the analysis in terms of basic-natural logic.
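The contrast between the two readings of (8.81) can be verified directly. In this sketch (my own, for illustration) P stands for 'God exists' and Q for 'there are moral rules':

```python
from itertools import product

pairs = list(product([True, False], repeat=2))  # all (P, Q) valuations

def implies(p, q):
    """Material implication."""
    return (not p) or q

# Standard reading of (8.81): NOT(IF NOT-P THEN NOT-Q)
standard = [(p, q) for p, q in pairs if not implies(not p, not q)]

# Basic-natural reading (8.83): NOT((P AND Q) OR (NOT-P AND NOT-Q))
basic = [(p, q) for p, q in pairs
         if not ((p and q) or (not p and not q))]

print(standard)  # [(False, True)]: God does not exist and there are moral rules
print(basic)     # [(True, False), (False, True)]: P and Q merely differ in truth value
```

The standard reading is true in exactly one valuation, whereas the basic-natural reading requires only that P and Q have different truth values, as stated in the text.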
A further interesting example, provided by Veltman (1985: 194), is the
following:
23 Cf. Johnson-Laird (1986: 55): ‘A corollary of this theory is that the logical properties of
conditionals derive from their interpretation and not from any formal rules associated with them.’
of Jones wins the election and Smith dies before the election still licenses the
incrementation of Smith will retire to private life. And this is, of course, not so.
This, by the way, is also the conclusion reached in Veltman (1985). Veltman
argues that the logic of conditionals, to the extent that it can be a natural logic,
is predicated upon a given ‘data base’. In our terms, this means that the logic
of conditionals must be restricted by a Modulo D or ceteris paribus condition.
It would seem that such a condition hardly lends itself to full formalization,
though attempts are made in artificial intelligence to solve the formalization
problem by means of nonmonotonic logics.
Further ‘paradoxes’ resisting a solution in purely logical terms arise in the
context of what is known as CONTRAPOSITION. In standard logic the theorem
holds that whenever P → Q is true, so is ¬Q → ¬P and vice versa. In other
words, P → Q ⫤⊨ ¬Q → ¬P. The same theorem holds in basic-natural logic,
because the full expansions of P → Q and of ¬Q → ¬P are identical: namely
(¬P ∧ ¬Q) ∨ (P ∧ Q). Yet, as is observed by McCawley (1981: 50), there is a
clear difference between (8.86a) and (8.86b):
(8.86) a. If you don’t have somebody to take my place, I won’t leave.
b. If I leave, you have somebody to take my place.
The first puts a condition on my leaving, while the second expresses a conclusion
that can be drawn post hoc in the event that I leave. McCawley does not say so,
but the difference seems to be connected with the fact, discussed in Section 8.2.1,
that, as a matter of default, discourse incrementation follows the temporal,
causal, or motivational order of the events or situations described. Given the
IF-instruction, this means that the antecedent P is literally an ‘antecedent’: the if-
clause is incremented before the consequent clause. Sentence (8.86a) respects this
principle, since the lack of a replacement is the reason for my not leaving. (8.86b)
does too, but in a different way. What (8.86b) says is that in case I leave one may
conclude that a replacement has been available. Here my leaving precedes the
drawing of the conclusion.24
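That the basic-natural expansions of IF P THEN Q and IF NOT-Q THEN NOT-P coincide in every valuation, so that the difference between (8.86a) and (8.86b) must lie in incrementation order rather than in truth conditions, can be confirmed by a quick check (again my own sketch):

```python
def expansion(p, q):
    # Basic-natural expansion of IF p THEN q: (~p & ~q) | (p & q)
    return (not p and not q) or (p and q)

pairs = [(p, q) for p in (True, False) for q in (True, False)]

# Contraposition: IF NOT-Q THEN NOT-P has the same expansion throughout.
same = all(expansion(p, q) == expansion(not q, not p) for p, q in pairs)
print(same)  # True: the two expansions never differ
```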
24 Comrie (1986: 83–4) implicitly confirms the principle that, normally, the if-clause is incremented
before the consequent clause:
Greenberg (1963: 84–5) states the following Universal of Word Order 14 concerning the linear order of the two
clauses:
In conditional statements, the conditional clause [= protasis, BC] precedes the conclusion [= apodosis, BC] as the
normal order in all languages.
Work leading up to the present paper has uncovered no counterexamples to this generalization. Although many
languages allow both orders, protasis–apodosis and apodosis–protasis, many grammars note explicitly that the usual
order is for the protasis to precede, and presumably the same will hold for many languages where the grammars are
silent on this point. In some languages the protasis must precede the apodosis, in particular in languages with a rigid
rule requiring the finite verb of the main clause to stand sentence-finally (e.g. Turkish). Since the positioning of
In conclusion, we may say that some headway has been made towards the
realization of the programme of reducing the deviations from the standard
logical analysis of conditionals observed in ordinary language usage to the
differences between standard and basic-natural logic and to the design prop-
erties of discourse domains, including the instruction for conditionals. To the
extent that this programme is successful, an appeal to mostly ill-defined
pragmatic principles becomes unnecessary.
protases in such languages can be viewed as just a special case of the general rule whereby subordinate clauses must
precede main clauses, this does not necessarily say anything specific about conditional constructions. However, this
same restriction to protasis-apodosis order is also found in some languages which do not have a strict subordinate–
main clause order restriction, suggesting that there is indeed something special about conditional clauses in this
respect, i.e. the preponderance of the protasis–apodosis order in languages with free clause order is not ‘just
statistical’, but does reflect something significant about language.
9
Primary and donkey anaphora
9.1 Introduction
Apart from the continuous appeal to shared knowledge, there are in principle
four main devices in natural language serving the purpose of linking up
utterances with the current discourse in such a way that coherence is safe-
guarded: anaphora, presupposition, topic–comment modulation, and open
parameters in lexical meanings. The latter have been discussed in Section 7.4
and we will not deal with them again. The remaining three devices are
relevant in this and the following two chapters because they show up the
inadequacy of the Russellian-Quinean-Montagovian paradigm of natural
language semantics, with standard modern predicate calculus (SMPC) at its
centre. Even though the standard paradigm is meant to account for just the
truth-conditional properties of natural language sentences, leaving out the
clutter due to the exigencies of communicative usage, it breaks down on a
particular form of external anaphora, commonly called donkey anaphora,
discussed in the present chapter. It also breaks down on presupposition, as
is shown in Chapter 10. And one draws a total blank, not only in semantics
but also in the theory of grammar, when one looks for any account at all of
topic–comment modulation as distinct from predicate–argument structure.
Topic–comment modulation, already amply commented upon in Section 3.2
of Volume I, is taken up again in Chapter 11, where its role in the cementing of
discourse coherence is further elaborated and situated in a more general
cognitive and linguistic context.
The last three chapters of this book are thus a three-pronged attack on the
standard paradigm. But the standard paradigm has to cope with other threats
as well, less to do with the context-sensitivity of natural language and the
criteria for textual coherence. The main threat, apart from the three discussed
in Chapters 9, 10, and 11, is the problem of propositional attitudes, discussed
earlier in Sections 2.1.1 and 6.1 of Volume I. The inability of the standard
paradigm to account for this problem has proved to be due to the very basic
tenets of this paradigm, in particular its strictly extensional house ontology,
Primary and donkey anaphora 285
elaborated in Kamp and Reyle (1993). The roots of this theory lie in the donkey-
anaphora problem, discussed in Section 9.2.2, which is correctly diagnosed as
potentially fatal for the established paradigm of possible-world formal seman-
tics. In this theory, the mechanism of reference is mediated by a cognitive
system of mental representations, whose relation to any actual world is a matter
of independent concern. The discourse representations envisaged by Kamp and
Reyle not only contain entity representations but also store any propositional
information about the intended referents provided by prior linguistic input.
This halfway station of mental representations creates the extra room needed
for a semantic account of donkey anaphora. DRT is not a logical theory but a
formal theory of utterance incrementation, even though the format in which
newly incremented information is represented looks very much like the well-
known structure of SMPC expressions. The actual corresponding logic has been
investigated by Groenendijk and Stokhof in a number of joint publications,
notably their 1991 paper on ‘Dynamic Predicate Logic’. The main difficulty with
DRT is that its focus is too much on the donkey-anaphora problem, leaving
entirely out of account the notion of presupposition, let alone that of topic–
comment modulation, indispensable though these are for any adequate theory
of discourse incrementation. Nor does it offer a principled solution to the
problem of reference to nonexisting entities: there is no theory of virtual or
intensional entities. In general, one may say that DRT is a typical example of a
theory that has been fully formalized before the object of the theory has been
looked at from all angles so that one’s familiarity with the object gives one an
adequate idea of what is to be formalized. Such premature formalization may
impress the world for some time, but it is detrimental to the advancement of
lasting knowledge.
Practitioners of DRT have not been insensitive to the criticism that
their theory fails to account for presuppositions. Van der Sandt (1992) made
an attempt at incorporating presupposition theory into the anaphora-based
framework of DRT by equating presupposition with anaphora. This made it
look as if the presupposition deficiency of DRT could be remedied in one
swoop by declaring anaphora and presupposition one—a point of view
readily taken over by DRT-practitioners who were all too eager to dispose
of presuppositions, which they had all along regarded with diffidence
and suspicion. In fact, in
the short section on presupposition in the article on DRT in the Stanford
Encyclopedia of Philosophy (Geurts and Beaver 2007), the authors take it for
granted that, indeed, presupposition and anaphora are of a piece. Since this
misunderstanding has meanwhile gained some currency, a separate section
(Section 10.8) has been added to Chapter 10, dedicated to its refutation. There
it is shown that this prima facie absurd view is sustainable only if the notions
of presupposition and of anaphora are kept fuzzy and essential facts are
ignored.
The SITUATIONAL SEMANTICS of Barwise and Perry (1983) is another represen-
tative of this class of theories. This theory sprouts mainly from the authors’
dissatisfaction with two aspects of the standard paradigm, namely its inability
to cope with propositional attitudes (see Sections 2.1.1 and 6.1 in Volume I)
and the lack of a proper demarcation of the contextually and situationally
restricted universe of discourse in terms of which utterance interpretation
takes place. No attention is paid to the donkey-anaphora problem, nor is
there an account of reference to nonexisting entities. Presuppositions likewise
stay out of the picture, even though their function in the delimitation of
restricted universes of discourse should have been obvious. The question of
topic–comment modulation remains untouched.
In Fauconnier (1985) a system of ‘mental spaces’, complete with ‘subspaces’,
is proposed in order to account for what is called ‘transdominial denotational
transparency’ in Section 7.2.2.1. This highly readable but entirely informal
little book concentrates on examples like The girl with brown eyes has blue eyes
(see also Section 5.3 in Volume I). Contrary to what one might expect, this
sentence is not internally inconsistent. Suppose there is a picture containing
the portraits of a number of girls, one of whom is portrayed as having brown
eyes, even though in reality she has blue eyes. In such a situation, the sentence
is true. The expression the girl with brown eyes then refers to the girl as
portrayed in the picture, which is mentally represented as a special subspace.
The predication has blue eyes takes the listener back to the real world, mentally
represented as the overall commitment domain. But the same sentence may
also be taken as saying that the girl who in reality has brown eyes has blue eyes
in the picture. Which of the two readings applies depends on the context in
which the sentence is uttered. This essay is illuminating in many ways but its
scope is too restricted, and its elaboration too informal, for it to qualify as a
theory of context-sensitive utterance interpretation.
All three theories or approaches mentioned thus lack the generality that is
required for a proper theory of context-sensitive utterance interpretation.
I myself started out on this road as early as 1972, with my lengthy paper (Seuren 1972)
in Leuvense Bijdragen, which contains a fairly elaborate section on discourse
incrementation and presupposition as a condition on incrementability.
I believe this was the first publication, even preceding Isard (1975), where
the notion of discourse incrementation was mooted (apart from Stout 1896,
mentioned and quoted extensively in Section 3.2 in Volume I). Yet it was
never acknowledged, then or later, though many protagonists of the new
1 For detailed discussions of sentence-internal anaphora, see Reinhart (1983), Seuren (1985: 346–86;
1986b) and Weijters (1989).
The odd one out is the c-variant in (9.1) and (9.2): (9.1c) only allows for the
external reading, but (9.2c), like (9.2a,b), allows for both the internal and the
external reading.
The matter becomes even more intriguing when EPITHET PRONOUNS are taken
into account (see also the examples in (4.41) of Chapter 4 in Volume I). These
are, grammatically speaking, not really pronouns but full lexical noun
phrases, though with an anaphoric function. They are always unstressed
and usually express an evaluation of some kind. Examples are the following
(the epithet pronouns are in italics):
(9.3) a. Where is John? I just saw the great genius leaving the building.
b. As John entered the room, the maniac saw it was empty.
Now consider again (9.1a–d), but with the epithet pronoun the fool instead
of the neutral his or he:
(9.4) a. In the fool’s office John reads detective stories.
b. John reads detective stories in the fool’s office.
c. In John’s office the fool reads detective stories.
d. The fool reads detective stories in John’s office.
One sees that the internal reading has disappeared altogether: all four sen-
tences only allow for the external reading. However, the same does not hold
for the epithet analogs of (9.2):
(9.5) a. While the fool’s office was being cleaned, John stood on the balcony.
b. John stood on the balcony while the fool’s office was being cleaned.
c. While John’s office was being cleaned, the fool stood on the balcony.
d. The fool stood on the balcony while John’s office was being cleaned.
Here, as in (9.2), the internal reading is allowed for (9.5a,b,c) but not for
(9.5d).
The epithet-substitution test is a useful but neglected diagnostic in anaph-
ora theory. It shows, for example, that there must be some hidden difference
between the status of his in (9.1a,b) as opposed to (9.2a,b). This difference is
expressed formally in some languages, like Latin or Swedish, which both
distinguish between a reflexive and a nonreflexive third person possessive
pronoun. In Latin, the reflexive possessive pronoun is the adjectivally de-
clined suus, while the nonreflexive variant is expressed in the singular by
the genitive eius (of him/her/it) of the third person personal pronoun is
(this, he) and in the plural by eorum (of them). Analogously, Swedish
distinguishes between the adjectival sin and the personal pronoun genitive
hans, the former being the reflexive, the latter the nonreflexive his. In both
I Clause-internal anaphora.
a. Reflexive anaphora: takes subject (sometimes indirect object) as ante-
cedent; not always formally marked.
b. C-command anaphora: anaphor must be C-commanded by anteced-
ent.2
c. Non-C-command anaphora: antecedent must precede anaphor.
II Clause-external but sentence-internal anaphora.
a. Indirect reflexive anaphora: occurs only in complement clauses and
anaphor takes subject, direct or indirect object of commanding clause
as antecedent; rarely formally marked.
b. Nonreflexive anaphora: occurs in clauses of any rank and anaphor takes
any NP in any other clause as antecedent, but if antecedent is in a lower
clause, it must precede the anaphor.3
III Bound-variable anaphora. The anaphor stands for a bound variable in the
semantic analysis (SA) of the sentence in question and is, therefore,
subject to the structural conditions of variable binding in LL, not in
surface structure.
IV Sentence-external anaphora. The antecedent is any overt or implicitly
understood NP in preceding text or in the situation given. Anaphora
resolution is subject to gradable criteria of closeness, pragmatic
probability, and syntactic function, besides, of course, to restrictions
imposed by gender and number, if any.4,5
were given the task of completing sentences like (i) interpreted the pronoun they as referring to the
members who were not at the meeting—a form of anaphora for which they invented the term
complement anaphora:
(i) Few students were at the meeting. They …
The subjects would, for example, complete the sentence as They had gone out with their girl-friends.
The only way to explain this seems to be the assumption of a discourse address established for the
students who were not at the meeting, the complement of the set delimited by FEW. If anything, this
shows the necessity to fall back on cognition-driven discourse incrementation processes for the
explanation of reference fixing.
anaphora. But I might as well have adopted the example There was a cat; it ran
away discussed in Section 8.1.1 in connection with the procedure of address
closure, which, as we shall see, is essential in the present context.
Why is primary anaphora important? It is important in its own right
because it is a central instrument for maintaining coherence in discourse,
but it is also important because it constitutes a problem for the logical analysis
of natural language sentences that has hitherto not found a final solution. It
turns out that established logic lacks the means to account for primary
anaphora and that this failure is entirely due to its decision to keep occasion
sentences away from the analysis. In other words, standard modern logic trips
over primary anaphora and it does so precisely because it fails to make room
for context-sensitivity.
And, again, the analyses given in (9.17a–c) fail to satisfy the epithet-
substitution test, which shows that the it-pronouns in question cannot be
bound-variable pronouns but must be instances of either clause-external but
sentence-internal anaphora (category IIb), or sentence-external, in particular
primary, anaphora (category IV):
(9.18) a. Either Socrates does not own a donkey or he feeds the animal.
b. If Socrates owns a donkey, he feeds the animal.
c. Every farmer who owns a donkey feeds the animal.
Quine (1960) shows no awareness of the donkey-anaphora problem. He
does, however, deal with a similar problem posed by sentences of the type
(1960: 138):
(9.19) If any member contributes, he gets a poppy.
If the word any is taken to represent the existential quantifier, the pronoun he
is left stranded, or, as Quine says (1960: 139), ‘left high and dry’:
(9.20) ∃x[Member(x) ∧ Contribute(x)] → ∃y[Poppy(y) ∧ Get(x,y)]
Quine then proposes not to use the existential quantifier but to get the universal
quantifier to do all the work, stipulating that ‘by a simple and irreducible trait
of English usage’, every always takes the smallest and any the largest possible
scope. (9.19) would then translate into SMPC as (9.21):
(9.21) ∀x[[Member(x) ∧ Contribute(x)] → ∃y[Poppy(y) ∧ Get(x,y)]]
This proposal was used later in an attempt to solve the problem of the
stranded variables in cases like (9.17a–c). The idea was to translate (9.16a–c)
with the help of the universal quantifier only:
(9.22) a. ∀x[Donkey(x) → [¬Own(Socrates,x) ∨ Feed(Socrates,x)]]
b. ∀x[Donkey(x) → [Own(Socrates,x) → Feed(Socrates,x)]]
c. ∀x∀y[[Donkey(x) ∧ Farmer(y) ∧ Own(y,x)] → Feed(y,x)]
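To make the universal-quantifier translation concrete, here is a toy-model evaluation of (9.22c); the model and the Python names are invented for illustration and carry no theoretical weight:

```python
# A small model: two farmers, two donkeys, one ownership pair.
donkeys = {'d1', 'd2'}
farmers = {'f1', 'f2'}
owns = {('f1', 'd1')}    # farmer f1 owns donkey d1
feeds = {('f1', 'd1')}   # and feeds it

# (9.22c): for all x, y: Donkey(x) & Farmer(y) & Own(y, x) -> Feed(y, x)
c_true = all((y, x) in feeds
             for x in donkeys
             for y in farmers
             if (y, x) in owns)
print(c_true)  # True in this model; it turns False if the pair is removed from `feeds`
```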
This does indeed eliminate the scope problem raised by (9.17a–c). Yet the
medicine has proved worse than the ailment. First, one wonders why natural
language chooses to use, clearly without any problem for natural interpreta-
tion processes, surface-structure representatives of the existential quantifier (a
donkey, no donkey), allowing unbound variables to dangle, instead of the
perfectly available surface-structure representatives of the universal quantifier,
if that is the quantifier used in the underlying logico-semantic structure. In
other words, one wonders what could justify the sudden change in the
translation or mapping relation between the logico-semantic and the
gratuitously true. But (9.23c) can be false in such a case, namely when there is
at least one farmer thought to be a donkey-owner (though no-one has an idea
about which donkey he owns) but not expected to feed the animal he is
thought to own. For (9.23c) is not about all donkeys but about farmers who
are thought to be donkey-owners.
And again, the epithet-substitution test shows that the its in (9.23a,b,c)
cannot be bound-variable pronouns but must be external anaphors. One is,
therefore, forced to conclude that SMPC is unable to account for them in a
way that bears normal scientific generalization and avoids ad hoc solutions.
The same conclusion holds for sentences of the types illustrated in (9.8),
(9.11), (9.16), (9.19) and (9.23).
The upshot is, therefore, that there is a hard core of sentences, those
containing donkey anaphora, which resist translation into SMPC. With or
without Russell’s analysis of definite descriptions, or Quine’s programme of
elimination of singular terms, SMPC is intrinsically unable to account for
donkey anaphora. The donkey sentences contain definite expressions, pro-
nouns, or pronominal epithets, which are neither directly referring expres-
sions nor bound variables but indirectly referring expressions whose
antecedent is hidden in a preceding existentially quantified sentence—the
category of primary anaphora, which is not catered for in SMPC. One must
conclude that, for such sentence types, SMPC is, by the terms of its own
charter, unable to provide an empirically adequate and methodologically
sound logico-semantic analysis.
6 I owe this information to Joachim Ballweg of the Institut für deutsche Sprache in Mannheim.
The ‘medieval logicians’ Geach argues against are in fact none other than
the never-mentioned Walter Burleigh, who adds the following comment to
his discussion of (9.25), thereby denying that (9.27a) and (9.27b) are contra-
dictories (1988: 92–3; translation mine):
It follows that the following are compatible: ‘Every man owning a donkey sees it’ and
‘Some man owning a donkey does not see it’. For assuming that every man owns two
donkeys, one of which he sees and one of which he does not see, then it is not only
true to say ‘Every man owning a donkey sees it’, but also to say ‘Some man owning a
donkey does not see it’. In the same way, suppose that every man who has a son also
has two sons, and that he loves the one but hates the other, then both the following are
true: ‘Every man who has a son loves him’ and ‘Some man who has a son does not love
him’.
Burleigh and Geach are thus seen to disagree on account of the truth
conditions of sentences like (9.27a,b). For Burleigh, these two sentences are
compatible and not contradictories. For Geach, however, they are contradic-
tories.
7 Geach has his farmers beat their donkeys. As this would offend the feelings of many readers with
more developed notions of animal rights, I use feed, rather than beat, in my examples.
8 Isidora Stojanovic pointed out that a sentence like Every time Mary goes out with a Frenchman, he
pays for her drinks does not seem to be falsified by some occasion where Mary goes out with two
Frenchmen, only one of whom pays for her drinks, although then there is a Frenchman she goes out
with who does not pay for her drinks. I consider this a valid objection and all I can say is that,
depending on contextual or situational factors, the phrase a Frenchman, or, as in (9.27a), a donkey, is
apparently interpreted as ‘one or more Frenchmen’ or ‘one or more donkeys’, respectively. The
sentences in question would then read as ‘Any man who owns one or more donkeys feeds them’ and
‘Every time Mary goes out with one or more Frenchmen, they pay for her drinks’, respectively. What
the appropriate conditions are for such a reading will then be a matter of further investigation. See also
Neale (1990: 222–63) for an interesting but inconclusive discussion of much the same cluster of
problems within the terms of Russell’s Theory of Descriptions.
(2) He feeds it
|[[Own(S,x)]] ∩ [[Don(x)]]| = 1 → a ∈ [[Feed(S,x)]]
|[[Own(S,x)]] ∩ [[Don(x)]] ∩ [[Feed(S,x)]]| = 1 → a ∈ [[Feed(S,x)]] (cannot be false)
(3) It is brown
|[[Own(S,x)]] ∩ [[Don(x)]]| = 1 → a ∈ [[Brown(x)]]
|[[Own(S,x)]] ∩ [[Don(x)]] ∩ [[Feed(S,x)]]| = 1 → a ∈ [[Brown(x)]]
|[[Own(S,x)]] ∩ [[Don(x)]] ∩ [[Feed(S,x)]] ∩ [[Brown(x)]]| = 1 → a ∈ [[Brown(x)]] (cannot be false)
… and so on
NB: a is the one member of set X when |X| = 1.
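The incrementation mechanism behind these rules can be illustrated with a small executable sketch. The Python rendering, the toy model, and all names in it are mine, purely illustrative of the set-intersection idea, not part of the formal apparatus:

```python
# Illustrative sketch (not from the text): primary anaphora resolved by set
# intersection. The pronoun denotes the unique member a of the intersection
# of the sets accumulated so far; each new predication adds its own set.

def unique_member(s):
    """The one member of set s when |s| = 1; otherwise resolution fails."""
    return next(iter(s)) if len(s) == 1 else None

# Toy model: the subject S owns exactly one donkey, d1, and feeds it.
own_S  = {"d1", "cart"}      # [[Own(S,x)]]
donkey = {"d1", "d2"}        # [[Don(x)]]
feed_S = {"d1"}              # [[Feed(S,x)]]

# (2) 'He feeds it': 'it' resolves to the unique owned donkey.
a = unique_member(own_S & donkey)
assert a == "d1"
assert a in feed_S           # the predication holds of a

# Once [[Feed(S,x)]] has itself been intersected in, a is a member of it
# by construction (the 'cannot be false' case above).
a = unique_member(own_S & donkey & feed_S)
assert a in feed_S
```

The point of the sketch is only that, once a predication has been incremented, membership of its extension is guaranteed for the resolved referent.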
[Tree diagram, garbled in extraction: the underlying logico-semantic structure, with the quantifier EVERY x over an S containing Farmer x and an S containing Feed x, the latter with the definite determiner "the x" over an embedded S containing Own x y and Donkey y]
Presupposition and
presuppositional logic
(B) ⊢ A. Under PET, this would make A a logically necessary truth, which is
absurd for a contingent sentence like You had horns.1 To avoid this, PET would
have to be dropped, very much against Aristotle’s wish. Although Aristotle
himself was unable to show Eubulides wrong—his grumpy reaction was to say
that Eubulides’ paradoxes were just silly (átopa)—there is a flaw in the
paradox. It lies in the incorrectly assumed entailment in the first premiss
‘What you have not lost you still have’. For it is possible that a person has not
lost something precisely because he never had it.
To the best of present knowledge, there was no explicit awareness of
presuppositional phenomena until Frege, or perhaps more accurately, until
well into the twentieth century. We now know that the solution to Eubulides’
Paradox of the Horns lies in an adequate analysis of presuppositions,
but there are no signs that Eubulides himself was aware of that fact. All
the same, the proper answer to Eubulides’ Paradox of the Horns is still in
conflict with the classic Aristotelian Principle of the Excluded Third, as
is shown below.
The issue is not raised in any of the Ancient literature on logic or the
philosophy of language. Nor did the medievals, otherwise so resourceful and
so creative, have much to say about presuppositions. Occasionally, however,
they came close. In an anonymous text, Ars Meliduna, probably written
between 1154 and 1180 (Nuchelmans 1973: 165), Aristotle’s celebrated Principle
of the Excluded Third is called into question. One of the grounds for doubt
in this respect consists in the fact that utterances may be neither true nor
false but ‘nugatory’. In De Rijk’s edition of the Ars Meliduna we read (De Rijk
1967: 363):
. . . enuntiables such as that ‘Socrates is white because it is him’ or that ‘he loves his
son’ appear to become nugatory when Socrates is no longer white or no longer has a
son. [ . . . ] We must, therefore, posit that such enuntiables may become nugatory
[ . . . ] even if that goes against Aristotle . . . .
1 It is a safe bet that Eubulides meant to tease the prudish Aristotle by confronting him with the
absurd and somewhat disconcerting consequence that his logic made it a necessary truth for every
man to be a cuckold. See Seuren (2005), where it becomes apparent that Aristotle was not amused.
The two exponents are conjoined by and to give the meaning of the whole
exponible proposition. Therefore, in Peter of Spain’s words: ‘every true
2 According to De Rijk (1972: xcix) it begins to crop up at the end of manuscripts of Peter of Spain's
Summulae Logicales starting from about 1350.
3 An observation by Horn (1985: 123; 1996: 300, and elsewhere) has caused some confusion in this
respect. Horn notes that the term praesupponere occurs in the Tractatus Exponibilium mentioned
above (Mullally 1945: 112), which he, following Mullally, incorrectly attributes to Peter of Spain (it was
written about a century later by an unknown author; see De Rijk 1972: xcix). Horn takes over Mullally’s
translation ‘presuppose’. But this cannot be correct. The term is used in the context of a sentence-type
called ‘reduplicatives’, such as: insofar as man is rational, he is capable of weeping. My best translation of
the passage in question is:
The first rule is that a reduplicative word [‘insofar as’; PAMS] anticipates (praesupponit) that some predicate inheres
in some entity and says (denotat) that the clause to which it is immediately attached expresses the cause of that
inherence.
That is, the expression insofar as anticipates that some predicate (in this case ‘capable of weeping’)
inheres in some entity (‘man’), and means in addition that ‘insofar as man is rational’ expresses the
cause of man’s being capable of weeping. Since there can be no question of ‘man is capable of
weeping’ being presupposed by the sentence mentioned, one must conclude that praesupponere is
used here in a different sense from what presuppose means today, just as supponere—the medieval
Latin term for ‘referring’—does not mean what suppose means today. As a matter of fact,
praesupponere does not occur anywhere else in the whole of the philosophical literature written in
Latin.
exclusive proposition leaves its prejacent true’ (De Rijk 1992: 110–11), or:
presuppositions are entailed by their carrier sentences.
Although the exclusives were the most discussed among the exponibles,
nothing suggests an awareness of their specific, discourse-related, presuppo-
sitional character. Burleigh applies, in principle, standard propositional logic
to exposed exclusives. Since only man walks is equivalent with man walks and
nothing but man walks, its negation, not only man walks, is true if at least one
of the conjuncts is false. He writes (Green-Pedersen 1980: 119):
Note that the opposite of an exclusive proposition has two grounds for truth: because
no man walks, or because something other than man walks.
He thus denies the entailment from not only man walks to man walks, despite
the natural intuition that it does hold. Had he followed intuition and thus
done justice to language, he would have found that, in this respect, language is
in conflict with standard logic, and he might have embarked on an analysis of
presuppositions. Unfortunately, however, this did not happen.
Subsequent centuries do not even come close to presuppositions. Till Frege
(1892), there is no development at all on the presuppositional front. Strawson
(1950, 1952, 1954) follows up on Frege, but specifically with regard to existen-
tial presuppositions and only in a strictly logical perspective. Like Eubulides
and Frege, Strawson assumed full entailment of presupposition under nega-
tion for all cases and concluded that PET had to go. In Strawson’s view,
nonfulfilment of a presupposition leads to both the carrier sentence and its
negation lacking a truth value altogether.
Frege (1892) had come to the same conclusion, though from a different
angle. In a sentence like (10.1) the subject term lacks a referent in the actual
world, though the existence of such a referent is presupposed in virtue of the
existential precondition on the subject term of the verb run:
(10.1) The unicorn ran for its life.
This makes it impossible to test the truth of (10.1): given that there is no
actually existing unicorn, there is no way to check whether it (whatever this it
may stand for) actually ran. Therefore, Frege, like Strawson more than half a
century later, concluded that (10.1) lacks a truth value.
This posed a profound problem for standard logic in that the applicability
of standard logic to, say, English would have to be made dependent on
contingent conditions of existence—a restriction no logician will accept. In
the effort to solve this problem two traditions developed, the Russell tradition
and the Frege-Strawson tradition.
4 In his (1905: 485), Russell quips about the king of France's alleged baldness:
By the law of the excluded middle, either ‘A is B’ or ‘A is not B’ must be true. Hence either ‘the present King of France
is bald’ or ‘the present King of France is not bald’ must be true. Yet if we enumerated the things that are bald, and
then the things that are not bald, we should not find the present King of France in either list. Hegelians, who love a
synthesis, will probably conclude that he wears a wig.
We may paraphrase this for the case at hand, saying that if one searches among the inhabitants of
Kathmandu one will not find Apollo there; yet if one looks among those who live elsewhere one will
not find him either. A Hegelian synthesis that makes him be of no fixed abode will not be of much
use either, because Apollo will be equally absent from the vagrants of this world.
All propositions in which Apollo occurs are to be interpreted by the above rules. If
‘Apollo’ has a primary occurrence [has large scope; PAMS], the proposition
containing the occurrence is false; if the occurrence is secondary [has small scope;
PAMS], the proposition may be true.
But the theory of descriptions fails to show how Apollo in sentence (10.4a) can
possibly be assigned small scope in such a way that it turns out true. Nor is it
likely that any form of linguistic or semantic analysis will achieve such a feat,
as (10.4a) is in no way ambiguous. I fully agree with Zalta (1988: 11):
[O]ne might offer Russell’s infamous theory of descriptions as the means of analyzing
away the propositions in question. Unfortunately, this theory not only fails to do
justice to the apparent logical form of the propositions in question, but more
importantly, when applied generally, it fails to preserve the intuitive truth value of a
wide range of other propositions. For example, it turns the historical fact that Ponce
de León searched for the fountain of youth into a falsehood. Results such as this
suggest that the theory of descriptions is, at best, not general and, at worst, false.
The solution lies, of course, in the fact that a predicate like be worshipped is
nonextensional (intensional) with regard to its subject term, while a predicate
like search for is nonextensional with regard to its object term. But if one
wants to account for the facts at hand by an appeal to the occasional
nonextensionality of predicates with regard to their terms, the semantic
definition of the existential quantifier must be changed so as no longer to
imply actual existence but rather ‘being’ in a wider sense than mere existence.
This, in turn, requires, besides an extension of the logical machinery, a
thorough revision of the concomitant ontology, not found in the relevant
logical literature.5
Further objections may be raised as regards what is known as the ‘unique-
ness clause’ in (10.2b): ∀y[KoF(y) → x = y], meant to say that only one king
of France exists. Russell added the ‘uniqueness clause’ in order to account for
the uniqueness expressed by the definite determiner the. In fact, however, the
5 Richard Montague's model-theoretic possible-worlds semantics comes closest, but it fails
irreparably on account of its inability to account for substitutivity salva veritate in intensional
contexts. Dowty et al. (1981: 175) write:
We must acknowledge that the problem of propositional attitude sentences is a fundamental one for possible world
semantics, and for all we know, could eventually turn out to be a reason for rejecting or drastically modifying the
whole possible world framework.
Meanwhile, a quarter century has passed, but no solution has appeared on the horizon. I take it,
therefore, that Montague’s programme of ‘extensionalisation of intensions’ has foundered on the
cliffs of the human mind.
But this superior, if not arrogant, attitude with regard to the facts of natural
language blinded him to the basic truth that definite reference in language is
unrelated to uniqueness of existence and fully related to uniqueness of
identification.
Then, this analysis is limited to definite descriptions and is unable to
account for other kinds of presupposition. Factive and categorial presupposi-
tions, and those derived from words like all, still, or only, fall outside its
coverage.
To account for other than existential presuppositions some have proposed
to change Russell’s analysis into (10.7) or ‘there is a king of France, and he is
bald’.
(10.7) ∃x[KoF(x)] ∧ Bald(he)
He is now no longer a bound variable but an instance of primary anaphora
outside the scope of the existential quantifier. With a logical mechanism for
such anaphora (as in Kamp 1981 or Groenendijk and Stokhof 1991), this
analysis can be generalized to all categories of presupposition. A sentence
BA (that is, B presupposing A) is now analysed as A AND BA, and NOT(BA),
though normally analysed as A AND NOT(BA) with the negation restricted to
the second conjunct, can also, forced by discourse conditions and marked by
special accent (see below), be analysed as NOT(A AND BA), with the negation
over the whole conjunction. This analysis, which saves PET, is known as the
CONJUNCTION ANALYSIS for presupposition.
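The scope contrast the conjunction analysis relies on can be made concrete in a bivalent sketch. This is a minimal illustration of the analysis as just described, not a fragment of any published formalism; the function names are mine:

```python
# Sketch of the conjunction analysis: a presupposition-carrying sentence B-with-
# presupposition-A is rendered as the conjunction A AND B. Negation normally
# takes only the second conjunct; under special discourse conditions (marked by
# special accent) it takes the whole conjunction. Both readings stay bivalent,
# so PET is preserved.

def carrier(a, b):
    """The carrier sentence, analysed as A AND B."""
    return a and b

def neg_narrow(a, b):
    """The normal reading of the negation: A AND NOT(B)."""
    return a and not b

def neg_wide(a, b):
    """The marked, presupposition-cancelling reading: NOT(A AND B)."""
    return not (a and b)

# Presupposition failure (A false): the narrow negation is false as well,
# but the wide ('echo') negation comes out true.
assert carrier(False, True) is False
assert neg_narrow(False, True) is False
assert neg_wide(False, True) is True

# With the presupposition fulfilled (A true), the two negations coincide.
assert neg_narrow(True, False) == neg_wide(True, False) == True
```

Either reading has a classical truth value in every case, which is exactly how the analysis saves PET.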
Anaphora is needed anyway, since Russell’s analysis fails for cases like
(10.8), where quantifier binding is impossible for it, which is in the scope of
I hope, while I hope is outside the scope of I know:
(10.8) I know that there is a dog and I hope that it is white.
The conjunction analysis, however, still cannot account for the fact that
(10.9a) is coherent (though perhaps a little ponderous) but (10.9b) is not:
(10.9) a. There is a dog and it is white, and there is a dog and it is not white.
b. !There is a dog and it is white and it is not white.
(10.9a) speaks of two dogs, due to the repetition of there is a dog, but (10.9b)
speaks of only one. Yet the conjunction analysis cannot make that difference,
since the repetition of there is a dog makes no logical or semantic difference
for it.
Attempts have been made to incorporate this difference into the logic (e.g.
Kamp 1981; Heim 1982; Groenendijk and Stokhof 1991) by attaching a memo-
ry store to the model theory which keeps track of the elements that have
so far been introduced existentially. Though this is no doubt a move
in the right direction, it still falls short of what is needed, logically, philosoph-
ically, and linguistically. And even when these needs are satisfied, the con-
junction analysis still postulates existence for term referents whose existence
is denied:
(10.10) Santa Claus does not exist.
(One notes that the negation not in (10.10) in no way needs to be marked by
special accent nor be forced to take large scope by discourse factors.)
Since the first of these assertions is already false, the expression ‘the largest real
fraction’ makes no sense.
This is clearly reminiscent of the Ars Meliduna mentioned earlier, but the
difference is that this time the observation was followed up. Even so, however,
it still took some time for presupposition theory to flourish.
The follow-up started in Frege’s famous 1892 article ‘Ueber Sinn und
Bedeutung’ (On sense and reference). There he discusses, among other things,
what truth values are with regard to sentences and how truth values are
assigned to them. For Frege, the use of a definite term normally presupposes
(setzt voraus) the actual existence of its reference object. When we say The
moon is smaller than the earth we presuppose that there is an actual moon and
an actual earth, and we say of the former that it is smaller than the latter
(Frege 1892: 31). Only if this presupposition is fulfilled can the sentence have a
truth value. If not, the sentence may still have a sense or meaning, as in
fictional contexts, but it lacks a truth value.
Frege takes the medieval distinction between the extension and the inten-
sion of predicates as his point of departure and extends this distinction
to cover argument terms and sentences as well (see Section 6.1 in
Volume I). He considers a sentence to be composed of a predicate and its
argument terms, and he follows Aristotle in saying that when the referents of
the argument terms possess the property expressed by the predicate, the
sentence is true; otherwise it is false.
For Frege, the extension of a (definite) argument term is an individual
reference object, while its intension (or sense) is the way by which a speaker-
hearer cognitively arrives at the reference object—the search or reference
procedure. The extension of a predicate is a set of individual objects, while
its intension is the corresponding concept. And, surprisingly, the extension of
a sentence (Satz) is its truth value, while its intension is defined as the
underlying thought. We read (Frege 1892: 32–3; translation mine):
Let us assume, for the time being, that the sentence has a reference! If we replace one
of its words with another word that has the same extension (= reference object;
PAMS) but a different sense, such replacement will have no bearing on the
extension of the sentence. But we see that the thought does change; for the thought
underlying a sentence like The morning star receives its light from the sun is different
from the thought underlying The evening star receives its light from the sun. Someone
who does not know that the morning star is identical with the evening star, might take
the one thought to be true and the other to be false. Therefore, the thought cannot be
the extension of the sentence. Rather, we take the thought to be its sense. But then,
how about its extension? Is it anyway appropriate to ask that question? Maybe the
sentence as a whole only has a sense and no extension? One may anyhow expect such
sentences to occur, just as there are sentence parts with a sense but without an
extension. Sentences with nominal expressions that lack a reference will be of that
nature. The sentence Odysseus was put ashore at Ithaca while sound asleep obviously
has a sense. But since it is doubtful that the name Odysseus occurring in this sentence
has an extension, it is equally doubtful that the whole sentence has one. Yet one thing
is certain: if one seriously takes this sentence to be true or false, one also assigns an
extension to the name Odysseus, and not just a sense. For it is to the extension of this
name that the predicate is assigned or denied. [ . . . ]
Why do we want every name to have not only a sense but also a reference? Why is
the thought alone insufficient? Because, and in so far as, we care about its truth value.
This is not always the case. For example, when we listen to an epic poem, it is, besides
the euphony of the language, only the sense of the sentences and the images and
feelings aroused by them that will captivate us. But as soon as we ask about the truth
of the story, we leave the precinct of aesthetic pleasure and enter upon the territory of
scientific investigation. As long as we take the poem as no more than a work of art, the
question of whether a name like Odysseus has an extension may remain a matter of
total indifference to us. It is, therefore, the effort to achieve truth that invariably drives
us forward from the sense to the reference.
FIGURE 10.1 Frege’s system of extensions and intensions for terms, predicates, and
sentences
The first to oppose Russell directly was Peter Geach in a curious little article of 1950,
which Geach deals with presuppositions (the term is used only in passing),
referring to the footnote in Frege (1884) quoted above. The article is a critique
of Russell’s theory of descriptions, first on account of its failure to recognize
presuppositions in ordinary language, then on account of Russell and White-
head’s defective definition of the iota operator in Principia Mathematica.
About the former, Geach writes (Geach 1950: 84–5):
On Russell’s view ‘the King of France is bald’ is a false assertion. This view seems to me to
commit the fallacy of ‘many questions’. To see how this is so, let us take a typical example of
the fallacy: the demand for ‘a plain answer – yes or no!’ to the question ‘have you been
happier since your wife died?’ Three questions are here involved:
The act of asking question 2 presupposes an affirmative answer to question 1; if the true
answer to 1 is negative, question 2 does not arise. The act of asking question 3 presupposes
an affirmative answer to question 2; if question 2 does not arise, or if the answer to it is
negative, question 3 does not arise. When a question does not arise, the only proper way of
answering it is to say so and explain the reason; the ‘plain’ affirmative or negative answer,
though grammatically possible, is out of place. (I do not call it ‘meaningless’ because the
word is a mere catchword nowadays.) This does not go against the laws of contradiction
and excluded middle; what these laws tell us is that if the question arose ‘yes’ and ‘no’
would be exclusive alternatives.
Similarly, the question ‘Is the present King of France bald?’ involves two other
questions:
And it does not arise unless the answer to 4 is affirmative and the answer to 5 is negative.
What Geach describes here is, of course, a gapped bivalent logic, which
does violate the Principle of the Excluded Third (not the Law of the Excluded
Middle, since what is at issue is not a truth value between ‘true’ and ‘false’, but,
rather, the absence of a truth value). But the interesting thing about this article
is that the existential presupposition is brought in line with other kinds of
presupposition, a step that was repeated soon afterwards in the linguistic
literature.
~A   A  |  A ∧ B:  B=T  B=F  B=*  |  A ∨ B:  B=T  B=F  B=*
 F   T  |           T    F    *   |           T    T    *
 T   F  |           F    F    *   |           T    F    *
 *   *  |           *    *    *   |           *    *    *
(* = no truth value; a gap in either argument yields a gap)
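These gapped tables can be stated executably. The following sketch is my own encoding, with None standing for the gap '*':

```python
# Sketch of the gapped connectives tabulated above, with None encoding the
# gap '*': any gap in an argument propagates to the whole proposition.

def neg(a):
    """~A: classical negation, undefined on a gap."""
    return None if a is None else (not a)

def conj(a, b):
    """A AND B, gapped."""
    return None if a is None or b is None else (a and b)

def disj(a, b):
    """A OR B, gapped."""
    return None if a is None or b is None else (a or b)

T, F, G = True, False, None

# ~A column
assert neg(T) is F and neg(F) is T and neg(G) is G

# A AND B table: rows A = T, F, *; columns B = T, F, *
assert [conj(T, b) for b in (T, F, G)] == [T, F, G]
assert [conj(F, b) for b in (T, F, G)] == [F, F, G]
assert [conj(G, b) for b in (T, F, G)] == [G, G, G]

# A OR B table
assert [disj(T, b) for b in (T, F, G)] == [T, T, G]
assert [disj(F, b) for b in (T, F, G)] == [T, F, G]
assert [disj(G, b) for b in (T, F, G)] == [G, G, G]
```

On the bivalent fragment (no gaps) the connectives are classical, which is what makes this a gapped bivalent logic rather than a genuinely three-valued one.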
Had Wilson (and Boër and Lycan) looked more carefully, they would have found,
for example, that the presupposition-cancelling ‘echo’ negation NOT is not
always possible, or that, by contrast, it is sometimes the only negation
possible. Since that is clearly so, the conclusion must be that their ‘pragmatic’
way out is basically flawed. The matter is important enough to deserve a
closer look. This is done in Section 10.4, but first we must provide some clarity
on the nature and structural basis of presuppositions and on the operational
criteria for recognizing them.
[[Be divorced]] = {x : x actually exists; x has been married till time t | x's
marriage has been dissolved by legal procedure since time t}
or: ‘the extension of Be divorced equals the set of all objects x such that (a) x
actually exists and x has been married till the time t of the process of getting
divorced, and (b) x’s marriage has been dissolved by legal procedure
since time t’. (Clearly, the predicate Be married again has its preconditions,
and so on.)6
Predicates that carry a precondition of actual existence with respect to a
term a are called extensional with respect to a. Since most predicates are
extensional with respect to their terms, we consider that to be the default case,
which can be left without special notation. Some predicates, however, are not
extensional with respect to some term. For example, the predicate think about
is extensional with respect to its subject term, but not with respect to its object
term, because one can think about anything at all, including nonexisting
entities, such as mermaids, unicorns, or dodos. When a predicate F is
nonextensional (intensional) with regard to a term a, we asterisk that term
position in the semantic specification. Thus, in the specification of the
predicate Think about we asterisk the object-term position:
[[Think about]] = {<x,y*> : x is endowed with cognitive powers | . . . }
This notation makes it clear that Think about is extensional with respect to its
subject term but not with respect to its object term.
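Purely as an illustration, the asterisk notation can be mimicked in a short sketch. The toy model, the marker for a failed precondition, and all names below are mine:

```python
# Sketch: each predicate lists, per term position, whether that position is
# extensional (requires an actually existing referent). A term failing an
# extensional position's precondition yields presupposition failure rather
# than plain falsity.

EXISTING = {"jack", "the_dog"}            # actually existing entities (toy model)

# term positions: True = extensional, False = asterisked (intensional)
PREDICATES = {
    "feed":        (True, True),          # both terms extensional
    "think_about": (True, False),         # object position asterisked: <x, y*>
}

def evaluate(pred, terms, facts):
    """Check preconditions first, then look the predication up in the facts."""
    for term, needs_existence in zip(terms, PREDICATES[pred]):
        if needs_existence and term not in EXISTING:
            return "presupposition failure"
    return (pred, terms) in facts

facts = {("feed", ("jack", "the_dog")),
         ("think_about", ("jack", "a_mermaid"))}

assert evaluate("feed", ("jack", "the_dog"), facts) is True
# One can think about a nonexistent mermaid ...
assert evaluate("think_about", ("jack", "a_mermaid"), facts) is True
# ... but a nonexistent subject fails the extensional subject position.
assert evaluate("think_about", ("a_mermaid", "jack"), facts) == "presupposition failure"
```

The asterisked object position of think about simply skips the existence check, which is all the notation is meant to record.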
When predicates are used in sentences, their preconditions become pre-
suppositions of the sentences in which they are used. This is how a sentence
like Jack is feeding the dog acquires its presuppositions that both Jack and
the dog actually exist and that at least the dog is an animate being. This is
also how a sentence like Jack is divorced evokes a context in which it has
been established that he was married until the moment the divorce became
effective.
Often predicates are ‘misused’ in the sense that they do not fit into the
current discourse. For example, when I say The trees whispered in the wind, in
a context about a picnic in the woods, then, clearly, the verb whisper is out of
6 Isidora Stojanovic correctly pointed out that the precondition of existence and that of having
been married before, though presented as being on a par, intuitively seem to have a different status.
I do not expand on this difference in the present text, but it seems to be reducible to the fact that
the contextual role of existential presuppositions differs considerably from that of categorial
presuppositions induced by specific lexical items such as be divorced. Denials of existence tend to
have a much more profound effect on discourse construction than denials of specific lexically bound
properties. Moreover, existential preconditions do not lend themselves to metaphoric use, as
categorial preconditions do.
place, because it carries the precondition that the one who does the whisper-
ing must be a living being capable of speech. Since trees do not satisfy that
condition, the wrong context is evoked and (radical) falsity should ensue. The
current term for such a ‘misuse’ is CATEGORY MISTAKE. Some category mistakes,
however, are evocative in a way that turns out to be inspiring or amusing or
perhaps even moving, in that the object failing the precondition in question is
regarded, for the purpose of the current discourse, as satisfying that precon-
dition. The fact that the wrong context is evoked is exploited precisely for the
purpose of evocation and association. When this is the case, one speaks of
METAPHOR. In the sentence at hand, for example, the trees in question are
regarded, for the purpose of the current discourse, as living beings capable of
speech, which evokes a magical world of comparisons and associations.
A presupposition is thus a semantically defined property of a sentence
making that sentence fit for use in certain contexts and unfit for use in others.
This property is partly based on the fact that if a sentence Q presupposes a
sentence P (Q >> P), then Q entails P (Q ⊢ P): whenever Q is true, P is
necessarily also true—given the same situational reference points—in virtue
of the meanings of Q and P. Presuppositions are thus a subclass of entail-
ments, which, for the purpose of the present discussion, we call P-ENTAILMENTS.
Entailments that are not presuppositional are called CLASSICAL or
C-ENTAILMENTS. (10.14) illustrates a C-entailment (⊢c); (10.15a–d) illustrate
P-entailments or presuppositions (>>):
(10.14) Jack has been murdered. ⊢c Jack is dead.
(10.15) a. Jack lives in Manchester. >> Jack exists.
b. Jill has forgotten that Jack is her student. >> Jack is Jill’s student.
c. Jack is divorced. >> Jack was married before.
d. Only Jack left. >> Jack left.
(10.15a) is an instance of existential presupposition: to be murdered one must
be an actually existing entity. (10.15b) exemplifies factive presuppositions
(Kiparsky and Kiparsky 1971): the factive predicate have forgotten requires
the truth of the that-clause.7 (10.15c) is a case of categorial presupposition,
derived from the lexical meaning of the main predicate be divorced. (10.15d)
belongs to a remainder category, the presupposition in question being due to
the particle only. For such cases, the generalization that presuppositions are
7 Some predicates are WEAK FACTIVES, in that the truth requirement of the factive complement clause
is not absolute but can be overruled, albeit with some difficulty. Examples of weak factive predicates
are regret, surprise, anger, as in:
(i) Harold was under the illusion that his son had failed and this (namely that his son had failed)
angered him.
Weak factive predicates invariably involve an emotive factor. See Gazdar (1979: 119–22) for some
discussion.
8 The only exception, as was pointed out to me by Larry Horn (p.c.), is the English construction
with not even, as in Not even John was disappointed. This semantically and grammatically complicated
construction is discussed in Section 11.4.
As regards (10.26), both (10.26a) and (10.26b) are to be understood with the
negation as the highest operator, followed by the universal quantifier. (10.26a)
poses no problem for SMPC: since the first sentence of (10.26a) entails the
existence of doors, owing to the equivalence in SMPC of ¬A and I*, it is
incompatible with the second sentence. But (10.26b), which should have the
same analysis, does pose a problem for SMPC, precisely because its two
sentences are compatible to the native speaker.
(10.26b) is of particular relevance because it conflicts with standard
modern predicate logic and, therefore, with the conjunction analysis
discussed above. For SMPC, NOT[ALL DOORS were LOCKED] is equivalent
with SOME DOORS were NOT LOCKED and should, therefore, entail the
existence of doors. This would make the conjunction There were no doors
and not[all doors were locked] inconsistent. In fact, however, it is not, provided
the negation is placed in the canonical position and carries heavy emphatic
accent, as in (10.26b).
The negation on the main verb is, though in the canonical position, unable to cancel the factive
presupposition, as shown in (10.32a), though it may affect other presupposi-
tions, as is made clear by (10.32c). Only if the factive subject clause is
extraposed, as in (10.32b), can the (canonically placed) negation cancel the
factive presupposition:
(10.32) a. !That Tom is clever does NOT irritate Joanna. He ISN’T clever!
b. It does NOT irritate Joanna that Tom is clever. He ISN’T clever!
c. That Tom is clever does NOT irritate the king of France. There IS no
king of France.
One notes, moreover, that when the factive clause is pronominalized by
means of that, the factive presupposition still remains intact under negation,
as is shown in (10.33a). But when the negation is reinforced with epistemic
possibility and comes out as cannot, the factive presupposition can be can-
celled, as in (10.33b):
(10.33) a. !That does NOT irritate Joanna. He ISN’T clever! (cf. (10.32a))
b. That CANNOT (possibly) irritate Joanna. He ISN’T clever!
These observations were not made in either Wilson (1975) or Boër and
Lycan (1976). Had they been made, they would have undermined their
analysis.
F. CLEFT AND PSEUDOCLEFT CONSTRUCTIONS
As is well known, cleft and pseudocleft constructions have a specific existen-
tial presupposition associated with the clefted WH-constituent: if in the non-
cleft version of the sentence this constituent requires a really existing
object for the sentence to be true, so does the clefted constituent, whether
in cleft or in pseudocleft constructions. This presupposition is uncancellable
by negation:
(10.34) !What he said was NOT ‘Damn!’. He said nothing at all!
Here the existential presupposition applies to the WH-constituent what he
said, since for something to be said it must, albeit for a brief moment, actually
exist. But other presuppositions not directly associated with the clefted
constituent are fully cancellable:
(10.35) Who wrote the letter was NOT Mr. Davis. Mr. Davis doesn’t exist!
Here it is presupposed that someone wrote the letter, since in order to write a
letter one must actually exist. But Mr. Davis’s existence is not presupposed,
because in the semantic analysis of the cleft sentence be Mr. Davis is the
value-assigning predicate be_v.
G. CONTRASTIVE ACCENTS
Contrastive accents form an exact parallel to the (pseudo)cleft constructions.
In sentences with contrastive accent the accented constituent serves as a
predicate establishing the identity of the entity mentioned in the nonaccented
part. This latter entity is presupposed to exist in all cases where it is in
the corresponding sentence without contrastive accent. This presupposition
cannot be cancelled under negation:
(10.36) !The WAITER did NOT start the argument. Nobody did!
Again, however, other presuppositions, such as those associated with
the accented part functioning as an underlying predicate, are freely can-
cellable:
(10.37) The WAITER did NOT start the argument. There wás no waiter!
H. NEGATIONS WITH NEGATIVE POLARITY ITEMS
As is well known, every language has a, usually large, number of so-called
‘negative polarity items’ (NPIs). These are words, constructions, or expres-
sions which, mostly for unknown reasons, require a negation or, for some
NPIs at least, a negative word, when used in simple declarative sentences.
(Their behaviour in other clause-types differs in ways that have as yet never
been exhaustively studied.) Some, but not all, NPIs allow for emphatic
auxiliaries (do-support when there is no auxiliary) as a form of negativity.
In the examples below the NPIs are italicized. (10.38a) is a standard case.
In (10.38b,c) one has NPIs with negative words (hardly, difficult). (10.38d) is
a case of emphatic do-support:
(10.38) a. She couldn’t possibly have known that.
b. She could hardly breathe any more.
c. It was difficult (*easy) for him to go on any longer.
d. It DOES matter that Jones is an alcoholic.
The negation required in simple assertive clauses with NPIs (if there is no
other negative word and no auxiliary emphasis) is per se presupposition-
preserving, for all presuppositions in the sentence. Thus, the examples of
(10.39) are all felt to be inconsistent, if not outright ungrammatical:
(10.39) a. !It does NOT matter that Jones is an alcoholic. He ISN’T! (factive)
b. !Jones does NOT live in Paris any more. He doesn’t exist!
(existential)
c. !He did NOT at all acknowledge my presence. I wasn’t there!
(factive)
NPIs have a counterpart in so-called ‘positive polarity items’ (PPI). When a
PPI stands directly under negation, the sentence loses its default property of
inviting presuppositional inferences and acquires what is known as an ‘echo-
effect’ to an even stronger degree than in presupposition-cancelling sentences
without a PPI: it sounds as if the same sentence but without the negation has
been uttered (or strongly suggested) in immediately preceding discourse,
preferably by a different speaker. Take, for example, the PPI still, which
induces the presupposition that what is said in the rest of the sentence, if in
the present tense, was true at least till the moment of utterance, and the
sentence as a whole asserts that that situation continues to obtain. Contrast
this with the NPI any more, which induces the same presupposition but
lets the sentence, with the obligatory negation, assert that that situation has
ceased to obtain. Thus, given a sentence with the PPI still, its natural negation
will not be that sentence with the default-cancelling and ‘echoing’ NOT
but rather that sentence with still replaced by not . . . any more, as in the
following pair:9
(10.40) a. Harold still lives in Paris.
b. Harold doesn’t live in Paris any more.
The test is now that the presuppositions of (10.40a) have not become invited
inferences but are cancelled altogether when simple not is inserted, whereas
those of (10.40b) are not cancellable, just as in (10.39b):
(10.41) a. Harold does NOT still live in Paris: he has never set foot in France.
b. !Harold doesn’t live in Paris any more: he has never set foot in
France.
Examples of English PPIs are (see also Seuren 1985: 233): rather, far from,
hardly, terrific, daunting, ravenous, staunch, as fit as a fiddle, at most, at least,
perhaps, already, certainly, surely, awful, even, each, both, most, some, several,
few, not. Note that the negation word not is itself a PPI: a succession of two or
more occurrences of not has the effect of cancelling all presuppositions and
creating an echo. But if there is no stark succession of two nots, as in (10.42)
below, they can both be presupposition-preserving.

9. For the reader's reassurance, I have often tested this out with my students. The reader, if in a teaching position, might do the same. He or she will then find out that, when the students are asked to give the negation of a sentence like (10.40a), their answer will be (10.40b).
Thus, generally, when a PPI stands in the immediate scope of NOT it cancels
the presuppositions of the sentence, not leaving even an invited inference. It
then also produces an echo-effect. However, Baker (1970) observed that,
interestingly, this is not so when there is double negation (other than stark
succession of nots), as in (10.42), with the PPI rather:
(10.42) There is nobody here who wouldn’t rather be in Montpelier.
This sentence carries no echo-effect and contains no radical negation, despite
the occurrence of rather. Baker’s observations are tantalizing, but still unex-
plained.
A further unexplained complication is that some, but not all, PPIs can
stand under an unaccented not when an explicit or implicit comparison is
made:
(10.43) a. You are not still building (as we are).
b. She hadn’t already finished (as you had).
Such sentences have a (slight) echo-effect, but preserve presuppositions in so
far as these are not induced by still or already.
PPIs are generally excluded in the scope of implicitly negative operators (or,
if one prefers, operators with underlying negation), such as the comparative
than, as shown in (10.44a). In (10.44b), the PPI some is outside the scope of
than (as opposed to any in the same position); this sentence is interpreted as
‘there are some of her colleagues who she is richer than’ (‘*’: ungrammatical):
(10.44) a. *She is richer than you already/still are.
b. She is richer than some of her colleagues.
If the comparative particle than is analysed as containing an underlying
negation (see Seuren 1973), this agrees with the observation made in A
above that morphologically incorporated negations are necessarily presuppo-
sition-preserving and cannot take PPIs in their immediate scope.
Just as it is, for the most part, unknown what causes the phenomena
mentioned under A–H, it is not known what system or mechanism is
responsible for the emergence of polarity items, whether positive or negative,
and their behaviour. Nor is much known about the question of what factors
lie behind the fact that often the negation word cancels presuppositions as
entailments but leaves them as invited inferences, while in certain classes of
cases it preserves some or all of the presuppositions in the sentence at hand,
and in other classes of cases it eliminates even the invited inference of the
presuppositions—the projection problem. It would seem that a theory of
topic–comment modulation might lay bare the grounds of the necessary
preservation of presuppositions in the categories E (non-extraposed factive
clauses), F ((pseudo)clefts) and G (contrastive accents). Sentences that fall
under these categories have a grammatically fixed topic–comment structure
built into them in such a way that the presupposition adheres to the topic,
and presupposition-cancelling can probably be shown to be incompatible
with topic-hood. Yet on the whole, our theoretical insights still fall short of
an explanation of the facts concerned.
Even so, however, the answer cannot be merely that the negation operator
in language is just the simple bivalent truth-functional operator known from
standard logic, somehow modified by pragmatic factors. Pragmatic types of
analysis are in principle unable to cope with the clear-cut difference between
the cases where presuppositions are necessarily preserved and those where
they are necessarily cancelled. The minimal conclusion to be drawn is that
there are at least three systematically differing ways of using the negation:
(i) with the presuppositions necessarily preserved, (ii) with the presupposi-
tions reduced to invited inferences, and (iii) with even the invited inferences
removed. The question is now: what theory has the best chance of coming
to grips with the facts observed above? A Gricean pragmatic theory may
be considered for certain peripheral parts of the question, but it does not
seem the first choice for the central problems, given the known failure, so
far, of such theories in those areas. The observed facts are anyway too
linguistically structural to be a natural object for pragmatics, whose typical
hunting ground is the nonlinguistic interactional aspects of communication
by means of utterance tokens.
If presupposition is not a pragmatic phenomenon, is it a logical phenome-
non? Again, the answer appears to have to be negative. If, as we have posited,
presuppositions originate in the preconditions of predicates, and if the
raison d’être of these preconditions is to restrict the use of predicates to
certain classes of situations, then presupposition is primarily a semantic
property of sentences, whose function it is to restrict the use of sentences
to certain classes of contexts (discourses). Presupposition is thus a discourse-
semantic phenomenon with, as one might expect, consequences for the logic
of language. These consequences, however, are epiphenomenal on the true
nature of presuppositions; they do not define presuppositions. This enables us
10. The inverse does not hold. It is possible for both B and ~B to entail A, without A being a presupposition of B (or ~B). For example, both B and ~B entail any necessary truth T, but T is most probably not a presupposition of B (or ~B). Or consider the fact that if B and/or ~B entail A, they also entail A ∨ C (the entailment schema of addition; see Section 3.3.2), where C is any arbitrary sentence, but A ∨ C is highly unlikely to be a presupposition of B. Moreover, as is shown in Section 10.6, conjunctions of the form A ∧ BA cannot be taken to presuppose A, even though (minimal or radical) falsity of A leads to the radical falsity of A ∧ BA.
been spelled out explicitly, their post hoc suppletion will take them as far as
possible into the higher domains. That process is stopped only when the
introduction into a higher domain leads to inconsistency either with the
domain itself or with independently available situational or world knowledge.
In other words, E-presuppositions will percolate upward as long as they
are compatible with any higher domain they are about to invade. It is for
that reason that presupposition projection is, normally speaking, a default
process, which can be overruled by contrary information. It is stopped by
inconsistency and enforced when the presupposition in question is entailed in
the domain in question. In between there is a gliding scale of possibilities,
which is analysed in some detail in the present section.
For sentences that are looked at in isolation, regardless of any specific
context they may occur in, the question of whether an E-presupposition
makes it upward or not, and if so in what form, depends on the predicate
or the instruction that has created the subdomain, given the principles of
D-construction. Following Karttunen (1973: 178), the literature has made a
distinction between (a) HOLES: predicates which cannot stop presuppositions
and let them through as full entailing presuppositions because they entail and
often presuppose their argument clauses, (b) FILTERS: predicates which let
presuppositions through but in the weakened form of default inferences,
and (c) PLUGS: predicates which categorically stop presuppositions from
creeping upward.
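For concreteness, the trichotomy and the default character of projection can be put into a toy program. The sketch below is my own schematic rendering, not the book's formalism: a few predicates are sorted into Karttunen's three classes by hand, propositions are plain strings, and a higher domain D is a set of such strings.

```python
# Toy rendering of presupposition projection with Karttunen's trichotomy.
# Propositions are plain strings; "not:p" stands for the negation of "p".
# The predicate classification and all names are illustrative assumptions.

def negate(p):
    return p[4:] if p.startswith("not:") else "not:" + p

HOLES   = {"realize", "regret"}   # entail their complement: projection obligatory
FILTERS = {"hope", "believe"}     # project only as cancellable default inferences
PLUGS   = {"say"}                 # stop presuppositions from creeping upward

def project(predicate, presupposition, domain):
    """Return the higher domain D after attempting to project the
    complement clause's presupposition through the given predicate."""
    if predicate in PLUGS:
        return frozenset(domain)                 # nothing percolates upward
    if negate(presupposition) in domain:
        if predicate in HOLES:
            # a hole's presupposition is entailed: a contrary D is refused
            raise ValueError("inconsistent discourse domain")
        return frozenset(domain)                 # filter: default inference cancelled
    return frozenset(domain) | {presupposition}  # compatible D: projected

# 'Joan hoped that her son ...' projects 'Joan has a son' into a silent D ...
print(project("hope", "joan-has-a-son", frozenset()))
# ... but not into a D that already contains the contrary information:
print(project("hope", "joan-has-a-son", frozenset({"not:joan-has-a-son"})))
```

The sketch makes the difference between the three classes purely a matter of what happens when D is contrary: holes are refused, filters quietly drop the default inference, and plugs never project at all.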
During the 1970s and 1980s, presupposition theory was entirely dominated
by the question of what formalism would account for presupposition projec-
tion, whereby no account was taken of the fact that the entire projection
mechanism is driven by, and becomes transparent in the light of, the Principle
of Maximal Unity that governs discourse incrementation processes. Instead,
one looked for a strictly formal calculus, very much in the tradition of formal
semantics that was en vogue during that period. The total disregard for the
ecology of the phenomena at issue led to a situation where the debate became
sterile and fruitless, as a result of which it inevitably petered out. Now that a
more realist course is taken in discourse semantics and some greater clarity
exists regarding D-structures and incrementation processes, the question can
be looked at in a new and more explanatory light.
(10.46) Joan hoped that her son’s new girlfriend would have better manners
than the previous one.
Taken in isolation (10.46) carries the following default inferences: (a) Joan
had a son, (b) Joan’s son had a girlfriend, (c) Joan’s son had a previous
girlfriend, and (d) her son’s previous girlfriend’s manners could be improved.
In a context containing the information (a), (b), (c), and (d), (10.46) is
perfectly sequential—that is, consistent with the context given, orderly, and
informative. Yet none of these inferences are entailed, as they can all be
cancelled. Let us start with (d). (10.46) is still fully sequential if preceding
context says that Joan’s son’s previous girlfriend had perfectly good manners
but Joan herself thought less well of the manners of this girl. The default
inference (c) is equally vulnerable, because the preceding context may contain
the information that Joan’s son’s actual girlfriend was his first one, in which
case the default inference (c) is cancelled and prevented from projecting, but
the text remains fully sequential. Likewise for the default inference (b),
because preceding text may have told the listener/reader that Joan had been
misinformed and that, in fact, her son never had a girlfriend at all, which still
leaves (10.46) fully sequential. Similarly again for the default inference (a),
which can likewise be eliminated by preceding context. D may contain the
information that Joan lives in a world of her own making. Poor Joan never
had any children but in her own fantasy she had a son, who once had an ill-
mannered girlfriend. In such a context, (10.46) may well describe Joan’s latest
delusion. In sum, the more default inferences are scrapped, the more Joan
seems to be out of touch with reality. The conclusion is that the verb Hope is
a so-called ‘filter’, letting through its E-presuppositions as default inferences
as long as they are not stopped by contrary information stored in higher
domains or in available knowledge.
One would expect presuppositions that do not make it into a higher
domain to stay put as presuppositions of the domain in which they have
been generated. While this is true for many subdomain-creating predicates, it
is not true for all. As was pointed out in Section 7.3.3, there appears to be a
class of predicates that do not let their subdomains take in nonprojected
presuppositions but, instead, send them to a subdomain created by a predi-
cate that is higher in the subdomain hierarchy. Hope is one such predicate. In
a context where the default inference (d) of (10.46) is blocked, (10.46) is not
interpreted as saying something like (10.47a), but rather as saying something
like (10.47b). The blocking of the default inference (c) requires a specification
of a subdomain for Joan’s belief about her son’s previous girlfriend or
girlfriends:
(10.47) a. Joan hoped that her son’s previous girlfriend’s manners could be
improved and that her son’s new girlfriend would have better
manners.
b. Joan believed that her son’s previous girlfriend’s manners could
be improved and she hoped that her son’s new girlfriend would
have better manners.
Those subdomains that need an appeal to a different subdomain for the
storage of nonprojected presuppositions are called subsidiary subdomains,
while the receiving subdomain is named recipient subdomain (Section 7.3.3).
As far as can be seen, given the present state of the enquiry, it looks as if
subsidiary subdomains are created or continued either exclusively or typically
by emotive complement-taking predicates.
These include predicates like hope or fear, but also the weak factive pre-
dicates like regret, or surprise. In the case of strong or weak factives, projection
does not primarily affect the E-presuppositions of the weak factive comple-
ment clause but the clause itself in its entirety or any of its semantic, including
presuppositional, entailments. Weak factives differ from ordinary strong
factives in that a negation of their presupposed that-clauses in D or in
available knowledge strongly resists, but in the end does yield to, the incre-
mentation of the sentences in the superordinate D (see note 7). In Sections 3.2
and 6.2.3.2 of Volume I it was observed that weak factives do not allow for
substitution salva veritate of topic–comment modulation and the following
examples were given, repeated here as (10.48a,b):
(10.48) a. It surprised/angered Ann that JOHN (and not Kevin) had sold the car.
b. It surprised/angered Ann that John had sold THE CAR (and not the
speedboat).
It requires little effort to see that (10.48a) differs truth-conditionally from
(10.48b)—a fact that has so far remained undiscussed in the semantics or
pragmatics literature.
Weak factive predicates are thus almost ‘holes’, but not quite. Contextual
overruling seems possible, since sentences like (10.49a,b) are still sequential—
that is, consistent and informative—albeit perhaps with some difficulty:
(10.49) a. Kevin had been falsely told that John had left the country and it
surprised/angered him that John had done that.
b. Someone had whispered into Ann’s ear that John had sold the car,
although John had done nothing of the sort, and it surprised/
angered her that JOHN had done that.
The factive predicate realize requires the incrementation of its object clause
before it can itself be incremented, and, with its object clause, also the
presuppositions thereof.
(10.52) She realized that she would never get her husband to give up
smoking.
In Karttunen’s terms, the predicate realize is, therefore, also a ‘hole’, but not
for the same reason that makes causative predicates ‘holes’.
Now to the epistemic modal predicates may and must. Epistemic may,
discussed earlier in Section 7.2.1.3, strongly projects its E-presuppositions into
any D that is compatible with them. It has the peculiar property that, on the
one hand, it does not entail its argument clause or that clause’s E-presupposi-
tions, while, on the other hand, the whole modal sentence is refused in a D
(with the concomitant knowledge base) where that argument clause or any of
its E-presuppositions have been declared false, as is shown by (10.53b) below.
Incrementation in an incompatible D of a sentence with a main predicate of
epistemic possibility results in a strong intuition of inconsistency. Yet such a
sentence does not entail its argument clause. When the current D is not
explicit on the truth or falsity of the complement clause and/or its presuppo-
sitions and thus leaves open the possibility that they are true, the E-presup-
positions are projected into a subdomain of epistemic possibility. We shall see
in a moment that the truth-functional disjunctive operator OR shares with
MAY the property of being blocked by negative information on either of its
disjuncts in D while not entailing them, merely requiring compatibility. Just
as MAY, OR strongly projects its E-presuppositions in any compatible D, but
does not entail them. What is entailed is that the disjuncts, together with their
projected E-presuppositions, are possible.
The following sentences illustrate the projection properties of epistemic
MAY:
(10.53) a. Andy may have left his car in the garage.
b. !Andy doesn’t have a car but he may have left it in the garage.
c. Andy may have a car and he may have left it in the garage.
d. Andy may not have a car but he may have left it in the garage.
That a contrary D is refused is shown by (10.53b), which is strongly felt to be
inconsistent. The reason for this is that,
as argued in Section 7.2.1.3, epistemic MAY requires truth of the relevant
present knowledge state K, besides the requirement that the embedded
infinitival clause is consistent with K. This makes it inconsistent to say first
that Andy has no car and then that he left his car in the garage. But it does
allow for a knowledge state that is not explicit on whether Andy has a car,
because Andy having left his car in the garage is consistent with such a
knowledge state.
Sentence (10.53b) is thus felt to be inconsistent. (10.53c) shows that the E-
presuppositions of MAY project into a subdomain of possibility unless blocked
by negative information in D. And (10.53d) shows that ‘Andy has a car’ is not
entailed by (10.53a).
Like MAY, epistemic MUST requires a knowledge state that contains or still
admits the incrementation of ‘Andy has a car’. But unlike MAY, epistemic MUST
entails its embedded clause, because it requires for truth that the relevant
knowledge state K of the speaker is correct and that the deduction schema
followed for the MUST-statement is valid. These two conditions suffice for the
entailment of the embedded clause complete with its presuppositions. The
difference between epistemic MAY and MUST shows up in (10.54a–d). (10.54a)
and (10.54b) run parallel with (10.53a) and (10.53b), respectively. (10.54c) is
incoherent on account of the fact that it violates the domain hierarchy for
epistemic inference shown in Figure 7.3 in Section 7.3.3. The incoherence of
(10.54d) shows that (10.54a) entails ‘Andy has a car’ (according to the opera-
tional criterion developed in Section 10.3):
(10.54) a. Andy must have left his car in the garage.
b. !!Andy doesn’t have a car but he must have left it in the garage.
c. !Andy may have a car and he must have left it in the garage.
d. !Andy may not have a car but he must have left it in the garage.
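The asymmetry between the two modals, compatibility with the knowledge state K for MAY versus support by K for MUST, can be sketched as follows. The set-of-literals encoding of K and all example names are illustrative assumptions of mine, not the book's apparatus.

```python
# Schematic contrast between epistemic MAY and MUST relative to a
# knowledge state K, here crudely modelled as a set of literal strings.

def negate(p):
    return p[4:] if p.startswith("not:") else "not:" + p

def may_admissible(clause_parts, K):
    """MAY needs only compatibility: K must not contain the negation of
    the embedded clause or of any of its E-presuppositions."""
    return all(negate(p) not in K for p in clause_parts)

def must_entails(clause_parts, K):
    """MUST, on this toy rendering, needs K to support every part, so
    the embedded clause and its presuppositions are entailed."""
    return all(p in K for p in clause_parts)

# 'Andy may/must have left his car in the garage':
clause = ["andy-left-car-in-garage", "andy-has-a-car"]  # clause + E-presupposition

K_silent = {"andy-lives-here"}        # says nothing about a car
K_no_car = {"not:andy-has-a-car"}     # contrary K, cf. (10.53b)/(10.54b)

print(may_admissible(clause, K_silent))   # MAY is fine when K is merely silent
print(may_admissible(clause, K_no_car))   # ... but refused in a contrary K
print(must_entails(clause, K_silent))     # MUST needs positive support from K
```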
11. A large part of the recent literature on the projection properties of the propositional connectives
is, though formally very elaborate, in fact irrelevant, as it fails to take into account the ecological
embedding of language in cognition and society. To take just one example, Schlenker (2007) criticizes
the dynamic approach proposed in Heim (1990) and in principle endorsed in the present book on the
grounds that (Schlenker 2007: 327–8):
Heim’s dynamic semantics is just too powerful: it can provide a semantics for a variety of operators and connectives
which are never found in natural language. To make the point concrete, it suffices to observe that in Heim’s
framework one could easily define a deviant conjunction and* with the same classical content as and but a different
projection behavior […] with the order of the conjuncts reversed.
It should be obvious that Schlenker’s and* is unnatural for reasons that have nothing to do with the
formalities of presupposition projection and everything to do with the fact that the order of
incrementation corresponds with the order of presentation of successive utterances (with post hoc
suppletion as a functional artifice making it possible to economize on the effort of speaking). One
thus sees how research can get out of touch with reality as a result of an all too narrow formalist
approach.
· Negation
In the light of what has been said in Section 10.4, we can be brief about the
projection properties of the minimal and radical negation operators. The
minimal negation entails and thus obligatorily projects the E-presuppositions
of its argument L-proposition. When any such E-presupposition is incom-
patible with D, the negative sentence is inadmissible in D. In such cases, the
speaker/listener must fall back on the radical negation, which takes its argu-
ment L-proposition as a linguistic object (hence the echo-effect induced by
radical NOT) and declares it inadmissible in D on account of presupposition
failure.
· Disjunction
The disjunctive operator OR, as has been said, is in many ways like epistemic
MAY. It does not entail its disjuncts and projects their E-presuppositions as
default inferences so as to ensure maximal unity in the overall discourse
domain. But it requires a superordinate D that is compatible with the
disjuncts as well as with their negations, since there is no point in presenting
the disjuncts as possible increments if D has already banned them. When the
superordinate D is compatible with but silent about the E-presuppositions of
a disjunct, the E-presuppositions in question are projected into D as relatively
strong default inferences and they are entailed in a subdomain of epistemic
possibility: P (AND NOT-Q) OR (NOT-P AND Q) entails that both P (AND NOT-Q)
and (NOT-P AND Q) are epistemically possible.
The OR-expansion defined in (8.67) of Section 8.2.3 incorporates AND but
this makes no difference for the projection mechanism. All one has is two or
more alternative incrementation packages.
· Implication
In Section 8.2.4 we surmised that the conditional structure IF P then Q is, at a
basic-natural level, expanded to the biconditional IF P then Q AND IF Q then
P and at a strict-natural level to IF P then P AND Q, which is logi-
cally equivalent with the standard material implication. Two alternative
subdomains are set up, one for P AND Q and one for NOT-P AND NOT-Q at a
basic-natural, and for just NOT-P at a strict-natural level. The difference with
OR is that, for implication, the relation between the overt antecedent and the
overt consequent clause is the dynamic relation that exists between successive
conjuncts. The disjunctive operator OR has no such relation between the overt
disjuncts.
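The equivalence claimed for the strict-natural expansion is a matter of bivalent propositional logic and can be verified mechanically; the snippet below is merely a brute-force check of that one claim.

```python
# Brute-force check that IF P then (P AND Q) has the same bivalent truth
# conditions as the material implication IF P then Q.
from itertools import product

for p, q in product((True, False), repeat=2):
    strict_natural = (not p) or (p and q)   # IF P then (P AND Q)
    material       = (not p) or q           # material implication
    assert strict_natural == material
print("equivalent in all four bivalent valuations")
```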
The projection properties of natural language implication follow from this
description. The antecedent clause requires a D that is compatible with it and
hence with its E-presuppositions, so that the whole conditional sentence is
inadmissible in a D that contains information contrary to what is said or
presupposed in the antecedent clause. The consequent clause requires com-
patibility with D as it is after the incrementation of the antecedent clause.
Primary anaphora is thus allowed in conditionals, as shown in (10.56a). In
(10.56b), the antecedent clause presupposes that Nancy has a husband. If D
contains information that is incompatible with the news that Nancy has a
husband, the whole of (10.56b) is inadmissible in D. (10.56b) thus entails that
it is possible that Nancy has a husband and it projects the invited inference
that she has one, in accordance with the Principle of Maximal Unity proposed
in Section 7.2.2. The consequent clause of (10.56b) has the factive, and thus
entailed, E-presupposition that Nancy’s Norwegian husband is faithful. This
E-presupposition is obligatorily projected not in the D as it was before
IP started work on (10.56b) but in the D as it will be after the antecedent is
incremented, ensuring that a text like ‘Nancy has a husband. He is Norwegian.
He is faithful. She knows that he is.’ is fully sequential—that is, consistent with
the context given, orderly, and informative.
(10.56) a. If Nancy has a husband, he is Norwegian.
b. If Nancy’s husband is Norwegian, she knows that he is faithful.
All the, partly far-fetched, examples adduced in Gazdar (1979: 83–7)
and later literature regarding the projection properties of conditionals are
accounted for by the incrementation mechanism proposed here and the
ecological context in which it is placed.
The scale of projection behaviour, from strong (1) to weak (5), can be
tabulated as follows:

                   Requires compatible D
                   refused in       not refused in     indifferent      requires
                   contrary D       contrary D         to contrary D    contrary D
  Entailed         1 causatives,    2 weak (emotive)
                     factives         factives
  Not entailed     3 epistemic MAY                     4 belief verbs   5
(10.60) a. John has no dog but he thinks that his dog is in quarantine.
b. John has no dog but he hopes that his dog is in quarantine.
c. John has no dog but he thinks he has one and he hopes that it is in
quarantine.
d. John has no dog but he thinks he has one and he is under the
illusion that it is in quarantine.
e. John has no dog but, if he had one and it was in quarantine, he
would miss it.
Sentence (10.60a) is a classical example of projection blocking: since D is incon-
sistent with the E-presupposition that John has a dog, this E-presupposition is
not projected but stays with John’s belief-domain. (10.60b,c,d) are examples
of projection blocking whereby the blocked E-presupposition is confined
to the recipient subdomain of John’s beliefs. (10.60d) instantiates the ANTIFACTIVE
main verb be under the illusion. Antifactives are predicates inducing a presuppo-
sition not of the truth but of the falsity of their embedded clauses. Examples
are be under the illusion that, lie that, falsely suggest that, or German wähnen
(used by Frege in his 1892 article). They form a neglected category, yet it is a
real one. In so far as antifactives involve a sincere but false belief, as with be
under the illusion that, their E-presuppositions are relegated to a recipient sub-
domain of belief.
(10.60e) is a counterfactual construction. Counterfactual IF carries the
precondition that the clause C in its scope has been coded in D as being
false. Thus, the sentence If John’s dog were in quarantine, John would miss
it presupposes that John’s dog is not in quarantine, while the counterfactual
IF-clause has the E-presupposition that John has a dog. Now in cases where
D says or entails that John has no dog, it also entails that John’s dog cannot
be in quarantine, thus providing a proper anchoring base for (10.60e). The
E-presupposition that John has a dog is now prevented from projecting and
must stay within the subdomain created by counterfactual IF.
A      ~A     ≈A
T      F1     F1
F1     T      F1
F2     F2     T

A ∧ B      B = T     B = F1    B = F2
A = T      T (1)     F1 (2)    F2 (4)
A = F1     F1 (2)    F1 (3)    F2 (5)
A = F2     F2 (4)    F2 (5)    F2 (6)

A ∨ B      B = T     B = F1    B = F2
A = T      T (1)     T (2)     T (4)
A = F1     T (2)     F1 (3)    F1 (5)
A = F2     T (4)     F1 (5)    F2 (6)

(The bracketed numbers index the cells according to the six valuation-space
classes used below.)
The minimal negation operator (~) makes minimal falsity true and leaves
radical falsity unaffected, whereas the radical negation operator (≃) makes
radical falsity true and leaves minimal falsity unaffected, both making truth
minimally false. The ~ operator yields truth when all preconditions of the
main predicate are satisfied but at least one update condition is not. The ≃
operator says that at least one precondition of the main predicate is not
satisfied. The standard bivalent negation ¬, though probably not occurring
in natural language, remains operative in that it yields truth when either a
precondition or an update condition is not satisfied. Thus, for any sentence
A, ¬A ≡ ~A ∨ ≃A.
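The three negations and the identity just given can be checked mechanically. The following Python sketch is an editorial illustration, not part of the book's formal apparatus; all identifiers are invented:

```python
# The three negations of PPropC3 as value maps, checking ¬A ≡ ~A ∨ ≃A.
T, F1, F2 = "T", "F1", "F2"

neg_min = {T: F1, F1: T,  F2: F2}   # ~  (minimal negation)
neg_rad = {T: F1, F1: F1, F2: T}    # ≃  (radical negation)
neg_std = {T: F1, F1: T,  F2: T}    # ¬  (bivalent: true iff A is not true)

def disj(a, b):                      # ∨: T infectious, then F1
    if T in (a, b):
        return T
    return F1 if F1 in (a, b) else F2

# ¬A ≡ ~A ∨ ≃A holds value by value:
for v in (T, F1, F2):
    assert neg_std[v] == disj(neg_min[v], neg_rad[v])
```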
Conjunction (∧) is defined by the condition that radical falsity (F2) is
infectious in the sense that when at least one conjunct is radically false, so is
the conjunction. Then when neither conjunct is radically false, minimal falsity
(F1) is infectious in that, when at least one conjunct is minimally false, so is
the conjunction. A conjunction is true only when both conjuncts are true.
Conversely, disjunction (∨) is defined by the condition that truth (T) is
infectious in the sense that when at least one disjunct is true, so is the
disjunction. Then, when neither disjunct is true, minimal falsity (F1) is
infectious in that, when at least one disjunct is minimally false, so is the
disjunction. A disjunction is radically false only when both disjuncts are
radically false.
VS:  1    2    2    3    4    4    5    5    6
A:   T    T    F1   F1   T    F2   F1   F2   F2
B:   T    F1   T    F1   F2   T    F2   F1   F2
[FIGURE 10.6 VS-model of the operators AND, OR, AND*, OR* and their minimal and radical negations in PPropC3, over the valuation spaces 1–6]
Meanwhile, the reader will quickly ascertain that the following expression
types of PPropC3 correspond with the valuation spaces specified:

/AND/ = {1}        /~AND/ = {2,3}     /≃AND/ = {4,5,6}
/OR/ = {1,2,4}     /~OR/ = {3,5}      /≃OR/ = {6}
/AND*/ = {3}       /~AND*/ = {1,2}    /≃AND*/ = {4,5,6}
/OR*/ = {2,3,5}    /~OR*/ = {1,4}     /≃OR*/ = {6}
One also finds that the following duality relations hold between AND and OR:

≃(A ∧ B) ≡ ≃A ∨ ≃B        ≃(A ∨ B) ≡ ≃A ∧ ≃B

That is, AND and OR are duals only under the radical negation. Moreover,
De Morgan's laws do hold for the radical negation ≃ and also for the standard
negation ¬, but not for the minimal negation ~, as is easily shown. This means
that PPropC3 is isomorphic with standard bivalent propositional calculus for
the negation operators ≃ and ¬, but not for the minimal negation ~. Figure
10.6 shows immediately that, for the minimal negation, AND ⊨ ~OR* but not
     ∨   T    F1   F2            •   T    F1   F2
A:   T   F1   T    T        A:   T   F1   F1   F2
     F1  T    F1   F1            F1  F1   T    F2
     F2  T    F1   F2            F2  F2   F2   F2

FIGURE 10.7 Truth tables for OR (∨) and NOR (•) in basic-natural PPropC3
(columns give the value of B; cell-by-cell, the valuation spaces are: row T: 1, 2, 4; row F1: 2, 3, 5; row F2: 4, 5, 6)
vice versa, and ~AND* ⊨ OR but not vice versa. Moreover, ~AND ⊨ OR* and
AND* ⊨ ~OR, but not vice versa.
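These valuation spaces, the De Morgan facts just discussed, and the entailments can all be recomputed with a small Python sketch. The sketch is an editorial illustration with invented identifiers; the encoding of the operators follows the truth tables given above:

```python
# Trivalent presuppositional calculus PPropC3: T, F1 (minimally false),
# F2 (radically false).
T, F1, F2 = "T", "F1", "F2"

def neg_min(v): return {T: F1, F1: T, F2: F2}[v]   # ~
def neg_rad(v): return {T: F1, F1: F1, F2: T}[v]   # ≃

def conj(a, b):                      # ∧: F2 infectious, then F1
    if F2 in (a, b):
        return F2
    return F1 if F1 in (a, b) else T

def disj(a, b):                      # ∨: T infectious, then F1
    if T in (a, b):
        return T
    return F1 if F1 in (a, b) else F2

# Valuation spaces 1-6 for the value pairs <A,B>:
SPACE = {(T, T): 1, (T, F1): 2, (F1, T): 2, (F1, F1): 3,
         (T, F2): 4, (F2, T): 4, (F1, F2): 5, (F2, F1): 5, (F2, F2): 6}

def vs(f):
    """Set of spaces in which the schema f is true."""
    return {n for pair, n in SPACE.items() if f(*pair) == T}

AND = conj
OR = disj
AND_star = lambda a, b: conj(neg_min(a), neg_min(b))   # AND*
OR_star  = lambda a, b: disj(neg_min(a), neg_min(b))   # OR*

assert vs(AND) == {1} and vs(OR) == {1, 2, 4}
assert vs(lambda a, b: neg_min(AND(a, b))) == {2, 3}       # /~AND/
assert vs(lambda a, b: neg_rad(AND(a, b))) == {4, 5, 6}    # /≃AND/
assert vs(AND_star) == {3} and vs(OR_star) == {2, 3, 5}
assert vs(lambda a, b: neg_min(OR_star(a, b))) == {1, 4}   # /~OR*/

# De Morgan holds for the radical negation, fails for the minimal one:
assert all(neg_rad(disj(a, b)) == conj(neg_rad(a), neg_rad(b))
           for (a, b) in SPACE)
assert any(neg_min(disj(a, b)) != conj(neg_min(a), neg_min(b))
           for (a, b) in SPACE)

# AND entails ~OR* (VS inclusion) but not vice versa:
assert vs(AND) < vs(lambda a, b: neg_min(OR_star(a, b)))
```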
As is shown in Weijters (1985), PPropC3 can be expanded to PPropCn with
n–1 negations and n truth values, for n > 1. Standard propositional calculus is
thus seen to be just the extreme minimal instance of PPropCn, while PPropC3
represents the three-valued variant. In Seuren et al. (2001) it is shown that the
Kleene calculus (Kleene 1938, 1952) is likewise the three-valued variant of a
system with, in principle, an unlimited number of truth values between, not
beyond, any given pair of successive truth values. The Kleene set of many-
valued propositional logics can be combined with the presuppositional set
of many-valued logics, resulting in a system of propositional logic with an
unlimited number of definite truth values and an equally unlimited number
of values that are intermediate between any pair of successive definite truth
values. This rather extends the number of possible propositional logics
natural language could choose from when it started on its evolutionary path.
Basic-natural PPropC3 differs in certain respects from its strict-natural or
standard counterpart. The operators ~, ≃ and ∧ are not affected, but the
operator ∨ is. Moreover, as one recalls from Section 3.4.2, there is the further
operator NEITHER . . . NOR, symbolized here as '•', and occurring in the sentence
type NOR, which is defined as ~A ∧ ~B. The three-valued truth tables for
basic-natural PPropC3 are as given in Figure 10.7.
Basic-natural (exclusive) OR differs from strict-natural or standard OR in that it
produces truth only when one of the constituent L-propositions is true. Given
this, it is still so that T has priority over F1 and F1 over F2. Basic-natural NOR is
defined on the basis of the assumption that the negations involved are minimal
negations. On this assumption, NOR produces truth only when both (all) constit-
uent L-propositions have the value F1—that is, in space 3. Given this, the principle
holds that F2 has priority over F1 and F1 over T (as for the conjunctive operator
∧). The corresponding VS-model is shown in Figure 10.8.
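The basic-natural tables can be checked the same way. This sketch (again an editorial illustration with invented identifiers) encodes the exclusive OR and NOR of Figure 10.7 and confirms that OR is true exactly in spaces 2 and 4 and NOR in space 3:

```python
# Basic-natural PPropC3: exclusive OR and NEITHER-NOR, per Figure 10.7.
T, F1, F2 = "T", "F1", "F2"

def neg_min(v): return {T: F1, F1: T, F2: F2}[v]     # ~ (unaffected)

def conj(a, b):                                      # ∧ (unaffected)
    if F2 in (a, b):
        return F2
    return F1 if F1 in (a, b) else T

def disj_bn(a, b):                                   # basic-natural (exclusive) OR
    if (a == T) != (b == T):                         # exactly one disjunct true
        return T
    return F2 if a == b == F2 else F1

def nor_bn(a, b):                                    # NEITHER-NOR (•)
    if F2 in (a, b):                                 # F2 has priority over F1, F1 over T
        return F2
    return T if a == b == F1 else F1

SPACE = {(T, T): 1, (T, F1): 2, (F1, T): 2, (F1, F1): 3,
         (T, F2): 4, (F2, T): 4, (F1, F2): 5, (F2, F1): 5, (F2, F2): 6}

def vs(f):
    return {n for pair, n in SPACE.items() if f(*pair) == T}

assert vs(disj_bn) == {2, 4}     # true only when exactly one disjunct is true
assert vs(nor_bn) == {3}         # true only when both values are F1

# NOR coincides value by value with ~A ∧ ~B, hence NOR ≡ AND*:
assert all(nor_bn(a, b) == conj(neg_min(a), neg_min(b)) for (a, b) in SPACE)
```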
Here we see that NOR is equivalent with AND*, both sharing the VS {3}.
Moreover, OR* entails ~AND but not vice versa, since {2} ⊂ {2,3}.

[FIGURE 10.8 VS-model of the basic-natural operators OR, NOR, AND* and their negations, over the valuation spaces 1–6]

This is of
special interest in the light of examples (3.24) and (3.25) of Section 3.4.2,
repeated here as (10.61) and (10.62):
(10.61) a. He doesn’t like planes or trains.
b. He doesn’t like planes and he doesn’t like trains.
This is a problem that plagues all theories of ambiguous not. One might
counter this argument by saying that such ‘universal’ ambiguities do occur.
No language is known, for example, to distinguish formally between the two
senses of:
(10.67) There’s a fly in the middle of the picture.
Such ambiguities are likely to come with the language machine the human
race is natively endowed with. Even so, however, it must be admitted that a
theory that manages to subsume all different varieties of negation under one,
[FIGURE 10.9 VS-models (a) and (b) for the operators A, I, A*, I* and their negations in presuppositional predicate calculus]
Consider the A-form ALL F is G and the corresponding A*-form ALL F is NOT-
G. Since NOT in NOT-G is presupposition-preserving, the complement of [[G]]
is restricted to UR, within which all preconditions of the matrix predicate G
are satisfied. Given that under radical negation truth results only when one or
more of the preconditions of G are not satisfied, and since the preconditions
of G and NOT-G are identical, it follows that the radical negation makes all
eight basic sentence types of the calculus not containing the radical negation
equivalent.
[FIGURE 10.10 VS-models for (a) BNPPredC3 and (b) bivalent basic-natural predicate calculus (BNPC) extended to a full logic, with space 4 covering the situations where [[F]] = Ø]
Figure 10.10a shows the VS-model for BNPPredC3. Figure 10.10b, which
represents bivalent basic natural predicate calculus (BNPC) extended to a full
logic, is repeated from Figure 3.8b to enable the reader to compare the two.
The spaces 1 to 3 in Figure 10.10a,b are reserved for situations where the
preconditions of the G-predicate are satisfied, while space 4 covers those
situations in which they are not (in Figure 10.10b only existential presupposi-
tions are taken into account). As with bivalent BNPC, the spaces 1 to 3
are defined for the following conditions: space 1 for [[F]] ⊆ [[G]], space 2 for
[[F]] properly intersecting with [[G]] or [[G]] ⊂ [[F]], and space 3 for
[[F]] ∩ [[G]] = Ø.
Now we can revert to the problem discussed in Section 4.4 of the intuitive
equivalence of ~A with both I and I* (intuition 7). If it is assumed that, at least
prototypically, NOT ALL F is G is interpreted as the topic–comment (cleft)
structure ‘the F that is G is not all F’, then NOT ALL F is G presupposes and
thus entails SOME F is G and also SOME F is NOT-G, as the two are equivalent.
This establishes the entailment from ~A to both I and I*. Conversely, both I
and I* entail ~A because that is how they are defined in the system of Figure
10.10a: /I/ = /I*/ = {2,4} while /~A/ = {2,3,4}, which means that I, I* ⊨ ~A. We
thus have entailment both ways between ~A and I or I*, and hence equivalence.
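Entailment here amounts to valuation-space inclusion, which is trivial to check mechanically. A minimal sketch with the spaces just quoted (identifiers invented; `notA` stands for ~A):

```python
# X entails Y iff /X/ is a subset of /Y/. Spaces for BNPPredC3 as quoted
# in the text (spaces 1-4, space 4 being the presupposition-failure space).
VS = {"I": {2, 4}, "I_star": {2, 4}, "notA": {2, 3, 4}}

def entails(x, y):
    return VS[x] <= VS[y]

assert entails("I", "notA") and entails("I_star", "notA")
# Inclusion alone gives only this direction; the converse entailment from
# ~A to I rests on the presuppositional argument in the text.
assert not entails("notA", "I")
```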
For PPredC3 this result is not attainable. As one can read from Figure 10.9a,
the conjunction of ~A with its presupposition I gives the VS {2}, whereas
/~A/ = {2,3}, which makes for a one-way entailment from I to ~A, and no
equivalence. Of course, ~A and I* are equivalent because the Conversions
hold in PPredC3 (with minimal negation). Nor is this result attainable for any
presuppositional form of AAPC, as is easily checked.
The conclusion is that, provided one lets ~A presuppose I on account of
topic–comment structure, we may add intuition 7 (NOT ALL F is G ≡ SOME F
is G ≡ SOME F is NOT-G) to the score of BNPC, now transformed into
BNPPredC3, as specified in Section 4.4, which makes BNPPredC3 the clear
overall winner of the systems considered. The only intuition still missing is
intuition 3 (SOME F is G ≡ SOME G is F, or I ≡ I!), but, as has been said, here
we have no remedy. All we can say is that this particular intuition is weaker
than the others and may be due to the fact that cases where SOME F is G is true
while [[G]] ⊂ [[F]] are relatively rare and untypical, which may lead intro-
specting subjects to overlook them. But even if that is so, we have no
explanation for the fact that PNST–5 apparently does not translate directly
into BNPC or its presuppositional counterpart BNPPredC3.
a song). The most powerful predicate logic allowing for existential statements
without complete situational knowledge, the Square of Opposition, is thus
seen to be operative precisely where it needs to be, in the actual, restricted
contexts and situations that utterances are used in. Psychologists rightly speak
about shared speaker–hearer’s knowledge to explain the fact that so much can
be left unsaid. Here we have another device that makes language as efficient as
it is.
But there is a price to be paid for the logical soundness of the Square. First,
it appears necessary to distinguish three truth values for natural language,
true (T), minimally false (F1) and radically false (F2), the two varieties of
falsity being contingent upon whether it is a lexical update condition or
a lexical precondition that has remained unsatisfied. Correspondingly, two
varieties of natural language negation are distinguished, the minimal negation
to repair minimal falsity, and the radical negation to repair radical falsity.
Radical falsity is now interpreted as a cry for discourse correction, which
explains the metalinguistic ‘echo’ the radical negation provokes in actual use.
The resulting trivalent propositional calculus (which differs from that pre-
sented in Kleene 1938, 1952) incorporates standard propositional logic as long
as one stays within the boundaries set by presuppositional conditions. Anal-
ogously, the resulting new trivalent predicate calculus incorporates the Square
as long as there is no presupposition failure. The Square has now been cured
of its logical disease while, at the same time, it has been made to fit into
a flexible, dynamic system of discourse-restricted linguistic interaction—
something no human or superhuman engineer would have thought of,
presumably. In this perspective, the trivalence of natural logic is not really a
concession to the facts of language and cognition. On the contrary, it is a
small investment yielding a huge profit.
A second price to be paid arises from the fact that the logico-semantic
system envisaged and partly developed here requires a nonextensional ontol-
ogy containing ‘Meinongian’ virtual or intensional objects which derive their
being from the fact that they are the product of the cognitive powers of
imagination and representation. They have been thought up one way or
another and thus combine specific identity with a specification that is not
complete on all relevant parameters, as opposed to the complete specification
that is a necessary characteristic of actually existing objects. This will perhaps
cause some consternation among Anglo-Saxon philosophers trained in the
tradition of Russell and Quine. But then, there are plenty of reasons why the
purely extensional ontology that is standard in the Western world is anyway
hopelessly inadequate if one wants to achieve a proper semantic theory of
natural language, regardless of any requirement issuing from a trivalent
logical system. Here again, therefore, if there is a price to pay, that price is not
a concession to Ockham’s principle of minimal assumptions or Quine’s
(anyway unlovable) ‘desert landscape’, but much more an investment that is
necessary for an adequate insight into the nature of human language and
cognition.
continuing the lines set out in Russell (1905) for the analysis of definite
descriptions (Section 10.1.2), a tradition attempting to reduce presuppositions
to pragmatic phenomena (Wilson 1975) and a discourse-semantic tradition,
defended in the present study, reducing presuppositions to lexical precondi-
tions, together with a trivalent logic and two kinds of falsity (Dummett 1973:
425–6; Seuren 1985, 1988, 2000, and many other publications). Given this
diversity of opinions on presuppositional phenomena, one is mystified by
Van der Sandt’s expression ‘the traditional view’.
Moreover, calling presuppositions ‘expressions’, and ‘referring expressions’
at that, is something that no current of thought on presuppositions, tradi-
tional or not, has, to my knowledge (based on forty years of research in
presupposition theory), ever been daring enough to do. It would have helped
if Van der Sandt had mentioned some references, to jog his readers’ memory
or let them find out what could be meant, but, of course, he does not, because
there aren’t any. Nor does he give any further explanation of this highly
original thought. In other words, the first sentence of Van der Sandt’s paper
is simply bizarre.
But perhaps we shouldn’t worry too much about the supposedly ‘tradi-
tional’ but in fact unidentifiable, view, because a few lines down on the same
page we read: ‘For Frege it is referring expressions that give rise to presuppo-
sitions.’ In the light of the previous quote, this should be read (in the
presumed but unidentifiable ‘traditional view’) as: ‘For Frege it is referring
expressions that give rise to referring expressions.’ This is an inauspicious start
of what is presumably meant to be a serious paper. Since the remainder of the
paper hardly adds anything, we can safely let it rest.
Geurts and Beaver (2007), an encyclopedia article on DRT, is equally
evasive as regards the notion of presupposition. These authors define, or
describe, presupposition as follows (Section 5.2): ‘Presuppositions are chunks
of information associated with particular lexical items or syntactic construc-
tions.’ This is a very different notion from the one put forward by Van der
Sandt, but equally unhelpful. An English word like pal and an English
construction like ain’t are ‘associated with’ the ‘chunk of information’ that a
colloquial register is being used. Does that make this information presuppo-
sitional? Presumably not. But then, what distinguishes this information from
the information conveyed by presuppositions? The answer is not to be found
either in this encyclopedia article or anywhere else in the DRT literature.
Geurts and Beaver (2007) thus shares the fuzziness with Van der Sandt (1992)
but it directs its diffusing bundle of darkness in a different direction.
Yet, despite these notional unclarities, the authors in question maintain
that there is one unified mechanism underlying both external-anaphora
The often heard claim that definite determiners, including those thought to
be present in definite pronouns, induce existential presuppositions is false.
As has been amply shown in Seuren (1985, 1988, 1994, 2000) and again in
Section 3.5.1 of Volume I and Section 10.7 of the present volume, existential
presuppositions originate from the extensional character of argument posi-
tions and not from definite determiners. Yet Geurts and Beaver (2007, Section
5.2) still claim that existential presuppositions derive from definite determi-
ners, even though they are fully aware of the arguments put forward in many
publications by me over the past thirty years showing that this position is
untenable. But much as one may try to ignore arguments or theories out of
existence, it doesn’t necessarily make them go away.
Since the first sentence in (10.74) denies the actual existence of a donkey
owned by Pedro, the sequence as a whole is semantically incoherent, though
with a perfectly straightforward anaphora resolution. In (10.76a–d), there is
again no problem as regards anaphora resolution (although, according to
Geurts and Beaver, there should be one in (10.76a)), but they differ from
(10.74) in that they are all fully consistent. Their consistency derives from the
fact that not all predicates involved in the second clauses of (10.76a–d) are
fully extensional with regard to their object-term positions or with regard to
the subdomains they introduce:
(10.76) a. John doesn't own a Ferrari_i. He has simply invented it_i.
        b. John may own a Ferrari_i, but I have never seen it_i.
        c. John may own a Ferrari_i but he may also have simply invented it_i.
        d. Geert has announced that he is going to make an anti-Islam film_i
           but I doubt that it_i will ever materialize.
Example (10.76a) illustrates the fact that anaphora resolution does not
require an antecedent address in the main or truth domain of the discourse
at hand. All it requires is the presence of an appropriate antecedent address in
any domain or subdomain (in this case, the extensional subdomain under
negation). This is quite different from presupposition projection, which is
restricted by conditions of overall semantic consistency. That no clash arises
in (10.76a) is simply due to the fact that the predicate invent (in the sense of
‘pretend there to be’) is intensional with respect to its object term, so that it
does not need an antecedent for it_i in the extensional truth domain. Analogous
analyses apply to (10.76b,c,d).
Of course, this analysis requires an ontology that allows for nonexisting,
virtual entities, which is something most Western philosophers, burdened as
they are with the legacy left by Russell and Quine, have great difficulty with.
They ought, however, to consider that this is apparently the way humans
naturally construct their ontologies, as appears from the fact that humans
refer to and quantify over virtual entities with the same ease and naturalness
as they do with regard to actually existing ones (see Chapter 2 of Volume I for
extensive discussion). Refusing to admit an intensional ontology means inter
alia the inability to explain the coherence of sentences like (10.76a–d).
Closer inspection quickly reveals that there are many cases where anaphora
resolution proceeds without a hitch but where incoherence arises owing to the
blocking of presupposition projection. Consider the following examples:
(10.77) a. ! John_i pretends that he has left but he_i has come back.
        b. ! I know John_i hasn't left, so he_i must have come back.
        c. ! John never had a wife_i. He simply divorced her_i.
        d. ! If John_i pretends to have children_j, they_j must be staying with
           his_i sister.
The first clauses in (10.77a,b) entail that John hasn’t left while the second
clauses presuppose that he has. This clash of entailments blocks the obligatory
upward projection of the embedded presupposition of the second clause and
thus causes the incoherence of (10.77a,b). Yet there is no problem with the
resolution of the anaphor he_i in the second clauses of (10.77a,b), which is an
external anaphor with John as its antecedent.
Similarly in (10.77c), where the first clause denies a presupposition of the
second. Yet there is no problem with the anaphoric pronoun her_i, which
resolves (by primary or donkey anaphora) into a wife in the first clause. In
(10.77d), the first clause entails that John has no children, whereas the second
presupposes that he has, owing to the fact that the predicate stay with one’s
sister is extensional with regard to its subject term and must is a ‘hole’, giving
rise to existential import. But the resolution of the (donkey) anaphor they_j is
entirely unproblematic.
Further examples illustrating free anaphoric access across subdomains are:
(10.78) a. John has no children. So they (the poor creatures) can’t be on
vacation.
b. If Juan had any children, they (the poor creatures) would speak
Spanish.
c. A farmer who has no donkey can’t feed it (the animal).
d. A farmer who has no donkey can still dream about it (the animal).
(The possibility of epithet anaphora (Section 9.2) shows that these are anyway
cases of external and not of bound-variable anaphora, which precludes an
analysis in terms of Geach (1969, 1972); see Section 9.4.1.)
This suffices to show the untenability of the position defended by Van der
Sandt, Geurts and Beaver, and all too easily adopted by other adherents of
DRT. But there is more. Presuppositions are recoverable from their carrier
sentences on account of the lexical meanings of the predicates inducing them,
while no such help is available for anaphors. Thus, while (10.79a) allows for
the conclusion So he was married, no corresponding counterpart is available
for (10.79b):
(10.79) a. Harold wanted to get divorced. (So he was married!)
b. Harold wanted to see her. (Who?)
Van der Sandt’s reply to this argument is that (Van der Sandt 1992: 341)
. . . unlike pronouns, <presuppositions> contain descriptive content which enables
them to accommodate an antecedent in case discourse does not provide one.
But this won’t do. The difference is that presuppositions consist of proposi-
tional content while definite descriptions have descriptive content. Epithet
pronouns, as has been shown, also have descriptive content, yet such descrip-
tive content does not suffice to provide an antecedent:
(10.80) Harold wanted to see the old girl. (Who?)
We must conclude that Van der Sandt’s claim that (Van der Sandt
1992: 341):
. . . presuppositions are just anaphors. They can be treated by basically the same
mechanism that handles the resolution of pronominal and anaphoric expressions.
is confused and bizarre. Not only does it require notional obfuscation for it to
be palatable to outsiders such as the school of DRT-practitioners, it also
confuses the phenomenon itself with a mechanism associated with it. In the
end, DRT still lacks an account of presupposition. Anaphora is anaphora and
presupposition is presupposition. The twain meet only in so far as they link
arms in context-driven utterance interpretation.
11
Topic–comment modulation
What matters here is that what Aristotle had in mind was the attribution of
a specific property to some specific entity, as in, for example, 'Mr. G. has the
property of being obnoxious’. This notion of proposition is schematically
rendered in Figure 11.1 (where one notices that Aristotle had no term for what
we now take to be the subject term in a propositional structure).
This analysis implies that the something of which something is said—the
reference object of what was later to be called the subject term—must be given
for a proposition to be conceived, uttered, and interpreted. And there are
precious few ways in which an object can be given—that is, intentionally
focused on by the speaker and identifiable by the listener.
A traditional view holds that reference objects are identified in linguistic
interaction through the meaning of the predicate embedded in the referring
expression (mostly a noun phrase). Husserl writes (1900: 49):
. . . daß es also mit Recht heißt, der Ausdruck bezeichne (nenne) den Gegenstand
mittels seiner Bedeutung.
( . . . that it is, therefore, correct to say that the expression denotes (names) the object
by means of its meaning).
[FIGURE 11.1 The Aristotelian notion of proposition (prótasis): in thought/language, a predicate (kategoróumenon) is assigned to something for which Aristotle had no term (the later subject term); the whole is true or false, fact or fiction]
But if this is intended to imply that the meaning of the predicate embedded in
a definite NP suffices to identify the reference object (æ-value), it is wrong and,
in fact, rather naïve. The meaning of the embedded predicate in a definite NP
suffices to identify the reference object only if the object in question is unique
in the speaker–hearer world of experience, such as the sun or the moon. But
such cases are rare compared with those where the reference object needs ad
hoc situational and/or contextual information to be identified. Pointing is, of
course, one device, possibly combined with the use of a pronoun, but in the
vast majority of cases æ-values are established in that the cognitive-linguistic
context of utterance enables speaker and listener to home in on the intended
reference object. (One recalls from Section 9.5 that in some cases of primary
anaphora the truth of the utterance in question makes for the fixing of
reference.)
An Aristotelian proposition is thus the mental assignment of a specific
property to one or more objects that have been given enough salience to act as
reference object. When such a proposition is expressed linguistically, the
property is expressed by means of a predicate, while that to which the property
is assigned is expressed by what is now known as the subject term. This analysis
was adopted, uncritically one has to admit, by the earliest grammarians in the
Greek world and mistaken for what we now know as the grammatical or
syntactic analysis of sentences. Thus, the first syntactic analysis of sentential
structure consisted of a distinction between a subject term and a predicate,
which were seen as the linguistic counterparts of the Aristotelian something of
which something was said, respectively. Luckily for the earliest grammarians,
what they discovered was real, since there can be no doubt that syntactic
structure, with a subject term (along with a main verb and other argument
terms plus any number of adverbial modifiers), is real. But they were less
lucky in that they remained fixated on subject-predicate structure, failing
to see the roles of further subdivisions within what they considered the
‘predicate’.
1 All translations are my own.
what Von der Gabelentz had called logical, which for him was the grammatical
structure of the sentence (Meyer-Lübke 1899: 352):
I want to stress that ‘subject’ is used here in a purely grammatical sense, and
designates, therefore, the agent of the action. Admittedly, this goes against the original
meaning of this term, which, as one knows, originated in logic. From the point of view
of logic there can be no doubt that in the sentence il arrive deux étrangers [two
foreigners arrive] the subject is il arrive while deux étrangers is the predicate [. . .]. But
from the point of view of grammar the relation between Noun and Verb remains
unchanged, no matter which comes first in the sentence.
He was followed by Theodor Lipps, who introduced the notion that the
‘psychological’ predicate is in fact the answer to a question about the hypo-
keı́menon that has arisen in the current context (Lipps 1893: 40):
The grammatical subject and predicate of a sentence now agree, now do not agree, with
those of the judgement. When they do not, the German language has intonation as a
means of marking the predicate of the judgement. The subject and predicate of the
associated judgement are best recognised when we bring to mind the question to
which the sentence is an answer. That which the full and unambiguous question is
about is the subject, while the information required is the predicate. The same
sentence can, accordingly, serve to express different judgements, and hence different
subjects and predicates.
previously indeterminate. The subject is the previous qualification of the general topic
or universe of discourse to which the new qualification is attached. The subject is that
product of previous thinking which forms the immediate basis and starting-point of
further development. The further development is the predicate. Sentences are in the
process of thinking what steps are in the process of walking. The foot on which the
weight of the body rests corresponds to the subject. The foot which is moved forward
in order to occupy new ground corresponds to the predicate. [. . .] All answers to
questions are, as such, predicates, and all predicates may be regarded as answers to
possible questions. If the statement, ‘I am hungry’ be a reply to the question, ‘Who is
hungry?’ then ‘I’ is the predicate. If it be the answer to the question, ‘Is there anything
amiss with you?’ then ‘hungry’ is the predicate. If the question is, ‘Are you really
hungry?’ then ‘am’ is the predicate. Every fresh step in a train of thought may be
regarded as an answer to a question. The subject is, so to speak, the formulation of the
question; the predicate is the answer.
These are the questions that have led, in our new linguistics, to a kind of distinction
that has found a rather widespread acceptance, but which, in my eyes, has increased
rather than solved the confusion resulting from the mixing of logic, grammar, and
psychology. If we are to believe G. von der Gabelentz we should distinguish between a
logical, a grammatical and a psychological subject and predicate. The logical subject
and predicate keep the function they have in logic. The psychological subject is seen as
‘the representational complex that occurs first in the consciousness of speaker and
hearer’, while ‘the content that is added to this prior representation’ should be the
predicate. Or, as v.d. Gabelentz formulates it from the teleological point of view, the
psychological subject is ‘that about which the speaker wants the hearer to think, to
which he wants to direct his attention, while the psychological predicate consists of
that which the hearer should think about the subject’. [. . .]
When one says that the two sentences Caesar crossed the Rubicon and The Rubicon
was crossed by Caesar have the same logical subject but different grammatical subjects,
one has already lost sight of the notion of subject in the Aristotelian sense, namely as
that on which the assertion is based, and surreptitiously introduced a psychological
consideration, namely that the subject must be an agent. Obviously, the agent in both
sentences is Caesar. But only in the first sentence, and not in the second, is he the basis
on which the proposition is grounded. The former is an assertion about Caesar, the
latter about the Rubicon.
Although one may disagree with Wundt on several counts, he makes some
important points, such as the difference between the genesis and the sub-
stance of a propositional thought, and the necessity to create a separate
terminology for the grammatical distinction of subject and predicate on the
one hand and the ‘psychological’ distinction of ‘what comes to mind first’ on
the other.
As one sees from the quotations given, there was a great deal of confusion
about this issue around the turn of the century, and the parties involved were
unable to settle on an agreed solution. In fact, the confusion was such that
Theodor Kalepky exclaimed (1928: 20): ‘Such a confusion simply cries out for
relief ’ (Eine derartige Wirrnis schreit förmlich nach Abhilfe).2 After 1930 the
subject-predicate debate, which had dominated linguistic theorizing for al-
most a century, disappeared from the limelight, mainly due to the lack of
empirical support and the general unclarity of the issues concerned, but also
because the new structuralism in linguistics had different interests.
2 Kalepky belonged to a group of linguists who felt that a theory of grammar should be set up
without any notion of subject and predicate at all. Others belonging to this movement were Svedelius
(1897) and Sandmann (1954). This movement, however, petered out without leaving so much as a
trace.
Topic–comment modulation 385
The only place where the debate was continued was Prague, largely owing
to a tradition of loyalty to good work done by local scholars. Anton Marty, a
disciple of the German phenomenologist Franz Brentano and professor of
philosophy at Prague by the end of the nineteenth century, made important
contributions to the subject-predicate debate. According to him, logic de-
serves no place in semantics, all semantics being psychological. Besides an
abstract propositional meaning, every sentence has an ‘inner form’ which
expresses the way the propositional meaning is to be integrated into running
discourse. He follows Lipps, Stout, and others in saying that this ‘inner form’
is determined in principle by question–answer structure. Unlike Wundt, he
maintains that the terms subject and predicate are most appropriately used at
this ‘inner form’ level, since it is here that the Aristotelian meaning of these
terms is immediately applicable. Despite some unclarities, this makes a great
deal of sense, as will be clear in a moment.
Marty’s work was continued by the Czech scholar Vilém Mathesius, pro-
fessor of English at Prague University and founder, in 1926, of the Prague
Linguistic Circle. Mathesius followed Wundt in wishing to see a separate
terminology for subject and predicate on the level of grammatical analysis
on the one hand, and the ‘known-new’ distinction found to exist at a more
psychological level by Lipps, Stout and company on the other. Not wishing to
upset existing terminology, he felt that the terms subject and predicate should
go on being used in grammar, no matter what confusions had occurred in
recent literature, and proposed a new term pair for the Aristotelian distinc-
tion, which is realized at the ‘psychological’ level. For the latter he proposed a
Czech term pair that has been rendered variously as theme versus rheme, topic
(or focus) versus comment, the former pair member indicating the Aristotelian
hypokeímenon, the latter the Aristotelian predicate. The structure into which
both are combined is not called ‘proposition’ but the functional sentence
perspective (Mathesius 1939).
Although the question of the disparity between syntactic structure and
topic–comment modulation dominated all discussions about the nature of
language for well over half a century, it disappeared from the theoretical
agenda when the new structuralism made its appearance around 1930. This
was no doubt due to the fact that structuralism in linguistics, in particular the
American variety, was strongly focused on grammatical form and tried to
dispense with meaning as an object of ‘scientific’ enquiry altogether. In this
perspective, introspection-based talk about ‘what comes to the mind first’ and
things like that was considered unscientific and an improper intrusion of
phenomenological psychology into the much more ‘scientific’ arena of lin-
guistics and behaviourist psychology.
386 The Logic of Language
Needless to say, after 1960 the psychologists struck back and joined forces
with pragmaticists to develop a pragmatically oriented discipline of discourse
and text analysis, sometimes called ‘conversation analysis’, studying questions
of ‘information packaging’ and ‘information structure’ in the intuitive terms
of personal, introspective experiences. It is as if the practitioners of conversa-
tion analysis have turned their backs on formal grammar and even more on
everything to do with logic and the more formal aspects of meaning. They
represent extreme ecologism as described in Section 1.3.3 of Volume I. It is our
purpose here, in the context of discourse semantics, to redress the balance
somewhat and shed some light on the more formal and theoretical aspects of
topic–comment modulation.
3 For the contrast between English and and but, see, for example, Lakoff (1971), Bellert (1972),
Blakemore and Carston (2005). For the corresponding German contrast, one may consult Lang (1977),
Abraham (1991), Diewald and Fischer (1998), Fischer (2000). For a comparison between English and
German, see, for example, Asbach-Schnitker (1979).
4 An experiment (Van Kuppevelt 1991) showed that when test subjects read aloud the text of a real-life news bulletin, which did not contain the anticipated or implicit questions, the intonation patterns
did not differ from readings by the same subjects of the same text but with the implicit questions filled
in as anticipated questions.
5 The existential quantifier FEW appears to do the same and even more (see note 5 of Chapter 9). As
reported in Moxey and Sanford (1986/7), the introduction of a plural discourse address under FEW
systematically induces the setting-up not only of a plural address for the set delimited by FEW but also
for the complement of that set. Moxey and Sanford found that subjects presented with a sentence like
(i) would continue a subsequent sentence starting with They . . . in ways that made it clear that they
were referring to those students that were not at the meeting. A typical continuation would be (ii):
(i) Few students were at the meeting.
(ii) They had (all) gone out with their girl friends.
This strongly suggests that FEW systematically gives rise to an implicit question of the form ‘How
about the others?’, which would also naturally be part of the speaker’s text (in the ‘patronizing’ style
of speech mentioned above). Moxey and Sanford use the term complement anaphora for primary
anaphora to the complement address set up in virtue of that question arising.
philosophical question of what kind of ‘entity’ this is, but simply accept the
reification procedure underlying expressions such as the time of sinking. The
property assigned to this abstract, reified ‘entity’ is that it was in 1912.6
An important corollary of the assumption that TCM-structure mirrors an
ongoing question-answering game in discourse is that TCM-structure is a
powerful factor in reducing the effort involved in the interpretation of
utterances. Since the topic has already been through the interpretive grinder,
all that remains to be processed in the interpretation is the comment, which is
often just one constituent and sometimes even no more than one single word.
It is surprising that this obvious fact is taken into account so little in the
literature on parsing and the experiments related to it.
Now consider again the example of the sentence John sold the car, discussed
in Section 3.2 of Volume I. Suppose this sentence is uttered in a context where
the (implicit or explicit) question is Who sold the car? Then the answer
requires emphatic accent on John, as in (11.3c), or the corresponding cleft or
pseudocleft, as in (11.3a) and (11.3b), respectively:
(11.3) a. It is JOHN who sold the car.
b. The one who sold the car is JOHN.
c. JOHN sold the car.
We take it (Seuren 1998b) that (11.3a,b,c) have a common underlying SEMANTIC
ANALYSIS (SA) corresponding to the topic–comment modulation or TCM
structure (11.4), which is input to both the grammar of the language in
which the sentence is to be expressed and to the discourse-incrementation
procedure (more is said about the predicate Bev-ind below):
(11.4) Bev-ind JOHN (the x[x sold the car])
In (11.4), the grammatical subject term the x[x sold the car] is the topic and
the grammatical predicate Bev-ind JOHN is the comment, as shown in Figure
11.2 (repeated from Figure 7.8 in Chapter 7 of Volume I).
Assume for the moment that, in general, the surface subject term attracts a
grade-1 accent and that what is the predicate of a sentence at SA-level attracts
6 It would appear that quantification in the comment, as in It is not everyone that is granted the gift
of tongues, cannot be handled by the mechanism of underlying cleft structures other than in ways that
are formally so complex as to make their occurrence in language unlikely. Other mechanisms seem to
be at work here, for example, the mechanism of quotation. This would impose a metalinguistic
interpretation on the sentence just quoted, in the sense that everyone should be taken as the quoted
form ‘everyone’: ‘the use of the word “everyone” is inappropriate in the given context’. Note also that
someone or anyone are hardly usable in the sentence quoted: *It is not someone/anyone that is granted
the gift of tongues.
[FIGURE 11.2. SA tree for (11.4): S1 dominates the SPEECH-ACT OPERATOR <ASSERT> and S2; within S2, the COMMENT is the SA-predicate Bev-ind JOHN and the TOPIC is the subject NP, with Det the x over the Auxiliary System (PAST) and the Matrix-S sell(x, the car).]
7 Such lowering is, of course, subject to the well-known, possibly universal, ISLAND CONSTRAINTS. One
such island constraint forbids the lowering of the comment predicate into a relative clause, as in
(i). No constraint prevents such lowering in sentence (ii):
(i) *Not for JOHN but for BILL are those who work on strike.
(ii) √Not JOHN’s but BILL’s workers are on strike.
Since parallel observations can be made for the cleft structures (iii) and (iv), it seems that the island
constraints in question should be taken to apply to the relation between lambda-abstracted and non-
lambda-extracted pairs at SA-level:
(iii) *It is not for JOHN but for BILL that those who work are on strike.
(iv) √It is not JOHN’s but BILL’s workers that are on strike.
To what extent this might be taken to restrict the cognitive process of questioning in discourse is a
matter for further research.
predicate JOHN will carry its coding for grade-1 sentence-nuclear accent along
to the new Matrix position of surface subject term and that the grade-1 accent
for surface subjecthood is reinforced to a grade-2 accent. This grade-2 accen-
tual peak thus signals the fact that the constituent thus marked is the
comment SA-predicate of the corresponding TCM-structure (11.4)—that is,
the structure shown in Figure 11.2. The accentual peak is reinforced even
further when the comment SA-predicate is contrasted with another such
comment predicate.
One consequence of this analysis is that an elegant parallel can be drawn
between structures underlying WH-questions such as (11.6) and the
corresponding reply (11.4):
(11.6) Bev-ind WHO? (the x[x sold the car])
The open-place question predicate ‘WHO?’ indicates that a value is required in
this position. The answer (11.4) provides the value JOHN.
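This question–answer parallel can be made concrete in a few lines of code. The sketch below uses my own ad-hoc tuple encoding (the names question, answer, and answers_question are illustrative, not part of the book's formalism): a WH-question and its answer share the same topic term, and the answer's comment supplies a value for the open place WHO?.

```python
# SA of question (11.6): Bev-ind WHO? (the x[x sold the car])
question = ("Bev-ind", "WHO?", "the x[x sold the car]")
# SA of answer (11.4):   Bev-ind JOHN (the x[x sold the car])
answer = ("Bev-ind", "JOHN", "the x[x sold the car]")

def answers_question(ans, quest):
    """True iff ans resolves quest: same value-assigning predicate, same
    (already interpreted) topic term, and a real value for the open place."""
    q_pred, q_val, q_topic = quest
    a_pred, a_val, a_topic = ans
    return (q_pred == a_pred
            and q_topic == a_topic   # the topic is shared, already processed
            and q_val == "WHO?"      # the question has an open place
            and a_val != "WHO?")     # the answer fills it (here: JOHN)

assert answers_question(answer, question)
```

Only the comment (JOHN) is new material; the topic term has already been through the interpretive grinder, which is the processing advantage noted above.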
We now define the topic of a sentence as that element in a situation whose
specific identity is open to question, or as that parameter in a situation whose
value is being requested. The comment then provides the answer by specifying
the element (value) in question. Otto Jespersen already saw this parallel, as
appears from the following quote (Jespersen 1924: 145):
The subject is sometimes said to be the relatively familiar element, to which the
predicate is added as something new. ‘The utterer throws into his subject all that he
knows the receiver is already willing to grant him, and to this he adds in the predicate
what constitutes the new information to be conveyed by the sentence. [. . .] In “A is B”
we say, “I know that you know who A is, perhaps you don’t know also that he is the
same person as B” ’ (Baldwin’s Dict. of Philosophy and Psychol. 1902, Vol. 2.364). This
may be true of most sentences, but not of all, for if in answer to the question ‘Who
said that?’ we say ‘Peter said it’, Peter is the new element, and yet it is undoubtedly the
subject.
It thus makes sense to propose a separate structural analysis for the genesis
of a proposition, as opposed to its actual truth-conditional substance. The
process of genesis—that is, the progress from what has been established in the
discourse to what is added as new information—produces the TCM-struc-
ture, which contains, besides the truth-conditional substance, also informa-
tion about the process that gave rise to the proposition in question.
At this point we need some terminology. Let us call a merely truth-
conditionally presented proposition a flat proposition (fprop), while its
topic–comment modulated variants, each representing a particular history
of its genesis, will be called modulated propositions (modprop). A flat
[FIGURE. Speaker and Hearer are linked by the Question-Answer Structure; within the MIND, the modprop contains the fprop, which is related by the Truth Calculus to the Situation in the World; the Uttered sentence crosses the MIND–WORLD divide.]
c. √JOHN isn’t in the least interested, nót PETER.
(11.8) a. *NOT [Bev-ind JOHN (the x [Be in the least interested(x)])]
b. √Bev-ind JOHN (the x [NOT [Be in the least interested(x)]])
The answer lies in the fact that, in the corresponding SA, the NPI in the least
requires a negation immediately over the S-structure in which it occurs, as in
(11.8b), which corresponds to The one who isn’t in the least interested is John,
expressing (11.7c).8 (11.8a), however, is unwellformed already at SA-level,
because there is too much intermediate structure between the negation and
in the least, causing the ungrammaticality of (11.7b). These facts are thus
explained by the two assumptions (a) that the grammatical analysis of TCM
is provided by an underlying (SA-level) cleft structure of the type exemplified
in (11.4) and (11.8), and (b) that the NPI in the least requires a negation
immediately over its own clause in SA-structure. (The fact that both assump-
tions happen to be unpopular in certain schools of linguistics does not
diminish their explanatory power.)
Now consider the German examples (11.9) and (11.10) (repeated from note
6 in Section 7.2.1 of Volume I). In German, the use of the conjunction sondern
(meaning ‘but’) is strictly limited to comment-correction. It has a metalin-
guistic flavour in that it says that a comment that was given earlier is to
be replaced by a new comment, as in (11.9). It thus requires a comment
negation, with scope over the whole sentence, and not a negation that belongs
8 It is marginally possible for in the least to be separated from the negation by an intervening factive
verb taking the clause in which in the least occurs as an object complement, as in:
(i) ?She didn’t realize that I was in the least interested.
Cases like (ii) are explained by the rule of NEGATIVE RAISING, which takes the negation out of the
embedded clause in SA-structure and places it in construction with the commanding verb believe :
(ii) √She didn’t believe that I was in the least interested.
(11.9) Nicht HERBERT hat gelacht (sondern sein SOHN).
not Herbert has laughed (but his son)
‘It wasn’t HERBERT who laughed (but his son).’
9 Classical Attic Greek had a special set of (pristine Indo-European) third-person reflexive
pronouns (hou for the genitive, hoi for the dative, and he for the accusative) reserved for use in
embedded object clauses where an oblique term refers back to the subject of the main clause, as in
Platoi said that Crito had not listened to himi.
10 This form of non-reflexivity is not possible in, for example, Dutch, where *Ik haat mij is
ungrammatical and must be Ik haat mezelf, also when one wants to express the kind of self-
alienation expressed by (11.14a,b).
11 Significantly, Montague wrote (1970: 217):
We have taken the indefinite article ‘a’ as always indicating existential quantification, but in some situations it may also
be used universally, and indeed, in precisely the same way as ‘any’; such is the case with one reading of the ambiguous
sentence ‘a woman loves every man such that that man loves that woman’.
That is, Montague held that, apart from cases where it is used generically, the indefinite article always
expresses existential quantification. This implies, as he admits, that John is an American should be
analysed as ‘There is an x such that x is an American and such that John is identical with x’. Needless
to say, this analysis stands in no relation to linguistic reality. It just illustrates again the narrow
fixation on quantification at the expense of other, less studied, semantic categories.
Besides Bev-ind and Bev-cat, we also have Bev-val, which specifies the value of a
parameter for other than individuals or categories. For example, a sentence like
(11.21a), with the SA (11.21b), specifies the temperature of the room in question,
and (11.22a,b) specifies the cardinality of the set of John’s children:
(11.21) a. The temperature of the room is twelve degrees.
b. Bev-val twelve degrees the x[the temperature of the room is x]
(11.22) a. John has four children. / The number of John’s children is four.
b. Bev-val four the x[the cardinality of the set of John’s children is x]
The temperature parameter in (11.21) involves a function from objects
(possibly places) and times to temperature values. The number parameter
in (11.22) involves a function from sets to cardinality values. In similar fashion
cases can be analysed involving parameters for names, telephone numbers,
dates, etc.
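The workings of Bev-val can be mimicked computationally. In the toy sketch below (the dictionaries and the function bev_val are my own illustrative assumptions), a parameter is a function from objects to values, and Bev-val is true just in case the parameter yields exactly the specified value:

```python
temperature = {"the room": 12}                       # objects -> degrees
children = {"John": {"Tom", "Ann", "Sue", "Max"}}    # objects -> sets

def bev_val(value, parameter, obj):
    """SA-predicate Bev-val: the parameter's value for obj IS value."""
    return parameter(obj) == value

# (11.21a) The temperature of the room is twelve degrees.
assert bev_val(12, lambda o: temperature[o], "the room")

# (11.22a) John has four children: an exact cardinality specification,
# not an 'at least four' existential quantification.
assert bev_val(4, lambda o: len(children[o]), "John")
assert not bev_val(3, lambda o: len(children[o]), "John")  # exact, not 'at least'
```

On this reading, the sentence assigns a precise value to the cardinality function, rather than receiving an 'exactly' interpretation through pragmatic strengthening.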
As argued in Seuren (1993) and Scharten (1997), this shows the semantic,
nonpragmatic, nature of number specifications, generally mistaken for cases
of existential quantification. In both the semantic and the pragmatic litera-
ture, sentences like (11.22a) are analysed as existentially quantified, and are
accordingly taken to have the literal meaning ‘John has at least four children’,
which must then be changed into ‘John has precisely four children’ by means
of pragmatic principles that are generally too soft and not always justifiable.
The much more obvious reading in which, literally, a precise value is assigned
to the cardinality function for the set of John’s children is entirely neglected,
owing no doubt to the general neglect of parameters and value assignments in
standard formal semantics.12
The same goes for a question like Which month in the year has 28 days? The
most obvious answer is February, in the reading ‘the month of the year such
that the number of days it has is 28 is February’. But the answer All months
have 28 days is also justifiable, though clearly less probable. For some strange
reason, the latter answer is generally thought to be more ‘logical’ than the
former.
One notes that (11.21a) and (11.22a) are both analysed as TCM-structures.
Yet they have no corresponding cleft or pseudocleft surface structures. Sen-
tences like (11.23a,b) are highly artificial, if they are grammatical at all:
12 A Bev-analysis will also help out on the problem, raised in, for example, Donnellan (1966),
Kripke (1972, 1980), and Neale (1990), of the reference function for the Pope in sentences like The Pope
was Polish in 2000 but is German now, which does not imply that the Pope has changed nationality. The
sentence is now read as ‘the x such that x was the value of the parameter [Bev-cat the Pope] in 2000
was Polish but the x such that x is the value of the parameter [Bev-cat the Pope] now is German’.
(11.23) a. ?*It is twelve degrees that the temperature of the room is.
b. ?*It is four children that John has./?*It is four that the number of
John’s children is.
This is no doubt because sentences like (11.21a) and (11.22a) already possess a
value-assigning topic–comment structure, providing a value on a lexicalized
parameter such as temperature, price, or number under their SA-predicate
Bev-val. Although it is technically possible to topicalize again an element out
of such structures, leading to ‘double topicalization’, the result will be felt as
unnatural—though languages may well differ in this respect.
In general, it must be observed that, but for a few notable exceptions,
existing grammatical as well as formal semantic theories almost totally neglect
constructions involving the assignment of values to given parameters. This
means that the whole area of measurable gradable adjectives like broad, deep,
high, heavy, far, hot, old, etc., along with measure predicates like weigh, cost,
span, contain, and so on, has been left virtually untouched, which is a serious
handicap for the integration of topic–comment structure into an overall
theory of language.
13 I am indebted to Larry Horn, THE expert on only, even, and Neg-Raising, for useful critiques and
comments.
I follow the standard semantic analysis of the operator only, which says that
only presupposes the truth of the argument-S and asserts that no other entity
satisfies the main predicate of the argument-S. This analysis appears to
provide the best fit to the available facts (pace those analyses that propose a
laxer, purely existential presupposition ‘someone laughed’ for (11.24a)). Only
JÓHN laughed, as in (11.24a) is thus taken to presuppose that John laughed and
to assert that nobody else did. Sentence (11.24a) is analysed at SA-level as
(11.24b), intuitively paraphrased as (11.24c):14
(11.24) a. Only JÓHN laughed.
b. only [Bev-ind JOHN (the x [Past [laugh (x)]])]
14 For details regarding the grammatical theory employed, see Seuren (1996).
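The analysis of only just adopted lends itself to a small model-theoretic sketch. The encoding is my own, not the book's formal apparatus (trivalence is crudely represented by returning None on presupposition failure): only presupposes that the entity satisfies the predicate and asserts that no other member of the domain does.

```python
def only(entity, extension, domain):
    """'Only ENTITY P-ed': presupposes entity is in P's extension;
    asserts that no other member of the domain is."""
    if entity not in extension:       # presupposition failure
        return None                   # no classical truth value
    return all(x == entity or x not in extension for x in domain)

domain = {"John", "Mary", "Bill"}
assert only("John", {"John"}, domain) is True           # nobody else laughed
assert only("John", {"John", "Mary"}, domain) is False  # Mary laughed too
assert only("John", {"Mary"}, domain) is None           # John didn't laugh
```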
15 The problem of only some, which clearly excludes all, is solved, it seems, by the assumption, made and developed in Chapter 3, that the meaning of some is rooted in basic-natural predicate logic, where some excludes all.
16 Larry Horn pointed out to me that sentence-initial not even constitutes a counterexample (the
only one that has come to light so far) to my generalization (Section 10.4) that the negation in
noncanonical position is per se presupposition-preserving (see note 8 in Chapter 10).
NP-node. This leaves the SA-tree (11.31a), which then undergoes the standard
cyclic treatment, whereby the marked accent on JÓHN is preserved throughout.
(11.30) a. Even JÓHN laughed.
b. even [⊃ JOHN (the x [Past [laugh (x)]])]
(11.31) a. even [Past [laugh (JOHN)]]
b. even [not [Past [laugh (JOHN)]]]
laugh, or, perhaps more idiomatically, Not even JÓHN laughed, presupposes a
class of non-laughers and asserts that John is an unexpected or unlikely
member of thát class (cf. Horn 1989: 151). Semantically, therefore, the negation
in Not even JÓHN laughed belongs in the S-structure of the subject-NP of the
predicate [3 John] and is not a negation creating the contradiction of its
nonnegated counterpart.
But how on earth does the negation in Not even JÓHN laughed land in that
position, while it originates as a negation internal to the subject term of the
SA-predicate [3 John]? What grammatical sorcery has been going on here?
Let us note first that English is well-nigh unique in having the negation in
sentence-initial position in sentences of this nature. Italian comes close in that
it has anche or anzi for ‘even’ and neanche (also (nem)manco or nemmeno) for
‘not even’. But French says Même JEAN n’a pas ri, while German says Sogar
JOHANN hat nicht gelacht and Dutch has Zelfs JAN heeft niet gelachen—all with
the negation in construction with the finite verb.
Given a structure like (11.31b), one is tempted to think of what is known as
the process of NEG-RAISING (NR), much maligned in the (pragmatically orient-
ed) literature yet, in my view, hard to deny as a real grammatical process
found in the grammars of many languages.17 In simple terms, NEG-RAISING is
the phenomenon that a negation which semantically belongs in the scope of a
predicate ends up in surface structure to the left of that predicate and thus
taking scope over it. Well-known examples are sentences like I don’t think he’ll
make it, which does not mean, literally, ‘it is not the case that I think that he
will make it’ but rather ‘I think that he will not make it’. Likewise for a
sentence like I don’t want to die, which does not mean what it says but rather
‘I want not to die’. Nineteenth-century normative grammars told one to avoid
such constructions because they ran counter to logic!
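As a rough illustration, NEG-RAISING can be stated as a structural rule over SAs encoded as nested tuples. Both the encoding and the list of NR-inducing predicates below are my own simplifications, not the book's grammar:

```python
NR_PREDICATES = {"think", "believe", "want", "expect"}  # assumed NR-inducers

def neg_raise(sa):
    """pred(subj, not(P)) => not(pred(subj, P)) for NR-predicates."""
    pred, subj, comp = sa
    if pred in NR_PREDICATES and isinstance(comp, tuple) and comp[0] == "not":
        return ("not", (pred, subj, comp[1]))
    return sa

# 'I think he will not make it' surfaces as 'I don't think he'll make it':
sa = ("think", "I", ("not", "he will make it"))
assert neg_raise(sa) == ("not", ("think", "I", "he will make it"))

# 'know' is not an NR-predicate, so the negation stays put:
assert neg_raise(("know", "she", ("not", "p"))) == ("know", "she", ("not", "p"))
```

Crucially, the rule changes structure without changing meaning, which is what allows the NPI facts below to be stated at SA-level.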
It must be observed that NR is less likely to occur in cases where the that-
clause is the comment in a TCM-structure. For example, when the discourse
has given rise to the (implicit or explicit) question ‘What does Harry believe?’,
then the appropriate answer form is rather one without NR, such as Harry
believes that the crisis will not be over next month. An idiomatic instance, as
was once pointed out to me by my Amsterdam colleague Wim Klooster, is
I thought you’d never come, rather than I never thought you’d come.
Horn argues (1971; 1978; 1989: 330–61) extensively that NR-phenomena are
the result of pragmatic factors such as politeness strategies, understatement,
hedging, or irony. In polite society, one prefers to avoid direct clashes of
17 For an admirable and well-nigh complete survey of the entire complex question of NR and the
literature pertaining to it, see Horn (1978; 1989: 308–30).
opinion and thus expresses oneself rather in the weaker terms of ‘I don’t
believe that . . .’ than in the crass terms of ‘I believe that not . . . ’. One prefers
to say She is unlikely to come, rather than She is likely not to come, or This is not
good rather than This is bad, etc. etc. (read Horn 1989: it’s very well written).
I fully agree with Horn to the extent that he seeks a pragmatic origin for NR: it
seems to me that his arguments are strong and convincing. I also agree with
his conclusion (Horn 1978: 215–16):
. . . that NR originates as a functional device for signalling negative force as early in a
negative sentence as possible. [. . .] NR must be regarded as a rule in the synchronic
grammar of English and other languages. NR would thus constitute an example of a
pragmatic process which has become grammaticized or syntacticized.18
With Horn, I argue that NR has indeed become part of the grammar and
lexicon of English and many, perhaps all, natural languages. This appears
from a number of facts. First, one sees that NR is, at the level of linguistic
description and not at the noncommittal pragmatic level, associated with
different verbs in different languages. Thus, English hope and its Italian
counterpart sperare do not induce NR but their French, Dutch, and German
cousins do.
Then, as shown in Seuren (2004: 178–81; see also the discussion around
examples (7.19–7.22) in Chapter 7 of Volume I), certain Negative Polarity
Items (NPIs), such as in the least or yet or the slightest, are licensed only when
occurring in the immediate scope of negation:
(11.32) a. √She knows that John hasn’t arrived yet.
b. *She doesn’t know that John has arrived yet.
(11.33) a. √Many people are not in the least interested.
b. *Not many people are in the least interested.
18 The same is often found for grammatical categories that find their origin in natural phenomena
but have been extended to becoming autonomous grammatical categories. Grammar often gets
‘started up’ by what takes place in the world or in communicative situations and then ‘takes over’ in
its own right. Consider, for example, grammatical gender distinctions. To the extent that they
correspond with natural gender, the grammatical gender of nouns is almost entirely predictable: la
femme, die Frau (both ‘the woman’) are predictably feminine (though German also has the neuter
nouns das Weib (‘the woman’, used derogatorily) and das Mädchen (‘the girl’), which is a diminutive).
But this natural motivation is no longer valid when these gender distinctions are applied to words
denoting objects that simply have no natural gender. Thus French has the masculine le soleil for ‘the
sun’, but German has the feminine die Sonne, and, ironically, the French and German words for ‘moon’
are the feminine la lune and the masculine der Mond, respectively. Speculations about totally different
French and German ‘popular spirits’ were rife in the nineteenth century but have been abandoned
now.
(11.34) a. √Many people don’t show the slightest interest.
b. *Not many people show the slightest interest.
Yet we see that these three NPIs occur naturally and without a hint of
ungrammaticality in sentences like:
(11.35) a. √I don’t think John has arrived yet.
b. √I don’t expect John to be in the least interested.
c. √Don’t expect John to take the slightest interest in your work.
Such facts are readily explained when one assumes SA-forms where the
negation stands over the embedded clauses and not over the main sentence
and one accepts NR as an automatic grammatical process not affecting the
semantics of the sentences involved. But they are hard to explain in purely
pragmatic terms. One would have to present a purely pragmatic explanation
of the occurrence restrictions of NPIs—a feat that has not so far been
achieved. As in the case of NR, the very phenomenon of NPIs may have
had a pragmatic origin, but their subsequent grammaticalization is hard to
deny.
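The explanation can be made concrete with a toy licensing check, again in my own ad-hoc encoding: an NPI is licensed only if, at SA-level (that is, before NR applies), a negation stands immediately over the clause containing it.

```python
def npi_licensed(sa, npi_clause):
    """Walk down the nested SA; the NPI's clause must be the immediate
    argument of a 'not' operator."""
    if isinstance(sa, tuple):
        operator, innermost = sa[0], sa[-1]
        if operator == "not" and innermost == npi_clause:
            return True
        return npi_licensed(innermost, npi_clause)
    return False

# (11.35a) SA: think(I, not(John has arrived yet)) -- licensed:
assert npi_licensed(("think", "I", ("not", "John has arrived yet")),
                    "John has arrived yet")

# (11.32b) SA: not(know(she, John has arrived yet)) -- the negation is not
# immediately over the NPI clause, hence the ungrammaticality:
assert not npi_licensed(("not", ("know", "she", "John has arrived yet")),
                        "John has arrived yet")
```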
The advantage of the grammaticalization view is that the application range
of NR can now be extended to cover less obvious cases. One may think, for
example, of the propositional operator AND as a Neg-Raiser, whereby and is
converted into its counterpart or. AND (NOT-P, NOT-Q) thus becomes NOT (OR
(P,Q)), as in (11.36a), which becomes (11.36b):
(11.36) a. He doesn’t like planes and he doesn’t like trains.
b. He doesn’t like planes or trains.
It has been noted repeatedly in previous chapters that the converse does not
work:
(11.37) a. He doesn’t like planes or he doesn’t like trains.
b. !!He doesn’t like planes and trains.
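Truth-conditionally, the conversion of AND (NOT-P, NOT-Q) into NOT (OR (P,Q)) is simply De Morgan's law, and a brute-force truth table confirms that the converse pattern in (11.37) is logically just as valid; the asymmetry is thus a grammatical fact about NR, not a logical one. A quick check (illustrative code, not from the book):

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    # (11.36): AND(NOT-P, NOT-Q) is truth-conditionally NOT(OR(P, Q)).
    assert ((not p) and (not q)) == (not (p or q))
    # The pattern of (11.37), OR(NOT-P, NOT-Q) = NOT(AND(P, Q)), is equally
    # valid logically, yet English does not grammaticize NR in that direction.
    assert ((not p) or (not q)) == (not (p and q))
# Both equivalences hold on all four valuations.
```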
This also explains, in principle, the plural were in a sentence like
(11.38a), derived from an underlying (11.38b). One notes that (11.38c) is
ungrammatical:
(11.38) a. I don’t think John or Harry were late.
b. I don’t think John was late and I don’t think Harry was late.
c. *John or Harry were late.
In fact, English so far appears to be a Neg-Raiser, as we say not so far achieved rather than so far not achieved, which is what it means. In like manner,
we now posit that English even is a Neg-Raiser (in accordance with Horn 1971:
132). This allows us to posit that an underlying (11.31b), repeated here
as (11.39a), is transformed into (11.39b), after which the syntactic Cycle
is free to operate.
(11.39) a. even [not [Past [laugh (JOHN)]]]  ⇒(NR)  b. not [even [Past [laugh (JOHN)]]]
Here it is not possible for the one to be true while the other is false, even
though, of course, the two sentences differ as regards their anchoring condi-
tions for given discourse domains.
Yet, as was pointed out in Section 10.4, substitution salva veritate of truth-
conditionally equivalent TCM-structurings is again blocked under sentential
negation. Here, the truth-conditional difference resides in the presuppositional
truth conditions, as appears from the following examples:
(11.46) a. JÓHN hasn’t sold the car: he is a figment of Ann’s mind. PETER did.
b. !! JÓHN hasn’t sold the car: there never wás a car.
(11.47) a. John hasn’t sold the CÁR: there never wás a car. He sold the
SPEEDBOAT.
b. !!John hasn’t sold the CÁR: he is a figment of Ann’s mind.
Clearly, these observations are in full agreement with our analysis, according
to which the constituent under a grade-2 accentual peak is the underlying SA-
predicate Bev-ind, which induces no existential presupposition with regard to
the value specified. But non-accented constituents in the Matrix-S are subject
to the normal preconditions holding for the Matrix-predicate.
The conclusion is, therefore, that TCM is not merely a pragmatic
phenomenon but does contribute to sentence meaning. This again means
that type-level discourse-incremental properties of sentences cannot simply
be relegated to pragmatics but must be considered to be part of a semantic
theory of natural language.
Bibliography
ABRAHAM, W. (1991) ‘Discourse particles in German: How does their illocutionary force
come about?’, in W. Abraham (ed.), Discourse Particles: Descriptive and Theoretical
Investigations on the Logical, Syntactic, and Pragmatic Properties of Discourse Particles
in German. Benjamins, Amsterdam/Philadelphia: 203–52.
ALFORD, M. (1995) ‘Isaac Newton: the first physicist’. http://www.physics.wustl.edu/~alford/
newton.html
ANDRADE MARTINS, S. (2004) Fonologia e Gramática Dâw. 2 vols. LOT, Utrecht.
ASBACH-SCHNITKER, B. (1979) ‘Die adversativen Konnektoren aber, sondern und but nach
negierten Sätzen’, in H. Weydt (ed.), Die Partikeln der deutschen Sprache. De Gruyter,
Berlin-New York: 457–68.
BARWISE, J. and R. COOPER (1981) ‘Generalized quantifiers and natural language’, Linguistics
and Philosophy 4/2: 159–219.
—— and J. PERRY (1983) Situations and Attitudes. MIT Press, Cambridge, MA.
BÄUERLE, R., U. EGLI, and A. VON STECHOW (eds) (1979), Semantics from Different Points of View.
Springer, Berlin-Heidelberg-New York.
BELLERT, I. (1972) ‘On certain syntactical properties of the English connectives and and but ’,
in S. Ploetz (ed.), Transformationelle Analyse. Die Transformationstheorie von Zellig
Harris und ihre Entwicklung. Athenaeum, Frankfurt: 327–56.
BLAKEMORE, D. and R. CARSTON (2005) ‘The pragmatics of sentential coordination with and ’,
Lingua 115: 569–89.
BLANCHÉ, R. (1966) Structures intellectuelles. J. Vrin, Paris.
BOCHEŃSKI, I. M. (1956) Formale Logik. Orbis Academicus, Freiburg/Munich.
BOËR, S. E. and W. G. LYCAN (1976) ‘The myth of semantic presupposition’, Indiana
University Linguistics Club.
BOLINGER, D. (1972) ‘Accent is predictable (if you’re a mind-reader)’, Language 48/3: 633–44.
BORKOWSKI, L. (1970) Jan Łukasiewicz: Selected Works. North-Holland, Amsterdam.
BROUWER, L. E. J. (1966) ‘Life, art, and mysticism’, Notre Dame Journal of Formal Logic 37:
389–429. (Transl. by W. P. van Stigt of L. E. J. Brouwer, Leven, Kunst en Mystiek, 1905.)
BROWN, L. and M. S. DRYER (2008) ‘The verbs for “and” in Walman, a Torricelli language of
Papua New Guinea’, Language 84/3: 528–65.
BUCKNER, E. (2007) ‘The fourth corner’, paper read at the international congress
‘The Square of Opposition’, Montreux, Switzerland, June 1–3, 2007.
http://maverickphilosopher.powerblogs.com/posts/1177106402.shtml.
BURLEIGH, W. (1988) Von der Reinheit der Kunst der Logik. Erster Traktat. Von den
Eigenschaften der Termini. (De puritate artis logicae. De proprietatibus terminorum).
Translated and edited by Peter Kunze, with introduction and commentary. Felix Meiner,
Hamburg.
—— (1997) ‘All John’s children are as bald as the king of France: existential import and the
geometry of opposition’, in Papers from the 33rd Regional Meeting of the Chicago
Linguistic Society 1997. Papers from the Main Session. Chicago Linguistic Society,
Chicago, Illinois: 155–79.
—— (2000) ‘From if to iff: conditional perfection or pragmatic strengthening’, Journal of
Pragmatics 32/2: 289–326.
HUSSERL, E. (1900) Logische Untersuchungen. Vol 2: Untersuchungen zur Phänomenologie
und Theorie der Erkenntnis. Teil 1. Niemeyer, Halle.
ISARD, S. (1975) ‘Changing the context’, in E. L. Keenan (ed.), Formal Semantics of Natural
Language. Cambridge University Press, Cambridge: 287–96.
JANSSEN, TH. A. J. M. (1976) Hebben-konstrukties en indirekt-objektskonstrukties. HES
Publishers, Utrecht.
JASPERS, D. (2005) Operators in the Lexicon. On the Negative Logic of Natural Language. LOT,
Utrecht.
JESPERSEN, O. (1917) Negation in English and Other Languages. Det Kgl. Danske
Videnskabernes Selskab, Historisk-filologiske Meddelelser I,5. Andr. Fred. Høst and
Søn, Copenhagen.
—— (1924) The Philosophy of Grammar. Allen and Unwin, London.
JOHNSON-LAIRD, P. N. (1986) ‘Conditionals and mental models’, in Traugott et al. (eds):
55–75.
KALEPKY, TH. (1928) Neuaufbau der Grammatik als Grundlegung zu einem wissenschaftlichen
System der Sprachbeschreibung. Teubner, Leipzig.
KAMP, H. (1981) ‘A theory of truth and semantic interpretation’, in J. A. G. Groenendijk, Th.
M. V. Janssen, and M. B. J. Stokhof (eds), Formal Methods in the Study of Language. Vol.
I. Mathematisch Centrum, Amsterdam: 277–322.
—— and U. REYLE (1993) From Discourse to Logic. Introduction to Model-Theoretic
Semantics of Natural Language, Formal Logic and Discourse Representation Theory.
Kluwer, Dordrecht.
KARTTUNEN, L. (1973) ‘Presuppositions of compound sentences’, Linguistic Inquiry 4/2:
169–93.
KEENAN, E. L. (2003) ‘The definiteness effect: semantics or pragmatics?’, Natural Language
Semantics 11: 187–216.
KING, P. (2005) ‘Ockham’s Summa Logicae’, in J. Shand (ed.), Central Works of Philosophy.
Volume I. Acumen Publishing, Chesham: 242–69.
KIPARSKY, P. and C. KIPARSKY (1971) ‘Fact’, in D. D. Steinberg and L. A. Jakobovits (eds),
Semantics. An Interdisciplinary Reader in Philosophy, Linguistics and Psychology.
Cambridge University Press, Cambridge: 345–69.
KLEENE, S. C. (1938) ‘On notation for ordinal numbers’, Journal of Symbolic Logic 3: 150–5.
—— (1952) Introduction to Metamathematics. North-Holland, Amsterdam.
KLIMA, G. (1988) Ars Artium. Essays in Philosophical Semantics, Mediaeval and Modern.
Doxa Library. Institute of Philosophy, Hungarian Academy of Sciences, Budapest.
KNEALE, W. and M. KNEALE (1962) The Development of Logic. Clarendon Press, Oxford.
KRATZER, A. (1979) ‘Conditional necessity and possibility’, in Bäuerle et al. (eds): 117–47.
KRIPKE, S. (1972) ‘Naming and necessity’, in D. Davidson and G. Harman (eds), Semantics of
Natural Language. Reidel, Dordrecht: 253–355.
—— (1980) Naming and Necessity. Blackwell, Oxford (= Kripke 1972).
LABOV, W. (1972) Language in the Inner City. Studies in Black English Vernacular. University
of Pennsylvania Press, Philadelphia.
LAKOFF, R. (1971) ‘If ’s, and’s, and but’s about conjunctions’, in Fillmore and Langendoen
(eds): 114–49.
LANG, E. (1977) Semantik der koordinativen Verknüpfung. Akademie-Verlag, Berlin.
LANGENDOEN, D. T. and H. B. SAVIN (1971) ‘The projection problem for presuppositions’, in
Fillmore and Langendoen (eds): 55–60.
LENZEN, W. (2008) ‘Ploucquet’s “refutation” of the traditional Square of Opposition’, Logica
Universalis 2: 43–58.
LEVINSON, S. C. (2000) Presumptive Meanings. The Theory of Generalized Conversational
Implicature. MIT Press, Cambridge MA.
LEWIS, D. (1979) ‘Scorekeeping in a language game’, in Bäuerle et al. (eds): 172–87.
LEWIS, G. L. (1984) Turkish Grammar. Oxford University Press, Oxford.
LIPPS, TH. (1893) Grundzüge der Logik. Dürr, Leipzig.
LÖBNER, S. (1990) Wahr neben Falsch. Duale Operatoren als die Quantoren natürlicher
Sprache. Niemeyer, Tübingen.
LONDEY, D. and C. JOHANSON (1987) The Logic of Apuleius: Including a Complete Latin Text
and English Translation of the Peri Hermeneias of Apuleius of Madaura. Brill, Leiden.
ŁUKASIEWICZ, J. (1934) 'Z historii logiki zdań', Przegląd Filozoficzny 37: 417–37. [German
transl.: ‘Zur Geschichte der Aussagenlogik’, Erkenntnis 5 (1935): 111–31. English transl.:
‘On the history of the logic of propositions’, in McCall (1967: 66–87); reprinted in:
Borkowski (1970: 197–217).]
MCCALL, S. (1967) Polish Logic 1920–1939. Oxford University Press, Oxford.
MCCAWLEY, J. D. (1967) ‘Meaning and the description of languages’, Kotoba no Uchu 2/9:
10–18; 2/10: 38–48; 2/11: 51–7. (Also in McCawley 1973: 99–120.)
—— (1972) ‘A program for logic’, in D. Davidson and G. Harman (eds), Semantics of
Natural Language. Reidel, Dordrecht: 498–544.
—— (1973) Grammar and Meaning. Papers on Syntactic and Semantic Topics. Taishukan,
Tokyo.
—— (1981) Everything that Linguists have Always Wanted to Know about Logic* *but were
ashamed to ask. Blackwell, Oxford.
MCLEOD, E. (1971) Héloı̈se. A Biography. Chatto and Windus, London.
MATHESIUS, V. (1928) ‘On linguistic characterology with illustrations from modern English’,
in Actes du Premier Congrès International de Linguistes à La Haye. Reprinted in: J. Vachek
(ed.), A Prague School Reader in Linguistics. Indiana University Press, Bloomington,
Indiana (1964): 59–67.
—— (1939) 'O tak zvaném aktuálním členění větném' (On the so-called functional
sentence perspective), Slovo a Slovesnost 5: 171–4.
MEISER, C. (1880) Anicii Manlii Severini Boetii commentarii in librum Aristotelis Perì
Hermēneías. Pars posterior. Editio secunda. Teubner, Leipzig.
MEYER-LÜBKE, W. (1899) Romanische Syntax (Grammatik der romanischen Sprachen III).
Reisland, Leipzig.
MIGNUCCI, M. (1983) ‘La teoria della quantificazione del predicato nell’antichità classica’,
Anuario Filosófico de la Universidad de Navarra 16: 11–42.
MONTAGUE, R. (1970) ‘English as a formal language’, in B. Visentini (ed.), Linguaggi nella
società e nella tecnica. Edizioni di Comunità, Milan: 189–223.
—— (1973) ‘The proper treatment of quantification in ordinary English’, in K. J. J.
Hintikka, J. M. E. Moravcsik, and P. Suppes (eds) Approaches to Natural Language.
Proceedings of the 1970 Stanford Workshop on Grammar and Semantics. Reidel,
Dordrecht: 221–42.
MOODY, A. E. (1953) Truth and Consequence in Mediæval Logic. North-Holland,
Amsterdam.
MOSTOWSKI, A. (1957) 'On a generalization of quantifiers', Fundamenta Mathematicae 44:
12–36.
MOXEY, L. M. and A. J. SANFORD (1986/7) ‘Quantifiers and focus’, Journal of Semantics 5:
189–206.
MULLALLY, J. P. (1945) The Summulæ Logicales of Peter of Spain. (Publications in Medieval
Studies VIII) The University of Notre Dame Press, Indiana.
NEALE, S. (1990) Descriptions. MIT Press, Cambridge, MA.
NUCHELMANS, G. (1973) Theories of the Proposition. Ancient and Medieval Conceptions of the
Bearers of Truth and Falsity. North-Holland, Amsterdam.
PARSONS, T. (1997) ‘The traditional Square of Opposition. A biography’, Acta Analytica 18:
23–49.
—— (2006) ‘The traditional Square of Opposition’, in E. N. Zalta (ed.), The Stanford
Encyclopedia of Philosophy (October 1, 2006 revision) http://plato.stanford.edu/entries/
square/.
—— (2008) ‘Things that are right with the traditional Square of Opposition’, Logica
Universalis 2: 3–11.
PEIRCE, CH. S. (1974) Collected Works. Vol. II: Elements of Logic. Edited by Charles
Hartshorne and Paul Weiss. Harvard University Press, Cambridge, MA.
PETERS, S. and D. WESTERSTÅHL (2007) Quantifiers in Language and Logic. Oxford University
Press, Oxford.
PICA, P., C. LEMER, V. IZARD, and S. DEHAENE (2004) ‘Exact and approximate arithmetic in an
Amazonian indigene group’, Science 306 (October 2004): 499–503.
PICKERING, M. J. and S. C. GARROD (2004) ‘Toward a mechanistic psychology of dialogue’.
Behavioral and Brain Sciences, 27/2: 169–225.
QUINE, W. V. O. (1952) Methods of Logic. Routledge and Kegan Paul, London.
—— (1953) From a Logical Point of View. Harvard University Press, Cambridge, MA.
—— (1960) Word and Object. MIT Press, Cambridge, MA.
REINHART, T. (1983) Anaphora and Semantic Interpretation. Croom Helm, London.
RESCHER, N. (1969) Many-valued Logic. McGraw-Hill, New York.
ROSCH, E. (1975) ‘Cognitive representations of semantic categories’, Journal of Experimental
Psychology: General 104: 192–233.
SULLIVAN, M. W. (1967) Apuleian Logic. The Nature, Sources, and Influence of Apuleius’s Peri
Hermeneias. North-Holland, Amsterdam.
SVEDELIUS, C. (1897) L’analyse du langage appliquée à la langue française. Almqvist and
Wiksell, Uppsala.
TASMOWSKI-DE RYCK, L. and S. P. VERLUYTEN (1982) ‘Linguistic control of pronouns’, Journal
of Semantics 1/4: 323–46.
THOMPSON, M. (1953) ‘On Aristotle’s Square of Opposition’, The Philosophical Review 62/2:
251–65.
TRAUGOTT, E. C., A. TER MEULEN, J. SNITZER REILLY, and CH. A. FERGUSON (eds) (1986) On
Conditionals. Cambridge University Press, Cambridge.
VAN DALEN, D. (1999) Mystic, Geometer, and Intuitionist: The Life of L. E. J. Brouwer. Vol. 1,
The Dawning Revolution. Oxford University Press, Oxford.
VAN DER AUWERA, J. (1998) ‘Pragmatics in the last quarter-century: the case of conditional
perfection’, Journal of Pragmatics 27/3: 261–74.
VAN DER SANDT, R. A. (1992) ‘Presupposition projection as anaphora resolution’, Journal of
Semantics 9/4: 333–77.
VAN FRAASSEN, B. (1971) Formal Semantics and Logic. Macmillan, New York-London.
VAN KUPPEVELT, J. C. J. (1991) ‘Topic en Comment. Expliciete en Impliciete Vraagstelling in
Discourse’, Ph.D. thesis, Radboud University, Nijmegen.
VAN OIRSOUW, R. R. (1987) The Syntax of Coordination. Croom Helm, London.
VELTMAN, F. (1985) ‘Logics for Conditionals’, Ph.D. thesis, University of Amsterdam.
VON DER GABELENTZ, H. G. C. (1869) ‘Ideen zu einer vergleichenden Syntax. Wort- und
Satzstellung’, Zeitschrift für Völkerpsychologie und Sprachwissenschaft 6: 376–84.
—— (1891¹; 1901²) Die Sprachwissenschaft. Ihre Aufgaben, Methoden und bisherigen
Ergebnisse. Leipzig: Tauchnitz.
WEGENER, C. (2008) A Grammar of Savosavo, a Papuan Language of the Solomon Islands.
Ph.D. thesis, Radboud University, Nijmegen. (= MPI Series in Psycholinguistics, 51).
Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH Nijmegen, The
Netherlands.
WEGENER, PH. (1885) Untersuchungen über die Grundfragen des Sprachlebens. Niemeyer,
Halle. [Reprinted 1991, with an introduction by Clemens Knobloch. Benjamins,
Amsterdam-Philadelphia.]
WEIDEMANN, H. (1994) Aristoteles’ Peri Hermeneias. Uebersetzt und erläutert von Herman
Weidemann. Akademie-Verlag, Berlin.
WEIJTERS, A. (1985) ‘Presuppositional propositional calculi’, Appendix to Seuren (1985:
483–525).
—— (1989) ‘Denotation in Discourse: Analysis and Algorithm’, Ph.D. thesis, Radboud
University, Nijmegen.
WHITEHEAD, A. N. and B. RUSSELL (1910–1913) Principia Mathematica. 3 vols. Cambridge
University Press, Cambridge.
WIERZBICKA, A. (1996) Semantics. Primes and Universals. Oxford University Press, Oxford.
WILSON, D. (1975) Presuppositions and Non-Truth-Conditional Semantics. Academic Press,
London-New York-San Francisco.
Index

entailment 7–13, 27, 86, 331–2 et passim
  C-entailment 330–4
  logical 10–13
  natural 10–12
  P-entailment 330–4
epithet
  pronoun 290–1, 376–7
  substitution test 290–2, 296–300
Eubulides of Miletus 312–14, 316
Euclid 68
Euripides 217
Evans, J. 43
existence see actual being
existential import 14–15, 124–7, 150, 158–62, 165–70, 173, 235–6
  origin of 175(n), 366–7
  undue 14, 93, 122–5, 132–3, 136, 145, 149–50, 154, 158–9, 162, 173, 190, 363–4, 367, 370
exjunction 84, 112
exponibles 314–16
extension of predicates 322, 329
extreme values 70, 75, 183–6, 193
falsification 102
falsity
  minimal 15, 22, 32, 97(n), 125, 354 et passim
  radical 15, 22, 97(n), 125, 305–6, 326, 354 et passim
Fauconnier, G. 287
feeder 386–7
Fibonacci numbers 171
filter 210–11, 343
Finno-Ugric languages 39(n)
Fisher, K. 387(n)
Fodor, J. 198, 226
formalism 3, 199
formalization 3
Fowler, T. 5, 166
free variables see open parameters
Frege, G. 15, 41, 44–5, 123, 202, 236, 269, 314, 316, 321–6, 334, 354, 372–3, 395, 406
French 59, 394, 400, 403–4
Fulbert, canon of Notre Dame 172
functional sentence perspective 385
Gapping 257–9
Garrod, S. 214
Gazdar, G. 255(n), 331(n), 351, 362
Geach, P. 294–6, 300–4, 324–5, 376
Geis, M. 277
gender, grammatical 404(n)
Gentzen, G. 53(n)
German 206(n), 228, 274(n), 386, 387(n), 392–3, 400, 403–4
Geurts, B. 286, 373–5, 377
Gilson, É. 173(n)
Ginsburg, H. 68(n)
golden ratio 171
grammar 16–17, 19
  categorial 323
Greek 25, 71(n), 118, 170–1, 172(n), 241(n), 394(n), 401
Green Pedersen, N. 315–16
Greenberg, J. 282(n)
Grice, H. 71(n), 99–100, 341
Groenendijk, J. 255, 286, 320–1
Gussenhoven, C. 380, 389, 391
Hall Partee, B. 396
Hamblin, C. 198(n), 217
Hamilton, W. 102(n), 103–7, 118, 166, 169, 192
Harris, M. 274(n)
Heim, I. 321, 348–9(n)
Héloïse 172
Hoeksema, J. 116
hole 210–11, 343, 345, 347, 376
homunculus 24
Horn, L. 85(n), 99, 102(n), 114–5, 117–9, 158–61, 168, 170, 173, 211, 260, 277, 315(n), 335(n), 360–2, 398(n), 401, 403–4, 406
Horned Man, paradox of the 312–13
Husserl, E. 378
hypokeímenon 379, 382–3, 385
identification 76, 85
identity 49, 73, 76–7, 82
implication 199, 270–282, 350–1
  material 27, 270, 275, 279
  paradoxes of 275, 277–82
implicature 99–100, 255(n)
incrementation procedure 58, 109 et passim
independence
  logical 50, 72–3, 86, 88, 127, 132
  set-theoretic 50, 78, 83–4
Indo-European 394(n)