JOHN CORCORAN AND WILLIAM FRANK. Surprises in logic. Bulletin of Symbolic Logic. 19 (2013) 253. Some people, not just beginning students, are at first surprised to learn that the proposition “If zero is odd, then zero is not odd” is not self-contradictory. Some people are surprised to find out that there are logically equivalent false universal propositions that have no counterexamples in common, i.e., that no counterexample for one is a counterexample for the other. Some people would be surprised to find out that in normal first-order logic existential import is quite common: some universals “Everything that is S is P”—actually quite a few—imply their corresponding existentials “Something that is S is P”. Anyway, perhaps contrary to its title, the paper abstracted here is not a cataloging of surprises in logic but rather a discussion of the mistakes that did, might have, or might still lead people to think that there are no surprises in logic. The paper cataloging the surprises in logic is on our “to-do” list.
For each n > 0, two alternative axiomatizations of the theory of strings over n alphabetic characters are presented. One class of axiomatizations derives from Tarski's system of the Wahrheitsbegriff and uses the n characters and concatenation as primitives. The other class involves using n character-prefixing operators as primitives and derives from Hermes'…
Twelfth OOPSLA Workshop on Behavioral Semantics, 2002
The concept of role plays an important part in designing systems that work effectively with people, because people understand their own work in terms of roles. What is most distinct about the concept of role, and is missed in most formal approaches, is that roles do not stand by themselves: “role in” is the meaningful construct.
Defining a role as a special kind of type misses this key feature. The temptation to treat a role as a type comes from the fact that a role is specified with invariant statements concerning the composite in which the role occurs, and in the simplest cases these invariants each apply only to the model element playing a single role. But these invariants may also apply in a complex way across all the roles in the composite. And they apply to the elements playing the role only in the context of that particular composite. In addition, if a role is identified as a special kind of type, a role cannot be an abstraction of a single instance; but as we will see, roles sometimes are abstractions of single elements, not types of elements. Most of this paper unpacks this paragraph.
We survey some of the uses of the role concept in a variety of ordinary human endeavors and some of the ways in which this concept has been represented in various formal modeling theories, and recommend a generic concept of role as a place-holder in the specification of any kind of composite, with associated invariants that apply to whatever may or does hold that place. This concept makes role yet another way people abstract: the way in which people abstract the contribution of each part, in concert, to some whole. In application to UML, these composites might be composite objects, composite actions, or associations, which are always composite model elements.
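The place-holder view can be sketched in code. The following Python fragment is our own illustration, not the paper's formalism; all names (Composite, assign, the employment example) are hypothetical. The point it makes is the one above: the invariant attaches to the role-in-composite, not to the element, so the same element can hold different places in different composites without its type changing.

```python
class Composite:
    """A composite whose specification is a set of named places (roles),
    each with an invariant that constrains whatever occupies the place."""

    def __init__(self, role_invariants):
        # role name -> predicate any occupant must satisfy (hypothetical design)
        self.role_invariants = role_invariants
        self.occupants = {}

    def assign(self, role, element):
        """Put `element` in the place named `role`, checking the invariant."""
        if role not in self.role_invariants:
            raise KeyError(f"no such role in this composite: {role!r}")
        if not self.role_invariants[role](element):
            raise ValueError(f"{element!r} violates the invariant of role {role!r}")
        self.occupants[role] = element

# Example composite: an employment relationship with two roles.
employment = Composite({
    "employer": lambda e: isinstance(e, str) and e.endswith("Inc."),
    "employee": lambda e: isinstance(e, str),
})
employment.assign("employer", "Acme Inc.")
employment.assign("employee", "Pat")
```

Note that "Pat" could be assigned the employer role in some other composite with a different invariant; nothing about the element itself records the role it plays, which is what distinguishes this sketch from role-as-type.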
Proceedings of the Ninth OOPSLA Workshop on Behavioral Semantics, 2000
This paper describes the foundations for an approach called Semantic Solutions. This approach provides tools for facilitating the federation of business communities, via the integration of the automated systems used by the communities. These tools are based on a new paradigm of communications between automated components – effective communication is how people work together. For automated systems to support collaboration between groups of people, they must communicate in the same rich variety of ways as people. This kind of communication reflects the goals of the people who use the systems. Accordingly, a facility for supporting effective communication must rely, as people do, on knowledge of its community’s structure and purposes.
This paper presents an approach for formalizing the RM-ODP (Reference Model for Open Distributed Processing), an ISO and ITU standard. The goal of this formalization is to clarify the RM-ODP modeling framework to make it more accessible to modelers such as system architects, designers, and implementers, while opening the way for the formal verification of RM-ODP models…
The communications scenario for support of the Columbus Orbital Facility (COF) operations led to the definition of an Interconnection Ground Subnetwork (IGS) to serve as the baseline communications infrastructure for flight operations for all manned spaceflight elements. This IGS communications support concept has been re-examined in the light of new requirements associated with the Automated Transfer Vehicle (ATV)…
Since 1974, the terminology in the field has changed: the well-chosen expression “definitionally equivalent” replaces the awkward and misleading term “synonymous”. After all, we are talking about uninterpreted formal theories. See John Corcoran, 1980. A note concerning definitional equivalence, History and Philosophy of Logic, vol. 1, pp. 231–234, available at ResearchGate.
… In this paper we investigate the logic described by G. Spencer Brown in his book Laws of Form. … a new logic, and a superficial perusal of his book, with its unusual notation, certainly suggests that something different … But (SI) has nothing in particular to do with Russell's paradox. …
► JOHN CORCORAN AND WILLIAM FRANK, Surprises in logic. Philosophy, University at Buffalo, Buffalo, NY 14260-4150, USA E-mail: corcoran@buffalo.edu There are many surprises in logic. Peirce gave us a few. Russell gave Frege one. Löwenheim gave Zermelo one. Gödel gave some to Hilbert. Tarski gave us several. When we get a surprise, we are often delighted, puzzled, or skeptical. Sometimes we feel or say “Nice!”, “Wow, I didn’t know that!”, “Is that so?”, or the like. Every surprise belongs to someone. There are no disembodied surprises. Saying there are surprises in logic means that logicians experience surprises doing logic—not that among logical propositions some are intrinsically or objectively “surprising”. The expression “That isn’t surprising” often denigrates logical results. Logicians often aim for surprises. In fact, [1] argues that logic’s potential for surprises helps motivate its study and, indeed, helps justify logic’s existence as a discipline. Besides big surprises that change logicians’ perspectives, the logician’s daily life brings little surprises, e.g. that Gödel’s induction axiom alone implies Robinson’s axiom. Sometimes wild guesses succeed. Sometimes promising ideas fail. Perhaps one of the least surprising things about logic is that it is full of surprises. Against the above is Wittgenstein’s surprising conclusion ([2], 6.1251): “Hence there can never be surprises in logic”. This paper unearths basic mistakes in [2] that might help to explain how Wittgenstein arrived at his false conclusion and why he never caught it. The mistakes include: (a) unawareness that surprise is personal, (b) confusing logicians having certainty with propositions having logical necessity, (c) confusing definitions with criteria, and (d) thinking that facts demonstrate truths. People demonstrate truths using their deductive know-how and their knowledge of facts: facts per se are epistemically inert. [1] JOHN CORCORAN, Hidden consequence and hidden independence.
This Bulletin, vol. 16 (2010), p. 443. [2] LUDWIG WITTGENSTEIN, Tractatus Logico-Philosophicus, Kegan Paul, London, 1921.
String theory: the foundation of proof theory, grammar, metamathematics, and word processing.
John Corcoran, William Frank & Michael Maloney (1974). String Theory. Journal of Symbolic Logic 39 (4): 625–637.
For each positive n, two alternative axiomatizations of the theory of strings over n alphabetic characters are presented. One class of axiomatizations derives from Alfred Tarski's truth-definition monograph and uses the n characters and the concatenation operation as primitives. The other class derives from Hans Hermes' work on semiotics and involves using n character-prefixing operators as primitives. All underlying logics are second order. The two theories of strings over one character are essentially forms of the theory of positive integers. It is shown that, for each n, the two theories are definitionally equivalent—loosely speaking, they differ only in notation, which should not be surprising. However, it is further shown that each member of one class is definitionally equivalent with each member of the other class; thus all of the theories are definitionally equivalent with each other and with Peano arithmetic. For example, a theory of strings of Arabic digits {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} is definitionally equivalent to a theory of strings of one character—as in primitive stroke notation: 1, 11, 111, etc. Categoricity of Peano arithmetic then implies categoricity of each of the above theories.
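The digits-versus-strokes example can be made concrete with bijective base-n numeration, which puts the nonempty strings over an n-character alphabet in one-to-one correspondence with the positive integers. The Python sketch below is our own illustration of why the structures match; the paper's result is the stronger syntactic claim of definitional equivalence between the theories, which this semantic encoding does not by itself establish.

```python
def encode(s, alphabet):
    """Map a nonempty string over `alphabet` to a positive integer
    via bijective base-n numeration (n = len(alphabet))."""
    n = len(alphabet)
    value = 0
    for ch in s:
        value = value * n + alphabet.index(ch) + 1  # characters count from 1, not 0
    return value

def decode(value, alphabet):
    """Inverse of encode: recover the unique string denoting `value`."""
    n = len(alphabet)
    chars = []
    while value > 0:
        value, r = divmod(value - 1, n)
        chars.append(alphabet[r])
    return "".join(reversed(chars))

# Over a one-character alphabet the correspondence is primitive stroke
# notation itself: '1' -> 1, '11' -> 2, '111' -> 3, and so on.
```

For instance, `encode("111", "1")` is 3, and `decode(encode(s, "ab"), "ab")` returns `s` for every nonempty string `s` over `{a, b}`.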
String theory or, more fully, character-string theory—also called concatenation theory or theoretical syntax—studies character strings over finite alphabets of characters, signs, symbols, or marks. The most basic operation on strings is concatenation, connecting two strings to form a longer string whose length is the sum of the lengths of the operands: abcde is the concatenation of ab with cde, in symbols abcde = ab ^ cde. String theory is foundational for formal linguistics, computer science, logic, and metamathematics, especially proof theory. A generative grammar can be seen as a recursive definition in string theory. Mathematically inclined logicians noticed that string concatenation resembles number addition: both are homogeneous, associative, totally defined, two-place operations. This led to the discovery that strings-under-concatenation give rise to mathematical theories analogous to theories of numbers-under-addition. Logicians then realized that the axiomatic tradition traced to Euclid’s predecessors required that such theories be treated axiomatically. In 1956 Alonzo Church wrote:
Like any branch of mathematics, theoretical syntax may, and ultimately must, be studied by the axiomatic method.
Church was evidently unaware that string theory already had two axiomatizations from the 1930s: one by Hans Hermes and one by Alfred Tarski. Coincidentally, the first English presentation of Tarski’s 1933 axiomatic foundations of string theory appeared in 1956—the same year that Church called for such axiomatizations.
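The two primitive choices can be illustrated in a few lines of Python (our sketch only; the actual theories are second-order axiomatic systems, not programs). With character-prefixing operators as primitives, concatenation becomes a derived, recursively defined operation on nonempty strings:

```python
def prefix(ch, s):
    """Character-prefixing operator, the Hermes-style primitive:
    attach the single character `ch` to the front of the nonempty string `s`."""
    return ch + s

def concat(s, t):
    """Concatenation, the Tarski-style primitive, here defined recursively
    from prefixing:
      concat(c, t)            = prefix(c, t)            for a single character c
      concat(prefix(c, s), t) = prefix(c, concat(s, t)) otherwise."""
    if len(s) == 1:
        return prefix(s, t)
    return prefix(s[0], concat(s[1:], t))
```

So `concat("ab", "cde")` yields `"abcde"`, matching abcde = ab ^ cde above, and the operation is associative and length-additive, like addition on the positive integers.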
ALONZO CHURCH, Introduction to Mathematical Logic, Princeton University Press, Princeton, 1956. ALFRED TARSKI, The concept of truth in formalized languages, Logic, Semantics, Metamathematics, Hackett, Indianapolis, 1983.
CORCORAN AND FRANK ON TARSKI’S QUOTED LETTERS
Quoted-letters are three-character strings: letters flanked by quotation-marks. Quoted-letters have multiple uses. We survey Tarski’s quoted-letter uses. Four separate uses are noted in this abstract. We do not discuss quoted-quoted-letters: five-character strings that consist of two consecutive quotation marks followed by a letter followed by two consecutive quotation marks. Tarski uses quoted-letters frequently and he often mentions them, sometimes by using quoted-quoted-letters. But Tarski uses quoted-quoted-letters infrequently and he rarely if ever mentions them, never by using quoted-quoted-quoted-letters, which are seven-character strings.
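The character counts above can be checked mechanically. In this small Python sketch, an ASCII apostrophe stands in for a quotation mark (an assumption of the illustration; Tarski's texts use typographic quotes):

```python
letter = "p"
quoted = "'" + letter + "'"           # a quoted-letter
quoted_quoted = "''" + letter + "''"  # a quoted-quoted-letter
quoted3 = "'''" + letter + "'''"      # a quoted-quoted-quoted-letter

# Lengths: 3, 5, and 7 characters respectively, as stated above.
assert (len(quoted), len(quoted_quoted), len(quoted3)) == (3, 5, 7)
```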
Tarski’s quoted-letters. Bulletin of Symbolic Logic. 19 (2013) 508–9. (Coauthor: William Frank)
► JOHN CORCORAN AND WILLIAM FRANK, Tarski’s quoted-letters. Philosophy, University at Buffalo, Buffalo, NY 14260-4150, USA E-mail: corcoran@buffalo.edu Quoted-letters are three-character strings: letters flanked by quotation-marks. Quoted-letters have multiple uses. We survey Tarski’s quoted-letter uses. Tarski insisted that no string, for example the letter p (sic), serves as an autonym—denoting itself [2, p. 344; 3, p. 104]. But quoted-letters can serve as letter-names. Tarski wrote [1, pp.159-160]:
The name “ ‘p’ ” denotes one of the letters of the alphabet.
Tarski regarded letter-names as unitaries—“syntactically simple expressions” having no independently meaningful parts. Presumably, quoted-letters are arbitrarily assigned to denote their letters when they could have been assigned something else or used non-denotatively [1, pp. 159-160; 3, p. 104]. Tarski also used quoted-letters as abbreviated sentence-names. He held that quoted-letters containing letters abbreviating sentences denote—not letters—but sentences [2, p. 347; 3, p. 108].
‘s’ is the sentence printed in this paper […]. “s” is undoubtedly a sentence in English.
In the truth-definition paper [1, pp. 152–278], “functions” contain free variables. The quotation-function ‘ ‘x’ ’ converts letters to expression-names: ‘ ‘a’ ’, ‘ ‘b’ ’, etc. Tarski noted that the quoted-letter ‘ ‘x’ ’ is “ambiguous” (sic): sometimes it is a function—not a name; sometimes it is a name of the 24th letter—not a function [1, p. 162]. Another usage is in the truth schema [3, pp. 105, 114].
“p” is true if and only if p
Here the quoted-letter is neither a function nor a name but merely contains its letter as a place-holder (schematic letter) co-ordinate with a place-holder not part of a quoted name.
[1] ALFRED TARSKI, Logic, Semantics, Metamathematics, Hackett, 1956/1983. [2] ALFRED TARSKI, Semantic conception of truth, Philosophy and Phenomenological Research, vol. 4 (1944), pp. 341–375. [3] ALFRED TARSKI, Truth and proof, Scientific American, June 1969. Hughes reprint.
String theory—or concatenation theory—studies abstract strings—or concatenations—of characters exclusively and intrinsically. The qualification ‘exclusively’ separates string theory from many-sorted disciplines such as semantic arithmetic, which studies numerals (strings of digits) and the numbers they denote. The qualification ‘intrinsically’ separates it from empirical and technological subjects—e.g. cognitive psychology, computability theory, and information science—that study manipulation of concrete string tokens by persons or machines. Empirical disciplines don’t study strings intrinsically—apart from tokens; they study strings extrinsically—through tokens. We distinguish concatenation—the abstract two-place operation coupling abstract strings [string types]—from juxtaposition—the humanly performed manipulation literally conjoining concrete inscriptions [string tokens]. Abstract, non-empirical string theory is distinguished not only from semantics and pragmatics but also from empirical juxtaposition theory or syntactics, which studies tokens manipulated by humans. Pragmatics encompasses semantics and syntactics. Building on [1], we describe string theory’s subject matter, its basic concepts, and its basic laws. We also discuss its axiomatizations. The expression ‘string theory’ occurs above as a necessarily singular proper name of a study—a distinctively human institution having a historical development; ‘science’ and ‘discipline’ are synonyms for ‘study’ in this sense. Historians have yet to decide when string theory emerged as a recognizable science with laws and open problems. But ‘string theory’ also occurs as a pluralizable common noun denoting axiomatized and non-axiomatized interpreted deductive theories, each having a formal language and an intended interpretation.
Two infinite families of axiomatized string theories are studied in Corcoran-Frank-Maloney 1974.
The word ‘assumption’ is appropriate in contrasting contexts with contrasting meanings, often in one paper, e.g. [2]. We distinguish three context classes. In the first—possibly the earliest—people making an assumption believe it and thus necessarily understand it; axioms, postulates, and definitions are sometimes called assumptions [1]. No such context occurs in [2]. In the second—also old—people making an assumption necessarily understand it but often don’t believe it, or may even disbelieve it, e.g. assumptions “for purposes of reasoning”: reductio assumptions in indirect proofs and antecedents in conditional proofs. The first such context in [2] is §1.1, pages 371f: a conditional proof of an implication between two metalanguage propositions. Such assumptions are called illative, indicating their role in illation, reasoning productive of knowledge. In the third—beginning around 1900—‘assumption’ is used in connection with uninterpreted syntactic strings [3]: there is nothing to understand, much less believe. Dozens of such contexts occur in [2]—some in proofs of metatheorems. The first such context in [2] is §1.2, pages 372ff, where ‘assumption’ is used in the relation-verb phrase ‘is an assumption in’. Assumptions in this sense are called typographical, indicating their connection with string-manipulation. Assumptions from the first context—understood and believed—are excluded from this study. We investigate illative assumptions—understood but possibly disbelieved—and typographical assumptions—uninterpreted character-strings. We question how illative assumptions are modeled, represented, or expressed by typographical assumptions. [1] JOHN CORCORAN, Sentence, proposition, judgment, statement, and fact, Many Sides of Logic, (Walter Carnielli et al., editors), College Publications, 2009. [2] JOHN CORCORAN AND GEORGE WEAVER,
Logical Consequence in Modal Logic: Natural Deduction in S5, NDJFL, vol. 10 (1969), pp. 370–84. [3] JOHN CORCORAN, WILLIAM FRANK, AND MICHAEL MALONEY, String Theory, Journal of Symbolic Logic, vol. 39 (1974), pp. 625–637. AFTERWORD 091816 For the moment, let us stipulate use the adjective ‘material’ to distinguish the first sense from the illative and typographical senses. Every material assumption that a given person has is believed to be true by that person, i.e. is one of that person’s beliefs—either one of the person’s opinions or something the person actually knows to be true. In typical cases a person’s material assumptions remain being their material assumptions through extended intervals of time. My material assumptions concerning arithmetic have not changed, or at least have not changed much, in many years. However, my illative and typographical assumptions are temporary. As soon as the reasoning or manipulation for which a given illative and typographical assumption was made is finished, the thing that was an assumption longer is an assumption.
This applied-logic lecture builds on CORCORAN’s Inseparability of Logic and Ethics, which considers the hypothesis that character traits fostered by logic serve clarity and understanding in ethics, confirming hopeful views of Alfred Tarski.
Hypotheses, in one strict usage, are propositions not known to be true and not known to be false or—more loosely—propositions so considered for discussion purposes.
Logic studies hypotheses by determining their implications (propositions they imply) and their implicants (propositions that imply them). Logic also studies hypotheses by seeing how variations affect implications and implicants. People versed in logical methods are more inclined to enjoy working with hypotheses and less inclined to dismiss them or to accept them without sufficient evidence.
Cosmic Justice Hypotheses (CJHs), such as “in the fullness of time every act will be rewarded or punished in exact proportion to its goodness or badness”, have been entertained by intelligent thinkers.
Absolute CJHs (ACJHs) imply that it is pointless to make sacrifices, make pilgrimages, or ask divine forgiveness: once acts are done, doers must ready themselves for the inevitable payback, since the cosmos works inexorably toward justice.
https://www.academia.edu/9413409/INSEPARABILITY_OF_LOGIC_AND_ETHICS
https://www.academia.edu/11292701/Cosmic_Justice_Hypotheses
Papers by William Frank
Defining a role as a special kind of type misses this key feature. The temptation to treat a role as a type comes from the fact that a role is specified with invariant statements concerning the composite in which the role occurs, and in the simplest cases these invariants each apply only to the model element playing a single role. But these invariants may also apply in a complex way across all the roles in the composite. And they apply to the elements playing the role only in the context of that particular composite. In addition, if a role is identified as a special kind of type, a role cannot be an abstraction of a single instance; but, as we will see, roles sometimes are abstractions of single elements, not types of elements. Most of this paper unpacks this paragraph.
We survey some of the uses of the role concept in a variety of ordinary human endeavors and some of the ways in which this concept has been represented in various formal modeling theories, and we recommend a generic concept of role as a placeholder in the specification of any kind of composite, with associated invariants that apply to whatever may or does hold that place. This makes role yet another way people abstract: the way in which people abstract the contribution of each part, in concert, to some whole. In application to UML, these composites might be composite objects, composite actions, or associations, which are always composite model elements.
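The placeholder-with-invariants recommendation can be illustrated with a small sketch. The names here (`Role`, `fill`, the payment example) are hypothetical, ours rather than the paper's or UML's; the point is only that the constraint attaches to the place in the composite, not to the type of the element filling it.

```python
# Hypothetical sketch of "role in": a role is a named place in a composite,
# carrying an invariant that constrains whatever element fills the place.

class Role:
    def __init__(self, name, invariant):
        self.name = name            # the place in the composite
        self.invariant = invariant  # predicate on the filler, in this context
        self.filler = None

    def fill(self, element):
        if not self.invariant(element):
            raise ValueError(f"{element!r} violates the {self.name!r} invariant")
        self.filler = element

# The same element may fill different roles in different composites; the
# constraint belongs to the role-in-composite, not to the element's type.
payment = {
    "payer": Role("payer", lambda e: e.get("balance", 0) > 0),
    "payee": Role("payee", lambda e: "account" in e),
}
ada = {"name": "Ada", "balance": 10, "account": "X-1"}
payment["payer"].fill(ada)
payment["payee"].fill(ada)
```

Note that `ada` is a single element, not a type, yet it fills two roles at once: roles abstract places, not kinds.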
Some people, not just beginning students, are at first surprised to learn that the proposition “If zero is odd, then zero is not odd” is not self-contradictory. Some people are surprised to find out that there are logically equivalent false universal propositions that have no counterexamples in common, i.e., that no counterexample for one is a counterexample for the other. Some people would be surprised to find out that in normal first-order logic existential import is quite common: some universals “Everything that is S is P”—actually quite a few—imply their corresponding existentials “Something that is S is P”.
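The first and third of these can be checked mechanically over finite domains. The following Python sketch is ours, not the paper's; it encodes the conditional materially, which is the reading under which the first sentence is unsurprisingly true.

```python
def implies(p: bool, q: bool) -> bool:
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

zero_is_odd = (0 % 2 == 1)  # False

# A conditional with a false antecedent is true under the material reading,
# so "If zero is odd, then zero is not odd" is not self-contradictory.
print(implies(zero_is_odd, not zero_is_odd))  # True

def universal(domain, S, P):
    """'Everything that is S is P' over a finite domain."""
    return all(implies(S(x), P(x)) for x in domain)

def existential(domain, S, P):
    """'Something that is S is P' over a finite domain."""
    return any(S(x) and P(x) for x in domain)

# Existential import: take S and P to be the trivial property "x = x".
# The universal is logically true, and over the nonempty domains of normal
# first-order logic the corresponding existential follows from it.
dom = range(3)
S = P = (lambda x: x == x)
print(universal(dom, S, P) and existential(dom, S, P))  # True
```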
Anyway, perhaps contrary to its title, the paper abstracted here is not a catalog of surprises in logic but rather a discussion of the mistakes that did, might have, or might still lead people to think that there are no surprises in logic. The paper cataloging the surprises in logic is on our “to-do” list.
► JOHN CORCORAN AND WILLIAM FRANK, Surprises in logic.
Philosophy, University at Buffalo, Buffalo, NY 14260-4150, USA
E-mail: corcoran@buffalo.edu
There are many surprises in logic. Peirce gave us a few. Russell gave Frege one. Löwenheim gave Zermelo one. Gödel gave some to Hilbert. Tarski gave us several.
When we get a surprise, we are often delighted, puzzled, or skeptical. Sometimes we feel or say “Nice!”, “Wow, I didn’t know that!”, “Is that so?”, or the like.
Every surprise belongs to someone. There are no disembodied surprises. Saying there are surprises in logic means that logicians experience surprises doing logic—not that among logical propositions some are intrinsically or objectively “surprising”.
The expression “That isn’t surprising” often denigrates logical results.
Logicians often aim for surprises. In fact, [1] argues that logic’s potential for surprises helps motivate its study and, indeed, helps justify logic’s existence as a discipline.
Besides big surprises that change logicians’ perspectives, the logician’s daily life brings little surprises, e.g. that Gödel’s induction axiom alone implies Robinson’s axiom. Sometimes wild guesses succeed. Sometimes promising ideas fail. Perhaps one of the least surprising things about logic is that it is full of surprises.
Against the above is Wittgenstein’s surprising conclusion ([2], 6.1251): “Hence there can never be surprises in logic”.
This paper unearths basic mistakes in [2] that might help to explain how Wittgenstein arrived at his false conclusion and why he never caught it. The mistakes include: (a) unawareness that surprise is personal, (b) confusing logicians having certainty with propositions having logical necessity, (c) confusing definitions with criteria, and (d) thinking that facts demonstrate truths. People demonstrate truths using their deductive know-how and their knowledge of facts: facts per se are epistemically inert.
[1] JOHN CORCORAN, Hidden consequence and hidden independence, This Bulletin, vol. 16 (2010), p. 443.
[2] LUDWIG WITTGENSTEIN, Tractatus Logico-Philosophicus, Kegan Paul, London, 1921.
http://www.tandfonline.com/doi/full/10.1080/01445340.2014.952947
John Corcoran, William Frank & Michael Maloney (1974). String Theory. Journal of Symbolic Logic 39 (4):625-637.
For each positive n, two alternative axiomatizations of the theory of strings over n alphabetic characters are presented. One class of axiomatizations derives from Alfred Tarski's truth-definition monograph and uses the n characters and the concatenation operation as primitives. The other class derives from Hans Hermes' work on semiotics and involves using n character-prefixing operators as primitives. All underlying logics are second order. The two theories of strings over one character are essentially forms of the theory of positive integers.
It is shown that, for each n, the two theories are definitionally equivalent—loosely speaking, they differ only in notation—which should not be surprising. However, it is further shown that each member of one class is definitionally equivalent with each member of the other class; thus all of the theories are definitionally equivalent with each other and with Peano arithmetic. For example, a theory of strings over the ten Arabic digits {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} is definitionally equivalent to a theory of strings over one character—as in primitive stroke notation: 1, 11, 111, etc. Categoricity of Peano arithmetic then implies categoricity of each of the above theories.
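Definitional equivalence itself is a statement about axiomatized theories, but the flavor of the digit-string/stroke-string case can be glimpsed in an explicit translation. The Python sketch below is illustrative only: a round-trip mapping between decimal numeral strings and one-character stroke strings, not a proof of anything.

```python
# Illustrative translation between decimal numeral strings and one-character
# "stroke" strings; a glimpse of, not a proof of, definitional equivalence.

def decimal_to_strokes(numeral: str) -> str:
    """'3' -> '111': the stroke numeral for the same positive integer."""
    return "1" * int(numeral)

def strokes_to_decimal(strokes: str) -> str:
    """'111' -> '3': recover the decimal numeral from a stroke string."""
    return str(len(strokes))

print(decimal_to_strokes("3"))                               # 111
print(strokes_to_decimal(decimal_to_strokes("42")) == "42")  # True
```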
String theory or, more fully, character-string theory—also called concatenation theory or theoretical syntax—studies character strings over finite alphabets of characters, signs, symbols, or marks. The most basic operation on strings is concatenation, which connects two strings to form a longer string whose length is the sum of the lengths of the operands: abcde is the concatenation of ab with cde, in symbols abcde = ab ^ cde. String theory is foundational for formal linguistics, computer science, logic, and metamathematics, especially proof theory. A generative grammar can be seen as a recursive definition in string theory.
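The length law just stated can be checked directly; in this sketch Python's `+` on strings stands in for `^`.

```python
# Checking the running example: concatenation joins strings and lengths add.
ab, cde = "ab", "cde"
abcde = ab + cde                         # '+' plays the role of '^'
print(abcde)                             # abcde
print(len(abcde) == len(ab) + len(cde))  # True

# Like addition, concatenation is homogeneous, associative, and totally
# defined on strings -- though, unlike addition, it is not commutative.
x, y, z = "a", "bc", "d"
print((x + y) + z == x + (y + z))        # True
print(x + y == y + x)                    # False
```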
Mathematically inclined logicians noticed that string concatenation resembles number addition: both are homogeneous, associative, totally defined, two-place operations. This led to the discovery that strings-under-concatenation give rise to mathematical theories analogous to theories of numbers-under-addition. Logicians then realized that the axiomatic tradition traced to Euclid’s predecessors required that such theories be treated axiomatically. In 1956 Alonzo Church wrote:
Like any branch of mathematics, theoretical syntax may, and ultimately must, be studied by the axiomatic method.
Church was evidently unaware that string theory already had two axiomatizations from the 1930s: one by Hans Hermes and one by Alfred Tarski.
Coincidentally, the first English presentation of Tarski’s 1933 axiomatic foundations of string theory appeared in 1956—the same year that Church called for such axiomatizations.
ALONZO CHURCH, Introduction to Mathematical Logic, Princeton UP, Princeton, 1956.
ALFRED TARSKI, The concept of truth in formalized languages, Logic, Semantics, Metamathematics, Hackett, Indianapolis, 1983.
Quoted-letters are three-character strings: letters flanked by quotation-marks. Quoted-letters have multiple uses. We survey Tarski’s quoted-letter uses. Four separate uses are noted in this abstract. We do not discuss quoted-quoted-letters: five-character strings that consist of two consecutive quotation marks followed by a letter followed by two consecutive quotation marks.
Tarski uses quoted-letters frequently and he often mentions them, sometimes by using quoted-quoted-letters. But Tarski uses quoted-quoted-letters infrequently and he rarely if ever mentions them, never by using quoted-quoted-quoted-letters, which are seven-character strings.
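The character counts above can be verified mechanically. In this sketch the plain apostrophe stands in for Tarski's quotation marks, an assumption of the encoding rather than a claim about his typography.

```python
# Apostrophes stand in for quotation marks; the counts in the text check out.

def quote(s: str) -> str:
    """Flank a string with quotation-marks."""
    return "'" + s + "'"

quoted = quote("p")               # a quoted-letter
quoted_quoted = quote(quoted)     # a quoted-quoted-letter
print(len(quoted))                # 3
print(len(quoted_quoted))         # 5
print(len(quote(quoted_quoted)))  # 7: a quoted-quoted-quoted-letter
```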
Tarski’s quoted-letters. Bulletin of Symbolic Logic. 19 (2013) 508–9. (Coauthor: William Frank)
► JOHN CORCORAN AND WILLIAM FRANK, Tarski’s quoted-letters.
Philosophy, University at Buffalo, Buffalo, NY 14260-4150, USA
E-mail: corcoran@buffalo.edu
Quoted-letters are three-character strings: letters flanked by quotation-marks. Quoted-letters have multiple uses. We survey Tarski’s quoted-letter uses.
Tarski insisted that no string, for example the letter p (sic), serves as an autonym—denoting itself [2, p. 344; 3, p. 104]. But quoted-letters can serve as letter-names. Tarski wrote [1, pp. 159–160]:
The name “ ‘p’ ” denotes one of the letters of the alphabet.
Tarski regarded letter-names as unitaries—“syntactically simple expressions” having no independently meaningful parts. Presumably, quoted-letters are arbitrarily assigned to denote their letters when they could have been assigned something else or used non-denotatively [1, pp. 159-160; 3, p. 104].
Tarski also used quoted-letters as abbreviated sentence-names. He held that quoted-letters containing letters abbreviating sentences denote—not letters—but sentences [2, p. 347; 3, p. 108].
‘s’ is the sentence printed in this paper […].
“s” is undoubtedly a sentence in English.
In the truth-definition paper [1, pp. 152–278], “functions” contain free variables. The quotation-function ‘ ‘x’ ’ converts letters to expression-names: ‘ ‘a’ ’, ‘ ‘b’ ’, etc. Tarski noted that the quoted-letter ‘ ‘x’ ’ is “ambiguous” (sic): sometimes it is a function—not a name; sometimes it is a name of the 24th letter—not a function [1, p. 162].
Another usage is in the truth schema [3, pp.105, 114].
“p” is true if and only if p
Here the quoted-letter is neither a function nor a name but merely contains its letter as a place-holder (schematic letter) co-ordinate with a place-holder that is not part of a quoted name.
[1] ALFRED TARSKI, Logic, semantics, metamathematics, Hackett, 1956/1983.
[2] ALFRED TARSKI, Semantic conception of truth. Philosophy and Phenomenological Research, vol. 4 (1944), pp. 341–375.
[3] ALFRED TARSKI, Truth and proof, Scientific American, June 1969. Hughes reprint.