Peter L. Elkin (ed.), Terminology, Ontology and their Implementations, Cham, Switzerland: Springer Nature, 2023
We begin at the beginning, with an outline of Aristotle’s views on ontology and with a discussion of the influence of these views on Linnaeus. We move from there to consider the data standardization initiatives launched in the nineteenth century and then turn to investigate how the idea of computational ontologies developed in the AI and knowledge representation communities in the closing decades of the twentieth century. We show how aspects of this idea, particularly those relating to the use of the term “concept” in ontology development, influenced SNOMED CT and other medical terminologies. Against this background, we then show how the Foundational Model of Anatomy, the Gene Ontology, Basic Formal Ontology, and other OBO Foundry ontologies came into existence and discuss their role in the development of contemporary biomedical informatics.
Since 1950, when Alan Turing proposed what has since come to be called the Turing test, the ability of a machine to pass this test has established itself as the primary hallmark of general AI. To pass the test, a machine would have to be able to engage in dialogue in such a way that a human interrogator could not distinguish its behaviour from that of a human being. AI researchers have attempted to build machines that could meet this requirement, but they have so far failed. To pass the test, a machine would have to meet two conditions: (i) react appropriately to the variance in human dialogue and (ii) display a human-like personality and intentions. We argue, first, that it is for mathematical reasons impossible to program a machine which can master the enormously complex and constantly evolving pattern of variance which human dialogues contain. And second, that we do not know how to make machines that possess personality and intentions of the sort we find in humans. Since a Turing machine cannot master human dialogue behaviour, we conclude that a Turing machine also cannot possess what is called “general” Artificial Intelligence. We do, however, acknowledge the potential of Turing machines to master dialogue behaviour in highly restricted contexts, where what is called “narrow” AI can still be of considerable utility.
Essays on Wittgenstein and Austrian philosophy, 2004
Building on the writings of Wittgenstein on rule-following and deviance, Kristof Nyiri advanced a theory of creativity as consisting in a fusion of conflicting rules or disciplines. Only such fusion can produce something that is both intrinsically new and yet capable of being apprehended by and passed on to a wider community. Creativity, on this view, involves not the breaking of rules, or the deliberate cultivation of deviant social habits, but rather the acceptance of enriched systems of rules, the adherence to which presupposes ...
When economist Hernando de Soto published The Mystery of Capital in 2000, he made (on page 221) a single seemingly insubstantial reference to philosopher John Searle's The Construction of Social Reality, which had appeared in 1995. Nobody need have paid attention to this fleeting citation, particularly because the other philosophers referenced in support of de Soto's claims (Popper, Dennett, Foucault, Derrida) constitute quite a philosophical hodgepodge. It required a catalyst at the time who worked on ...
Zeitschrift für philosophische Forschung, Oct 1, 1998
The history of philosophy offers various starting points for addressing the question of the nature of social objects. On the one hand, there are holistic and realist ontologies of the social. For Heidegger, for example, human beings are not isolated individuals but are entangled, through others, through tools, and through traditions, in a shared world (Mitwelt) that exists only as a whole. A holistic conception of the social can also be found in Wittgenstein, who treats social institutions as parts of the natural history of the ...
I shall presuppose as undefended background to what follows a position of scientific realism, a doctrine to the effect (i) that the world exists and (ii) that through the working out of ever more sophisticated theories our scientific picture of reality will approximate ever more closely to the world as it really is. Against this background consider, now, the following question: 1. Do the empirical theories with the help of which we seek to approximate a good or true picture of reality rest on any non-empirical presuppositions? One can answer this ...
Changes in an upper level ontology have obvious consequences for the domain ontologies that use it at lower levels. It is therefore crucial to document the changes made between successive versions of ontologies of this kind. We describe and apply a method for tracking, explaining and measuring changes between successive versions of upper level ontologies such as the Basic Formal Ontology (BFO). The proposed change-tracking method extends earlier work on Realism-Based Ontology Versioning (RBOV) and Evolutionary Terminology Auditing (ETA). We describe here the application of this evaluation method to changes between BFO 1.0, BFO 1.1, and BFO 2.0. We discuss the issues raised by this application and describe the extensions which we added to the original evaluation schema in order to account for changes in this type of ontology. The results of our study show that BFO has undergone eight types of changes that can be systematically explained by the extended evaluation schema. Finally, we...
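By way of illustration only, and not as a rendering of the RBOV/ETA evaluation schema described above, the following Python sketch shows the most elementary comparison on which any such version tracking builds: partitioning the class inventories of two ontology releases into retained, added, and removed terms. The version contents and labels below are hypothetical toy data, not extracted from the actual BFO releases.

```python
# Illustrative sketch only: a toy comparison of class inventories between two
# ontology versions, far simpler than the RBOV/ETA-based change tracking
# described in the abstract. Identifiers and labels are placeholders.

OLD_VERSION = {  # e.g. classes listed for an earlier release (hypothetical)
    "bfo:0000001": "entity",
    "bfo:0000002": "continuant",
    "bfo:0000099": "example class later removed",
}

NEW_VERSION = {  # e.g. classes listed for a later release (hypothetical)
    "bfo:0000001": "entity",
    "bfo:0000002": "continuant",
    "bfo:0000142": "example class newly added",
}

def diff_class_inventories(old: dict, new: dict) -> dict:
    """Partition class identifiers into retained, added, and removed sets."""
    old_ids, new_ids = set(old), set(new)
    return {
        "retained": sorted(old_ids & new_ids),
        "added": sorted(new_ids - old_ids),
        "removed": sorted(old_ids - new_ids),
    }

if __name__ == "__main__":
    for change_type, ids in diff_class_inventories(OLD_VERSION, NEW_VERSION).items():
        print(change_type, ids)
```

A realism-based audit would go further, asking for each change whether it reflects a change in the underlying reality, in our knowledge of it, or merely in the ontology's encoding; the sketch records only the raw additions and removals.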
... But of course, this concedes that thinking cannot be simply symbol manipulation. (129 ... mental competence leave off? Harnad believes that symbolic functions must be grounded in robotic ... Many responses to the Chinese Room argument have noted that, as with Leibniz' Mill, the ...
Despite a large and multifaceted effort to understand the vast landscape of phenotypic data, their current form inhibits productive data analysis. The lack of a community-wide, consensus-based, human- and machine-interpretable language for describing phenotypes and their genomic and environmental contexts is perhaps the most pressing scientific bottleneck to integration across many key fields in biology, including genomics, systems biology, development, medicine, evolution, ecology, and systematics. Here we survey the current phenomics landscape, including data resources and handling, and the progress that has been made to accurately capture relevant data descriptions for phenotypes. We present an example of the kind of integration across domains that computable phenotypes would enable, and we call upon the broader biology community, publishers, and relevant funding agencies to support efforts to surmount…
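As a rough illustration of what a machine-interpretable phenotype description could look like, the sketch below uses the widely adopted entity-quality convention, pairing an anatomical entity term with a quality term from shared ontologies. This is not a proposal taken from the paper, and the identifiers shown are placeholders rather than curated annotations.

```python
# Illustrative sketch only: a computable phenotype as an entity-quality pair,
# each element drawn from a community ontology. IDs below are placeholders.

from dataclasses import dataclass

@dataclass(frozen=True)
class ComputablePhenotype:
    entity_id: str   # e.g. a term from an anatomy ontology such as UBERON (placeholder)
    quality_id: str  # e.g. a term from a quality ontology such as PATO (placeholder)
    free_text: str   # the original human-readable description

# Hypothetical annotation of the free-text phenotype "elongated forelimb".
example = ComputablePhenotype(
    entity_id="UBERON:XXXXXXX",   # placeholder ID standing in for "forelimb"
    quality_id="PATO:XXXXXXX",    # placeholder ID standing in for "increased length"
    free_text="elongated forelimb",
)

print(example)
```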
The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim:
Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system.
Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer.
In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence from mathematics, physics, computer science, philosophy, linguistics, and biology, setting up their book around three central questions: What are the essential marks of human intelligence? What is it that researchers try to do when they attempt to achieve "artificial intelligence" (AI)? And why, after more than 50 years, are our most common interactions with AI, for example with our bank’s computers, still so unsatisfactory?