Marcin Miłkowski
  • Instytut Filozofii i Socjologii
    ul. Nowy Świat 72
    00-330 Warszawa
    POLAND
In this book, Marcin Miłkowski argues that the mind can be explained computationally because it is itself computational—whether it engages in mental arithmetic, parses natural language, or processes the auditory signals that allow us to experience music. Defending the computational explanation against objections to it—from John Searle and Hilary Putnam in particular—Miłkowski writes that computationalism is here to stay but is not what many have taken it to be. It does not, for example, rely on a Cartesian gulf between software and hardware, or mind and brain. Miłkowski's mechanistic construal of computation allows him to show that no purely computational explanation of a physical process will ever be complete. Computationalism is only plausible, he argues, if you also accept explanatory pluralism.

Miłkowski sketches a mechanistic theory of implementation of computation against a background of extant conceptions, describing four dissimilar computational models of cognition. He reviews other philosophical accounts of implementation and computational explanation and defends a notion of representation that is compatible with his mechanistic account and adequate vis-à-vis the four models discussed earlier. Instead of arguing that there is no computation without representation, he inverts the slogan and shows that there is no representation without computation—but explains that representation goes beyond purely computational considerations. Miłkowski's arguments succeed in vindicating computational explanation in a novel way by relying on the mechanistic theory of science and the interventionist theory of causation.
This white paper is part of a series that promotes knowledge about language technology and its potential. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ. The required actions depend on many factors, such as the complexity of a given language and the size of its community.

META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies in this white paper series. The analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are tremendous deficits in technology support and significant research gaps for each language. The detailed expert analysis and assessment of the current situation provided here will help maximise the impact of additional research.
Interview by Carrie Figdor for New Books in Philosophy
Naturalism is currently the most vibrantly developing approach to philosophy, with naturalised methodologies being applied across all the philosophical disciplines. One of the areas naturalism has been focussing upon is the mind, traditionally viewed as a topic hard to reconcile with the naturalistic worldview. A number of questions have been pursued in this context. What is the place of the mind in the world? How should we study the mind as a natural phenomenon? What is the significance of cognitive science research for philosophical debates?

In this book, philosophical questions about the mind are asked in the context of recent developments in cognitive science, evolutionary theory, psychology, and the project of naturalisation. Much of the focus is upon what we have learned by studying natural mental mechanisms as well as designing artificial ones. In the case of natural mental mechanisms, this includes consideration of such issues as the significance of deficits in these mechanisms for psychiatry. The significance of the evolutionary context for mental mechanisms as well as questions regarding rationality and wisdom is also explored. Mechanistic and functional models of the mind are used to throw new light on discussions regarding issues of explanation, reduction and the realisation of mental phenomena. Finally, naturalistic approaches are used to look anew at such traditional philosophical issues as the correspondence of mind to world and presuppositions of scientific research.
The contributors to this volume engage with issues of normativity within naturalised philosophy. The issues are critical to naturalism as most traditional notions in philosophy, such as knowledge, justification or representation, are said to involve normativity. Some of the contributors pursue the question of the correct place of normativity within a naturalised ontology, with emergentist and eliminativist answers offered on neighbouring pages. Others seek to justify particular norms within a naturalised framework, the more surprising ones including naturalist takes on the a priori and intuitions. Finally, yet others examine concrete examples of the application of norms within particular epistemic endeavours, such as psychopathology and design. The overall picture is that of an intimate engagement with issues of normativity on the part of naturalist philosophers – questioning some of the fundamentals at the same time as they try to work out many of the details.
PRZEWODNIK PO FILOZOFII UMYSŁU (A Guide to the Philosophy of Mind) is a collection of articles presenting the main controversies concerning the nature of the mind. They offer contemporary treatments of the classic problems of the philosophy of mind as well as the newest problems arising at the intersection of philosophy and various branches of the science of mind and cognitive processes. A characteristic feature of most of the contributions, and at the same time a sign of our times, is an interdisciplinary approach to the problems of the philosophy of mind: an approach that draws on the results of the special sciences, deepened by their strictly philosophical dimension.
The anthology Analityczna ontologia umysłu. Najnowsze kontrowersje (Analytic Ontology of Mind: The Latest Controversies) consists of 16 articles written by leading representatives of contemporary analytic metaphysics of mind (including D. Chalmers, D. Dennett, D. Davidson, J. Fodor, J. Kim, D. Lewis, T. Nagel, H. Putnam, J. Searle, and R. Stalnaker). The texts collected in the anthology are given a fairly unified character by the issue of intertheoretic and interlevel relations (reduction, emergence, supervenience, multiple realization), of which the problem of the nature of psychophysical relations is a special case. The authors seek new answers to the fundamental questions of the metaphysics of mind. What is the mind and how does it exist? What relation holds between the mind and the brain, the body, and the environment? Do mental processes reduce to neurobiological processes (and if so, or if not, in what sense of reduction)? Are minds multiply realizable programs implemented in a physical substrate? Do mental properties and states, such as consciousness and mental content, enjoy relative autonomy with respect to the lower-order processes on which they are based? Is mental content irreducibly relational, or does it reduce to the internal states of a cognitive system? What does psychophysical emergence consist in, and what are its varieties? Are there versions of psychophysical dualism compatible with the current state of scientific knowledge about mental and cognitive processes? What are the methodological limitations of analytic metaphysics of mind, and how can they be overcome?

These and other issues are taken up by the authors of the papers collected in this anthology. The whole is preceded by an extensive introduction (by Marcin Miłkowski and Robert Poczobut) to the problems and methods of analytic metaphysics of mind.
This paper argues that the extended mind approach to cognition can be distinguished from its alternatives, such as embedded cognition and distributed cognition, not only in terms of metaphysics, but also in terms of epistemology. In other words, it cannot be understood as a mere verbal redefinition of cognitive processing. This is because the extended mind approach differs in its theoretical virtues from competing approaches to cognition. The extended mind approach is thus evaluated in terms of its theoretical virtues, both those essential to empirical adequacy and those that are ideal desiderata for scientific theories. While the extended mind approach may be similar to other approaches in internal consistency and empirical adequacy, it may be more problematic in terms of its generality, simplicity, and unificatory properties, owing to the cognitive bloat and motley crew objections.
This paper presents an argument for realism about mechanisms, contents, and vehicles of mental representation at both the personal and subpersonal levels, and showcases its role in instrumental rationality and proper cognitive functioning. By demonstrating how misrepresentation is necessary for learning from mistakes and for explaining certain failures of action, we argue that fallible rational agents must have mental representations with causally relevant vehicles of content. Our argument contributes to ongoing discussions in the philosophy of mind and cognitive science by challenging anti-realist views about the nature of mental representation, and by highlighting the importance of understanding how different agents can misrepresent in pursuit of their goals. While there are potential rebuttals to our claim, our opponents must explain how agents can be rational without having mental representations, because mental representation is grounded in rationality.
In light of the recent credibility crisis in psychology, this paper argues for a greater emphasis on theorizing in scientific research. Although reliable experimental evidence, preregistration, methodological rigor, and new computational frameworks for modeling are important, scientific progress also relies on properly functioning theories. However, the current understanding of the role of theorizing in psychology is lacking, which may lead to future crises. Theories should not be viewed as mere speculations or simple inductive generalizations. To address this issue, the author introduces a framework called "cognitive metascience," which studies the processes and results of evaluating scientific practice. This study should proceed both qualitatively, as in traditional science and technology studies and cognitive science, and quantitatively, by analyzing scientific discourse using language technology. By analyzing theories as cognitive artifacts that support cognitive tasks, this paper aims to shed more light on their nature. This perspective reveals that multiple distinct theories serve entirely different roles, and studying these roles, along with their epistemic vices and virtues, can provide insight into how theorizing should proceed. The author urges a change in research culture to appreciate the variety of distinct theories and to systematically advance scientific progress.
Three special issues of the journal Entropy have been dedicated to the topic of “Information-Processing and Embodied, Embedded, Enactive Cognition”. They addressed morphological computing, cognitive agency, and the evolution of cognition. The contributions show the diversity of views present in the research community on the topic of computation and its relation to cognition. This paper is an attempt to elucidate current debates on computation that are central to cognitive science. It is written in the form of a dialog between two authors representing opposed positions on what computation is and could be, and how it can be related to cognition. Given the different backgrounds of the two researchers, which span physics, philosophy of computing and information, cognitive science, and philosophy, we found discussion in the form of a Socratic dialogue appropriate for this multidisciplinary, cross-disciplinary conceptual analysis. We proceed as follows. First, the proponent (GDC) introduces the info-computational framework as a naturalistic model of embodied, embedded, and enacted cognition. Next, objections are raised by the critic (MM) from the point of view of the new mechanistic approach to explanation. Subsequently, the proponent and the critic provide their replies. The conclusion is that there is a fundamental role for computation, understood as information processing, in the understanding of embodied cognition.
One of the critical issues in the philosophy of science is to understand scientific knowledge. This paper proposes a novel approach to the study of reflection on science, called "cognitive metascience". In particular, it offers a new understanding of scientific knowledge as constituted by various kinds of scientific representations, framed as cognitive artifacts. It introduces a novel functional taxonomy of cognitive artifacts prevalent in scientific practice, covering a huge diversity of their formats, vehicles, and functions. As a consequence, toolboxes, conceptual frameworks, theories, models, and individual hypotheses can be understood as artifacts supporting our cognitive performance. It is also shown that by empirically studying how artifacts function, we may discover hitherto undiscussed virtues and vices of these scientific representations. This paper relies on the use of language technology to analyze scientific discourse empirically, which allows us to uncover the metascientific views of researchers. This, in turn, can become part of normative considerations concerning virtues and vices of cognitive artifacts.
The predictive processing (PP) account of action, cognition, and perception is one of the most influential approaches to unifying research in cognitive science. However, its promises of grand unification will remain unfulfilled unless the account becomes theoretically robust. In this paper, we focus on the empirical commitments of PP, since they are necessary both for its theoretical status to be established and for explanations of individual phenomena to be falsifiable. First, we argue that PP is a varied research tradition, which may employ various kinds of scientific representations (from theories to frameworks and toolboxes), differing in the scope of empirical commitments they entail. Two major perspectives on PP qua cognitive theory may then be distinguished: generalized vs. hierarchical. The former fails to provide empirical detail, while the latter constrains possible physical implementations. However, we show that even hierarchical PP is insufficiently restrictive to disallow incorrect models and may be flexibly adjusted to explain any neurocognitive phenomenon, including non-existent or impossible ones. This renders PP a universal modeling tool with an unrestricted number of degrees of freedom. Therefore, in contrast with the declarations of its proponents, it should not be understood as a unifying theoretical perspective, but as a computational framework, possibly informing further theory development in cognitive science.
Alan Turing’s influence on subsequent research in artificial intelligence is undeniable. His proposed test for intelligence remains influential. In this paper, I propose to analyze his conception of intelligence by relying on traditional close reading and on language technology. The Turing test is interpreted as an instance of conceptual engineering that rejects the role of previous linguistic usage and appeals to intuition pumps instead. Even though many conceive of his proposal as a prime case of operationalism, it is more plausibly viewed as a stepping stone toward a future theoretical construal of intelligence in mechanical terms. To complete this picture, his own conceptual network is analyzed through the lens of distributional semantics over the corpus of his written work. As it turns out, Turing’s conceptual engineering of the notion of intelligence is indeed quite similar to providing a precising definition with the aim of revising the usage of the concept. However, that is not its ultimate aim: Turing is after a rich theoretical understanding of thinking in mechanical, i.e., computational, terms.
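For readers unfamiliar with the method mentioned above, the following toy sketch illustrates the general idea behind distributional semantics; it is not the author's pipeline, and the corpus, window size, and all names are invented for illustration. Word vectors are built from co-occurrence counts, and concepts are compared by cosine similarity.

    # Illustrative sketch of distributional semantics (assumptions: toy corpus,
    # window size of 2, raw co-occurrence counts instead of a trained model).
    from collections import Counter, defaultdict
    from math import sqrt

    corpus = [
        "machines can think if thinking is computation",
        "intelligence is tested by conversation with machines",
        "a computer can imitate a human in conversation",
    ]
    window = 2  # co-occurrence window; an arbitrary choice

    # Build co-occurrence vectors for each word.
    vectors = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for i, word in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if i != j:
                    vectors[word][tokens[j]] += 1

    def cosine(u: Counter, v: Counter) -> float:
        dot = sum(u[k] * v[k] for k in u)
        norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    # Words with similar distributional profiles receive higher similarity scores.
    print(cosine(vectors["machines"], vectors["computer"]))

In the paper itself, such vectors would be derived from the corpus of Turing's writings rather than a toy corpus.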
The debate between the defenders of explanatory unification and explanatory pluralism has been ongoing since the beginning of cognitive science and is one of the central themes of its philosophy. Does cognitive science need a grand unifying theory? Should explanatory pluralism be embraced instead? Or maybe local integrative efforts are needed? What are the advantages of explanatory unification as compared to the benefits of explanatory pluralism? These questions, among others, are addressed in this special issue of Synthese. In the introductory paper, we discuss the background of these questions, distinguishing integrative theorizing from building unified theories. On the one hand, integrative efforts involve collaboration between various disciplines, fields, approaches, or theories. These efforts could even be quite temporary, without establishing any long-term institutionalized fields or disciplines, but could also contribute to developing new interfield theories. On the other hand, unification can rely on developing complete theories of mechanisms and representations underlying all cognition, as in Newell’s “unified theories of cognition”, or may appeal to grand principles, as in predictive coding. Here, we also show that unification in contemporary cognitive science goes beyond reductive unity, and may involve various forms of joint effort and division of explanatory labor. This conclusion is one of the themes present in the contributions constituting the special issue.
The focus of this special issue of Theory & Psychology is on explanatory mechanisms in psychology, especially on problems of particular prominence for psychological science, such as theoretical integration and unification. Proponents of the framework of mechanistic explanation claim, in short, that satisfactory explanations in psychology and related fields are causal. They stress the importance of explaining phenomena by describing the mechanisms that are responsible for them, in particular by elucidating how the organization of component parts and operations in mechanisms gives rise to phenomena in certain conditions. We hope for cross-pollination between philosophical approaches to explanation and experimental psychology, which could offer methodological guidance, in particular where mechanism discovery and theoretical integration are at issue. Contributions in this issue pertain to theoretical integration and unification of psychology as well as the growing importance of causal mechanisms.
A novel account of semantic information is proposed. The gist is that structural correspondence, analyzed in terms of similarity, underlies an important kind of semantic information. In contrast to extant accounts of semantic information, it does not rely on correlation, covariation, causation, natural laws, or logical inference. Instead, it relies on structural similarity, defined in terms of correspondence between classifications of tokens into types. This account elucidates many existing uses of the notion of information, for example, in the context of scientific models and structural representations in cognitive science. It is poised to open a new research program concerned with various kinds of semantic information, its functions, and its measurement.
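As a rough illustration of the core idea (not drawn from the paper), structural correspondence can be thought of as a relation-preserving mapping between two systems of classified tokens. The minimal Python sketch below, with entirely hypothetical names and data, checks whether such a mapping holds.

    # A minimal sketch, not taken from the paper, of structural correspondence
    # as a relation-preserving mapping between two systems of classified tokens.
    # All data and names below are hypothetical illustrations.

    def preserves_structure(mapping: dict, source_relation: set, target_relation: set) -> bool:
        """True if every related pair in the source maps onto a related pair in the target."""
        return all((mapping[a], mapping[b]) in target_relation for (a, b) in source_relation)

    # A toy "cognitive map": represented adjacency of landmarks vs. actual adjacency.
    represented = {("home", "park"), ("park", "lake")}
    actual = {("H", "P"), ("P", "L"), ("L", "H")}
    mapping = {"home": "H", "park": "P", "lake": "L"}

    print(preserves_structure(mapping, represented, actual))  # True: an exploitable correspondence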
In his recent book, Daniel Dennett defends a novel account of semantic information in terms of design worth getting (Dennett, 2017). While this is an interesting proposal in itself, my purpose in this commentary is to challenge several of Dennett's claims. First, he argues that semantic information can be transferred without encoding and storing it. Second, this lack of encoding is what makes semantic information unmeasurable. However, the argument for both these claims, presented by Dennett as an intuition pump, is invalid.
Predictive processing (PP) has been repeatedly presented as a unificatory account of perception, action, and cognition. In this paper, we argue that this is premature: as a unifying theory, PP fails to deliver general, simple, homogeneous, and systematic explanations. By examining its current trajectory of development, we conclude that PP remains only loosely connected both to its computational framework and to its hypothetical biological underpinnings, which makes its fundamentals unclear. Instead of offering explanations that refer to the same set of principles, we observe systematic equivocations in PP-based models, or outright contradictions with its avowed principles. To make matters worse, PP-based models are seldom empirically validated, and they are frequently offered as mere just-so stories. The large number of PP-based models is thus not evidence of theoretical progress in unifying perception, action, and cognition. On the contrary, we maintain that the gap between the theory and its biological and computational bases contributes to the arrested development of PP as a unificatory theory. Thus, we urge the defenders of PP to focus on its critical problems instead of offering mere re-descriptions of known phenomena, and to validate their models against possible alternative explanations that stem from different theoretical assumptions. Otherwise, PP will ultimately fail as a unified theory of cognition.
In this paper, we defend a novel, multidimensional account of representational unification, which we distinguish from integration. The dimensions of unity are simplicity, generality and scope, non-monstrosity, and systematization. In our account, unification is a graded property. The account is used to investigate the issue of how research traditions contribute to representational unification, focusing on embodied cognition in cognitive science. Embodied cognition contributes to unification even if it fails to offer a grand unification of cognitive science. The study of this failure shows that unification, contrary to what defenders of mechanistic explanation claim, is an important mechanistic virtue of research traditions.
In this paper, I argue that embodied cognition, like many other research traditions in cognitive science, offers mostly fallible research heuristics rather than grand principles true of all cognitive processing. To illustrate this claim, I discuss Aizawa’s rebuttal of embodied and enactive accounts of vision. While Aizawa’s argument is sound against a strong reading of the enactive account, it does not undermine the way embodied cognition proceeds, because the claim he attacks is one of fallible heuristics. These heuristics may be helpful in developing models of cognition in an interdisciplinary fashion. I briefly discuss the issue of whether this fallibility actually makes embodied cognition vulnerable to charges of being untestable or non-scientific. I also stress that the historical approach to this research tradition suggests that embodied cognition is not poised to become a grand unified theory of cognition.
In this paper, we focus on the development of geometric cognition. We argue that to understand how geometric cognition has been constituted, one must appreciate not only individual cognitive factors, such as phylogenetically ancient and ontogenetically early core cognitive systems, but also the social history of the spread and use of cognitive artifacts. In particular, we show that the development of Greek mathematics, enshrined in Euclid's Elements, was driven by the use of two tightly intertwined cognitive artifacts: the use of lettered diagrams, and the creation of linguistic formulae (namely, non-compositional fixed strings of words used repeatedly within and across authors). Together, these artifacts formed the professional language of geometry. In this respect, the case of Greek geometry clearly shows that explanations of geometric reasoning have to go beyond the confines of methodological individualism to account for how the distributed practice of artifact use has stabilized over time. This practice, as we suggest, has also contributed heavily to the understanding of what mathematical proof is; classically, it has been assumed that proofs are not merely deductively correct but also remain invariant over various individuals sharing the same cognitive practice. Cognitive artifacts in Greek geometry constrained the repertoire of admissible inferential operations, which made these proofs inter-subjectively testable and compelling. By focusing on the cognitive operations on artifacts, we also stress that the mental mechanisms that contribute to these operations are still poorly understood, in contrast to those mechanisms which drive symbolic logical inference.
Is the mathematical function being computed by a given physical system determined by the system’s dynamics? This question is at the heart of the indeterminacy of computation phenomenon (Fresco et al. [unpublished]). A paradigmatic example is a conventional electrical AND-gate that is often said to compute conjunction, but it can just as well be used to compute disjunction. Despite the pervasiveness of this phenomenon in physical computational systems, it has been discussed in the philosophical literature only indirectly, mostly with reference to the debate over realism about physical computation and computationalism. A welcome exception is Dewhurst’s ([2018]) recent analysis of computational individuation under the mechanistic framework. He rejects the idea of appealing to semantic properties for determining the computational identity of a physical system. But Dewhurst seems to be too quick to pay the price of giving up the notion of computational equivalence. We aim to show that the mechanist need not pay this price. The mechanistic framework can, in principle, preserve the idea of computational equivalence even between two different enough kinds of physical systems, say, electrical and hydraulic ones.
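The AND-gate example can be made concrete with a small sketch (an illustration of the standard point about indeterminacy, not code from the paper): the same physical input-output profile counts as computing conjunction under one labelling of voltage levels and disjunction under the dual labelling. All names below are illustrative.

    # Illustrative sketch only: one physical input-output profile over voltage
    # levels, read as AND under the labelling HIGH=1, LOW=0 and as OR under the
    # dual labelling HIGH=0, LOW=1.

    def physical_gate(in1: str, in2: str) -> str:
        # The device's physical behaviour: output HIGH only if both inputs are HIGH.
        return "HIGH" if in1 == "HIGH" and in2 == "HIGH" else "LOW"

    labellings = {
        "HIGH=1, LOW=0": {"HIGH": 1, "LOW": 0},  # reads the device as an AND-gate
        "HIGH=0, LOW=1": {"HIGH": 0, "LOW": 1},  # reads the same device as an OR-gate
    }

    for name, code in labellings.items():
        decode = {bit: level for level, bit in code.items()}
        table = {(a, b): code[physical_gate(decode[a], decode[b])]
                 for a in (0, 1) for b in (0, 1)}
        print(name, table)
    # The first labelling yields the truth table of conjunction,
    # the second that of disjunction, for the very same device.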
The purpose of this paper is to argue against the claim that morphological computation is substantially different from other kinds of physical computation. I show that some (but not all) purported cases of morphological computation do not count as specifically computational, and that those that do are solely physical computational systems. These latter cases are not, however, specific enough: all computational systems, not only morphological ones, may (and sometimes should) be studied in various ways, including their energy efficiency, cost, reliability, and durability. Second, I critically analyze the notion of "offloading" computation to the morphology of an agent or robot, by showing that, literally, computation is sometimes not offloaded but simply avoided. Third, I point out that while the morphology of any agent is indicative of the environment that it is adapted to, or informative about that environment, it does not follow that every agent has access to its morphology as the model of its environment.
In this paper, we argue that several recent ‘wide’ perspectives on cognition (embodied, embedded, extended, enactive, and distributed) are only partially relevant to the study of cognition. While these wide accounts override traditional methodological individualism, the study of cognition has already progressed beyond these proposed perspectives toward building integrated explanations of the mechanisms involved, including not only internal submechanisms but also interactions with others, groups, cognitive artifacts, and their environment. Wide perspectives are essentially research heuristics for building mechanistic explanations. The claim is substantiated with reference to recent developments in the study of “mindreading” and debates on emotions. We argue that the current practice in cognitive (neuro)science has undergone, in effect, a silent mechanistic revolution, and has turned from initial binary oppositions and abstract proposals toward the integration of wide perspectives with the rest of the cognitive (neuro)sciences.
Replicability and reproducibility of computational models have been somewhat understudied by the replication movement. In this paper, we draw on methodological studies into the replicability of psychological experiments and on the mechanistic account of explanation to analyze the functions of model replications and model reproductions in computational neuroscience. We contend that model replicability, or independent researchers' ability to obtain the same output using original code and data, and model reproducibility, or independent researchers' ability to recreate a model without original code, serve different functions and fail for different reasons. This means that measures designed to improve model replicability may not enhance (and, in some cases, may actually damage) model reproducibility. We claim that although both are undesirable, low model reproducibility poses more of a threat to long-term scientific progress than low model replicability. In our opinion, low model reproducibility stems mostly from authors' omitting to provide crucial information in scientific papers, and we stress that sharing all computer code and data is not a solution. Reports of computational studies should remain selective and include all and only relevant bits of code.
The purpose of this chapter is to sketch the history of mechanistic models of the mental, as related to the technological project of trying to build mechanical minds, and discuss the uses of such models in psychological and cognitive explanations. Initially, they were supposed to show that mechanisms can in principle explain the mental. Today, efforts in cognitive science are focused on building more and more biologically and physically plausible models of mental mechanisms.
In this paper, computational explanations of episodes of hallucination are analyzed from the perspective of the mechanistic account of explanation. To make the discussion more specific, I focus on visual hallucinations occurring in people with Charles Bonnet Syndrome. Even if computational explanations, as I argue, need not be representational, and representations are not reducible merely to computational phenomena, there are numerous features of representations that can be explained computationally. To substantiate this claim, I briefly introduce a recent computational model of this hallucination, which relies on generative models in the brain, and argue that the model is a prime example of a representational and computational explanation. I conclude by arguing that computationalism is a natural ally of explanatory pluralism.
In this paper, the author reviews the typical objections against the claim that brains are computers, or, to be more precise, information-processing mechanisms. By showing that practically all the popular objections are based on uncharitable (or simply incorrect) interpretations of the claim, he argues that the claim is likely to be true, relevant to contemporary cognitive (neuro)science, and non-trivial.
In this paper, I argue that computationalism is a progressive research tradition. Its metaphysical assumptions are that nervous systems are computational, and that information processing is necessary for cognition to occur. First, the primary reasons why information processing should explain cognition are reviewed. Then I argue that early formulations of these reasons are outdated. However, by relying on the mechanistic account of physical computation, they can be recast in a compelling way. Next, I contrast two computational models of working memory to show how modeling has progressed over the years. The methodological assumptions of new modeling work are best understood in the mechanistic framework, which is evidenced by the way in which models are empirically validated. Moreover, the methodological and theoretical progress in computational neuroscience vindicates the new mechanistic approach to explanation, which, at the same time, justifies the best practices of computational modeling. Overall, computational modeling is deservedly successful in cognitive (neuro)science. Its successes are related to deep conceptual connections between cognition and computation. Computationalism is not only here to stay, it becomes stronger every year.
Is there a field of social intelligence? Many different disciplines approach the subject, and it may seem only natural to suppose that different fields of study aim at explaining different phenomena; in other words, that there is no special field of study of social intelligence. In this paper, I argue for the opposite claim. Namely, there is a way to integrate research on social intelligence, as long as one accepts the mechanistic account of explanation. Mechanistic integration of different explanations, however, comes at a cost: mechanism requires explanatory models to be fairly complete and realistic, and this does not seem to be the case for many models concerning social intelligence, especially models of economic behavior. Such models must either be made more realistic, or they will not count as contributing to the same field. I stress that the focus on integration does not lead to ruthless reductionism; on the contrary, mechanistic explanations are best understood as explanatorily pluralistic.
In this paper, I show how semantic factors constrain the understanding of the computational phenomena to be explained so that they help build better mechanistic models. In particular, understanding what cognitive systems may refer to is important in building better models of cognitive processes. For that purpose, a recent study of some phenomena in rats that are capable of ‘entertaining’ future paths (Pfeiffer and Foster 2013) is analyzed. The case shows that the mechanistic account of physical computation may be complemented with semantic considerations, and in many cases, it actually should.
In this paper, I review the objections against the claim that brains are computers, or, to be precise, information-processing mechanisms. By showing that practically all the popular objections are based on uncharitable (or simply incorrect) interpretations of the claim, I argue that the claim is likely to be true, relevant to contemporary cognitive (neuro)science, and non-trivial. The computational theory of mind, or computationalism, has been fruitful in cognitive research. The main tenet of the computational theory of mind is that the brain is a kind of information-processing mechanism, and that information-processing is necessary for cognition; it is non-trivial and is generally accepted in cognitive science. The positive view will not be developed here, in particular the account of physical computation, because it has already been elucidated in book-length accounts (Fresco, 2014; Miłkowski, 2013; Piccinini, 2015). Instead, a review of objections is offered here, as no comprehensive survey is available. The survey suggests that the majority of objections fail just because they make computationalism a straw man. Some of them, however, have shown that stronger versions of the computational theory of mind are untenable, as well. Historically, they have helped to shape the theory and methodology of computational modeling. In particular, a number of objections show that cognitive systems are not only computers, or that computation is not the sole condition of cognition; no objection, however, establishes that there might be cognition without computation.
Recent work on the skin-brain thesis (de Wiljes et al. 2015; F. A. Keijzer 2015; 2013) suggests the possibility that there is an empirical proof that empiricism is false. It implies that early animals need no traditional sensory receptors to be engaged in cognitive activity. The neural structure required to coordinate extensive sheets of contractile tissue for motility provides the starting point for a new multicellular organized form of sensing. Moving a body by muscle contraction provides the basis for a multicellular organization that is sensitive to external surface structure at the scale of the animal body. In other words, evolutionarily speaking, the nervous system evolved for action, not for receiving sensory input; sensory input is not required for minimal cognition, only action is. The whole body of an organism, in particular its highly specific animal sensorimotor organization, reflects the bodily and environmental spatiotemporal structure. The skin-brain thesis suggests that, in contrast to empiricism, which claims that cognition is constituted by sensory systems, cognition is constituted by action-oriented feedback mechanisms. Instead of positing the reflex arc as the elementary building block of nervous systems, it suggests that endogenous motor activity is the crucial part of a cognitive system.
In this paper, an account of theoretical integration in cognitive (neuro)science from the mechanistic perspective is defended. It is argued that mechanistic patterns of integration can be better understood in terms of constraints on representations of mechanisms, not just on the space of possible mechanisms, as previous accounts of integration had it. This way, integration can be analyzed in more detail with the help of a constraint-satisfaction account of coherence between scientific representations. In particular, the account has the resources to talk of idealizations and research heuristics employed by researchers to combine separate results and theoretical frameworks. The account is subsequently applied to an example of successful integration in research on the hippocampus and memory, and to a failure of integration in research on mirror neurons as purportedly explanatory of sexual orientation.
In this paper, I review the objections against the claim that brains are computers, or, to be precise, information-processing mechanisms. By showing that practically all the popular objections are either based on uncharitable interpretations of the claim or are simply wrong, I argue that the claim is likely to be true, relevant to contemporary cognitive (neuro)science, and non-trivial.
In this paper, the role of the environment and the physical embodiment of computational systems for explanatory purposes will be analyzed. In particular, the focus will be on cognitive computational systems, understood in terms of mechanisms that manipulate semantic information. It will be argued that the role of the environment has long been appreciated, in particular in the work of Herbert A. Simon, which has inspired the mechanistic view on explanation. From Simon's perspective, the embodied view on cognition seems natural, but it is nowhere near as critical as its proponents suggest. The only point of difference between Simon and embodied cognition is the significance of body-based off-line cognition; however, it will be argued that it is notoriously over-appreciated in the current debate. On the new mechanistic view of explanation, even if it is critical to situate a mechanism in its environment and to study its physical composition, or realization, not all detail counts, and some bodily features of cognitive systems should be left out of explanations.
In this paper, I describe, from the mechanistic point of view, unification strategies of explanation in cognitive science, as distinct from integration strategies.
Herbert Simon long recognized the importance of environmental constraints for the study of human problem solving.  This is also clearly present in his account of bounded rationality. According to him, some complexity of behavior may be illusory if one accounts for the complexity of the environment (ant navigation in the 'Sciences of the Artificial'), and this is recently very heavily stressed by proponents of embodied and situated cognition.
At the same time, Simon and Newell seemed to idealize away the environment in their computational models of thinking. So the question is: why production systems, and not robots with production systems, as models of cognition? Why was the environment missing from their models of thought?
In this paper, I review Simon’s views on the robotic tortoise developed by W. Grey Walter, and the reasons Simon gives for his symbolic account of intelligence in his later papers. All in all, it seems that the role of the environment in Simon’s models is twofold. For one, environmental interventions help constrain the behavior of subjects, effectively limiting the degrees of freedom, which makes modeling feasible in the first place. These interventions are, however, not included in the models. For another, some parts of the environment are modeled symbolically (explicitly), on a par with the internal information processing, rather than via robotic devices.
This paper centers around the notion that internal, mental representations are grounded in structural similarity, i.e., that they are so-called S-representations. We show how S-representations may be causally relevant and argue that they are distinct from mere detectors. First, using the neomechanist theory of explanation and the interventionist account of causal relevance, we provide a precise interpretation of the claim that in S-representations, structural similarity serves as a "fuel of success", i.e., a relation that is exploitable for the representation-using system. Then, we discuss crucial differences between S-representations and indicators or detectors, showing that—contrary to claims made in the literature—there is an important theoretical distinction to be drawn between the two.
I argue that there are no plausible non-representational explanations of episodes of hallucination. To make the discussion more specific, I focus on visual hallucinations in Charles Bonnet Syndrome. I claim that the character of such hallucinatory experiences cannot be explained away non-representationally, for they cannot be taken as simple failures of cognizing or as failures of contact with external reality, such failures being the only genuinely non-representational explanations of hallucinations and cognitive errors in general. I briefly introduce a recent computational model of hallucination, which relies on generative models in the brain, and argue that the model is a prime example of a representational explanation referring to representational mechanisms. The notion of the representational mechanism is elucidated, and it is argued that hallucinations, and other kinds of representations, cannot be exorcised from the cognitive sciences.
In this paper, I focus on a problem related to teleological theories of content, namely: which notion of function makes content causally relevant? It has been claimed that some functional accounts of content make it causally irrelevant, or epiphenomenal; in which case, such notions of function could no longer act as the pillar of naturalized semantics. By looking closer at biological questions about behavior, I argue that past discussion has been oriented towards an ill-posed question. What I defend is a Very Boring Hypothesis: depending on the representational phenomenon and the explanatory question, different aspects might be important, and it is difficult to say a priori which ones these might be. There are multiple facets of biological functionality and causality relevant for explaining representational phenomena, and ignoring them will lead to unmotivated simplifications. In addition, accounting for different facets of functionality helps dispense with intuition-based specifications of cognitive phenomena.
Multiple realizability (MR) is traditionally conceived of as a feature of computational systems, and has been used to argue for the irreducibility of higher-level theories. I will show that there are several ways a computational system may be seen to display MR. These ways correspond to (at least) five ways one can conceive of the function of a physical computational system. However, they do not match common intuitions about MR. I show that MR is deeply interest-relative and, for this reason, difficult to pin down exactly. I claim that MR is of little importance for defending computationalism, and argue that computationalism should rather appeal to the organizational invariance or substrate neutrality of computation, which are much more intuitive notions but cannot support strong antireductionist arguments.
Explanations in cognitive science and computational neuroscience rely predominantly on computational modeling. Although the scientific practice is systematic, and there is little doubt about the empirical value of numerous models, the methodological account of computational explanation is not up-to-date. The current paper offers a systematic account of computational explanation in cognitive science and computational neuroscience within a mechanistic framework. The account is illustrated with a short case study of modeling of the mirror neuron system in terms of predictive coding.
In this paper, I argue that even if the Hard Problem of Content, as identified by Hutto and Myin, is important, it was already solved in naturalized semantics, and satisfactory solutions to the problem do not rely merely on the notion of information as covariance. I point out that Hutto and Myin have double standards for linguistic and mental representation, which leads to a peculiar inconsistency. Were they to apply the same standards to basic and linguistic minds, they would either have to embrace representationalism or turn to semantic nihilism, which is, as I argue, an unstable and unattractive position. Hence, I conclude, their book does not offer an alternative to representationalism. At the same time, it reminds us that representational talk in cognitive science cannot be taken for granted and that information is different from mental representation. Although this claim is not new, Hutto and Myin defend it forcefully and elegantly.
The claim defended in the paper is that the mechanistic account of explanation can easily embrace idealization in large-scale brain simulations, and that only causally relevant detail should be present in explanatory models. The claim is illustrated with two methodologically different models: (1) Blue Brain, used for particular simulations of the cortical column in hybrid models, and (2) Eliasmith’s SPAUN model, which is both biologically realistic and able to explain eight different tasks. By drawing on the mechanistic theory of computational explanation, I argue that large-scale simulations require that the explanandum phenomenon is identified; otherwise, the explanatory value of such explanations is difficult to establish, and testing the model empirically by comparing its behavior with the explanandum remains practically impossible. The completeness of the explanation, and hence the explanatory value of the explanatory model, is to be assessed vis-à-vis the explanandum phenomenon, which is not to be conflated with raw observational data and may be idealized. I argue that idealizations, which include building models of a single phenomenon displayed by multi-functional mechanisms, lumping together multiple factors in a single causal variable, simplifying the causal structure of the mechanisms, and multi-model integration, are indispensable for complex systems such as brains; otherwise, the model may be as complex as the explanandum phenomenon, which would make it prone to the so-called Bonini paradox. I conclude by enumerating dimensions of empirical validation of explanatory models according to the new mechanism, given in the form of a “checklist” for the modeler.
The purpose of this paper is to present a general mechanistic framework for analyzing causal representational claims, and offer a way to distinguish genuinely representational explanations from those that invoke representations for honorific purposes. It is usually agreed that rats are capable of navigation (even in complete darkness, and when immersed in a water maze) because they maintain a cognitive map of their environment. Exactly how and why their neural states give rise to mental representations is a matter of an ongoing debate. I will show that anticipatory mechanisms involved in rats’ evaluation of possible routes give rise to satisfaction conditions of contents, and this is why they are representationally relevant for explaining and predicting rats’ behavior. I argue that a naturalistic account of satisfaction conditions of contents answers the most important objections of antirepresentationalists.
Explanations in cognitive science rely predominantly on computational modeling. Though the scientific practice is systematic, and there is little doubt about the empirical value of numerous models, the methodological account of computational explanation is not up-to-date. The current paper offers a systematic account of computational explanation in cognitive science in a largely mechanistic framework. The account is illustrated with a short case study of modeling of the mirror neuron system in terms of predictive coding.
Artificial models of cognition serve different purposes, and their use determines the way they should be evaluated. There are also models that do not represent any particular biological agents, and there is controversy as to how they should be assessed. At the same time, modelers do evaluate such models as better or worse. There is also a widespread tendency to call for publicly available standards of replicability and benchmarking for such models. In this paper, I argue that proper evaluation of models does not depend on whether they target real biological agents or not; instead, the standards of evaluation depend on the use of models rather than on the reality of their targets. I discuss how models are validated depending on their use and argue that all-encompassing benchmarks for models may be well beyond reach.
In cognitive neuroscience – and in systems neuroscience – it is a common complaint that there aren’t many theories, or any theories for that matter at all. The Neural Engineering Framework (NEF) is one of the few approaches that has been defended as a general theory in this field. In this chapter, I deal with the question of whether the NEF is a unifying approach in the light of Philip Kitcher’s unification account of explanation. Elsewhere I have defended the view that the NEF is genuinely explanatory according to the principles of the mechanistic account of explanation; so the question is not whether the NEF is explanatory at all but rather whether it exhibits the properties of theories that are best characterized as involving theoretical unification.
Is there a field of social intelligence? Many different disciplines approach the subject, and it may seem only natural to suppose that different fields of study aim at explaining different phenomena; in other words, that there is no special field of study of social intelligence. In this paper, I argue for the opposite claim. Namely, there is a way to integrate research on social intelligence, as long as one accepts the mechanistic account of explanation. Mechanistic integration of different explanations, however, comes at a cost: mechanism requires explanatory models to be fairly complete and realistic, and this does not seem to be the case for many models concerning social intelligence, especially models of economic behavior. Such models must either be made more realistic, or they will not count as contributing to the same field. I stress that the focus on integration does not lead to ruthless reductionism; on the contrary, mechanistic explanations are best understood as explanatorily pluralistic.
In most accounts of realization of computational processes by physical mechanisms, it is presupposed that there is one-to-one correspondence between the causally active states of the physical process and the states of the computation. Yet such proposals either stipulate that only one model of computation is implemented, or they do not reflect upon the variety of models that could be implemented physically.

In this paper, I claim that mechanistic accounts of computation should allow for a broad variation of models of computation. In particular, some non-standard models should not be excluded a priori. The relationship between mathematical models of computation and mechanistically adequate models is studied in more detail.
In this article, after presenting the basic idea of causal accounts of implementation and the problems they are supposed to solve, I sketch the model of computation preferred by Chalmers and argue that it is too limited to do full justice to computational theories in cognitive science. I also argue that it does not suffice to replace Chalmers’ favorite model with a better abstract model of computation; it is necessary to acknowledge the causal structure of physical computers that is not accommodated by the models used in computability theory. Additionally, an alternative mechanistic proposal is outlined.
I discuss whether there are lessons for philosophical inquiry into the nature of simulation to be learnt from the practical methodology of reengineering. I will argue that reengineering serves a similar purpose to simulations in theoretical sciences such as computational neuroscience or neurorobotics, and that the procedures and heuristics of reengineering help to develop solutions to outstanding problems of simulation.
The standard objection against naturalised epistemology is that it cannot account for normativity in epistemology (Putnam 1982; Kim 1988). There are different ways to deal with it. One of the obvious ways is to say that the objection misses the point: it is not a bug; it is a feature, as there is nothing interesting in normative principles in epistemology. Normative epistemology deals with norms, but they are of no use in practice. They are far too general to be guiding principles of research, up to the point that they even seem vacuous (see Knowles 2003). In this chapter, my strategy will be different and more in the spirit of the founding father of naturalized epistemology, Quine, though not faithful to the letter. I focus on methodological prescriptions supplied by cognitive science in the re-engineering of cognitive architectures. Engineering norms based on mechanism design have not been treated as seriously as they should be in epistemology, which is why I will develop a sketch of a framework for researching them, starting from analysing cognitive science as engineering in section 3, then showing functional normativity in section 4, to eventually present functional engineering models of cognitive mechanisms as normative in section 5. Yet before showing the kind of engineering normativity specific to these prescriptions, it is worthwhile to review briefly the role of normative methodology and the levels of norm complexity in it, and to show how it follows Quine’s steps.


In the presentation, I will demonstrate how to use LanguageTool to automatically check for common mistakes in translation. Also, a short introduction to writing simple rules will be included.
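For orientation, the sketch below shows the kind of programmatic check the presentation deals with, using the LanguageTool Java API roughly as documented for later releases; package and class names (for example org.languagetool.JLanguageTool and AmericanEnglish) have shifted across versions, so treat this as an illustrative sketch rather than the exact code used in the talk.

    import java.util.List;

    import org.languagetool.JLanguageTool;
    import org.languagetool.language.AmericanEnglish;
    import org.languagetool.rules.RuleMatch;

    public class CheckExample {
        public static void main(String[] args) throws Exception {
            // Build a checker for one language; the XML pattern rules ship with the library.
            JLanguageTool langTool = new JLanguageTool(new AmericanEnglish());

            // Run all enabled rules against a sample sentence.
            List<RuleMatch> matches = langTool.check("This are an example sentence.");

            // Print each potential mistake with its character span, message, and suggestions.
            for (RuleMatch match : matches) {
                System.out.println(match.getFromPos() + "-" + match.getToPos() + ": "
                        + match.getMessage() + " Suggestions: " + match.getSuggestedReplacements());
            }
        }
    }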
In large computer-aided translation (CAT) projects, especially in software localization, one of the main problems is to maintain a consistent style in the translated text. To tackle this problem, translators have to follow different guidelines defined in style guides for different translation jobs. Yet, in the case of conflicting guidelines (for example, terminological ones) for various projects, it is very easy to make mistakes, and quite hard to find them, because they are neither obvious nor glaring errors. Automated translation quality assessment (QA) tools, on the other hand, are usually quite costly compared to other CAT tools and/or do not have any comprehensive natural-language processing features, and their use is not really beneficial for languages other than English. Because of that, the proofreading process is costly and time-consuming, or the translation quality is negatively impacted.
In this talk, I will present the translation QA features available in LanguageTool, an open-source proofreading tool (Miłkowski 2010). LanguageTool currently (as of version 1.2, released on January 2, 2011) supports 21 languages and is able to use the standard target-language rules to check for mistakes in the translated text, including false friends in translation, as well as specially designed translation QA rules. These rules may be specially crafted to conform to informal style guides and to include the most frequent mistakes found by human proofreaders (using the method specified in Miłkowski, forthcoming). I will show some examples of such bilingual rules. It is hoped that, thanks to XLIFF standard support, the proofreading tool will be easily introduced into the standard translation QA workflow of translation agencies and individual translators.
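As a rough illustration of the intended workflow, the sketch below runs target-language checks segment by segment over source/target pairs. The TranslationUnit record and the hard-coded segments are hypothetical stand-ins for whatever an XLIFF parser in a CAT tool would supply; only the plain JLanguageTool.check() call reflects the library's documented API, and the dedicated bilingual and false-friend rules mentioned above live in LanguageTool's own rule files, which are not shown here.

    import java.util.List;

    import org.languagetool.JLanguageTool;
    import org.languagetool.language.Polish;
    import org.languagetool.rules.RuleMatch;

    public class TranslationQaSketch {

        // Hypothetical container for one bilingual segment, e.g. parsed from an XLIFF file.
        record TranslationUnit(String id, String source, String target) {}

        public static void main(String[] args) throws Exception {
            // Checker configured for the target language of the translation project.
            JLanguageTool targetChecker = new JLanguageTool(new Polish());

            // Hypothetical segments; in practice they would come from the CAT tool.
            List<TranslationUnit> units = List.of(
                    new TranslationUnit("1", "Click the button.", "Kliknij przycisk."),
                    new TranslationUnit("2", "Save the file.", "Zapisz plik plik."));

            // Flag every rule match in the target text, keeping the source segment for context.
            for (TranslationUnit unit : units) {
                List<RuleMatch> matches = targetChecker.check(unit.target());
                for (RuleMatch match : matches) {
                    System.out.println("Segment " + unit.id() + " [" + unit.source() + "]: "
                            + match.getMessage());
                }
            }
        }
    }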

References
Miłkowski, Marcin. 2010. Developing an open-source, rule-based proofreading tool. Software: Practice and Experience 40, no. 7: 543-566.
Miłkowski, Marcin. Forthcoming. Automating rule generation for grammar checkers. In: S. Góźdź-Roszkowski, Proceedings of PALC 2009.
In many sciences, including cognitive science and biology, it is assumed that certain physical systems process information and effectively realize computation. For example, it is being claimed that DNA is being decoded in a manner that is best described as computational or that brains are analog computers. The skeptics, however, propose that the notion of computation is purely in the eye of the observer and computational properties cannot be held to be objective.

In this talk, I will discuss the criteria for realistic ascription of computational properties to physical systems. Computational ascriptions will be treated as a kind of abstract mathematical ascriptions, and I will show in what sense these ascriptions are not merely conventional but refer to natural kinds. Along with general criteria that apply to other abstract properties being ascribed in sciences, such as explanatory and predictive value and implementation of functional properties vs. instantiation, I will discuss specific problems of computational descriptions such as defining computation via the Turing-Church thesis, individuation of computational systems, mapping of causal chains in program states and the level of detail required at the computational level of description of the system. The proposed criteria will cover both analog and digital computation as kinds of information processing. As a result, the claims in biology about the nature of DNA information decoding will turn out to be empirical and falsifiable, and not decidable a priori in a philosopher's armchair.
An essay on Szymon Wróbel's book „Filozof i terytorium”, the first in-depth study of the philosophy of the Warsaw historians of ideas, written for „Przegląd Polityczny” no. 145/146 (2017), pp. 82-86.
A polemic with K. Posłajko's book.
The paper is a critical review of the book Gödel, Putnam, and Functionalism: A New Reading of Representation and Reality by Jeff Buechner, which is a defense of computational functionalism against arguments formulated by Putnam, Searle, Fodor, Lucas and others. Buechner, after having meticulously analyzed these arguments, concludes that all of them fail to show that computational functionalism is not a viable strategy to model the mind in cognitive science. As such, it is a defense of a mathematically informed version of computational functionalism. We discuss Buechner's strategy in quite a bit of detail and make some comments.
Miłkowski, Marcin. 2008. Manifest kognitywistycznego religioznawstwa (a review of Daniel Dennett, Odczarowanie). „Etyka” 41: 187–191.
The review appeared in „Etyka”:
Miłkowski, Marcin. 2009. Podstawy etyki komputerowej. Wojciech Bober: Powinność w świecie cyfrowym. „Etyka” 42: 171–174.
The article presents the interdisciplinary approach of Edwin Hutchins, analyzing his conception of distributed cognition as probably the most important and lasting contribution of anthropology to the repertoire of theoretical tools in cognitive science. At the same time, this conception resulted in one of the most interesting relationships between cognitive science and social sciences. These relationships are made possible by the assumptions of Hutchins' conception, which directly contribute to interdisciplinary collaboration. His account of distributed cognition has enormous potential, allowing the integration of research into cognitive and social processes. This is also because it breaks with methodological individualism.
Opponents of the classical research program of cognitive science constantly prophesy its decline and the coming of another cognitive revolution. One of these currents is classical enactivism. According to this variety of enactivism, a specific kind of biological self-organization, understood in terms of autopoiesis, is the key to understanding cognitive processes. In this article I put forward the following theses. First, classical enactivism is not a theory of cognitive processes but merely a research tradition with relatively underspecified assumptions. It should therefore be evaluated as a research tradition, not as a theory. Second, despite its relatively long history, it has had almost no influence on research in psychology and cognitive science, although it has undoubtedly attracted the attention of philosophers. Third, there is reason to think that this lack of influence is not merely a temporary weakness. Classical enactivism lacks the methods, theoretical assumptions, and experimental results that would lead to solving the problems of classical cognitive science or behaviorist psychology. Accordingly, I not only note that enactivism has not brought about a revolution, but also predict that it no longer will.
In this paper, I argue that the supposedly new theory of consciousness proposed recently by David Chalmers is very close to classical functionalism. Indeed, it treats some of the controversial assumptions of functionalism as naturally necessary. This is, however, very unfortunate, as they lead to numerous tensions in his view. In the first part, I analyze the functionalist theory of the independence of complex organizations from their material realization. Then, I sketch several functionalist theories of consciousness as a background for Chalmers' own theory. Pace Chalmers, some of them are theories of the qualities of experience as well. In the third part, I show that Chalmers, instead of rejecting the functionalist independence claims, retains them as “the principle of organizational invariance”. This, however, leads to the very problems that made functionalism a bad candidate for a theory of consciousness (at least according to Chalmers' own view). Lastly, I argue that he has to either view the hard problem of consciousness as a pseudo-problem or reject his own theory as insufficient, as it is a mere rebranding of classical computational functionalism and has no serious answer to the hard problem of consciousness.
In this paper, I argue against the use of the notion of multiple realization to defend a unified account of life, as proposed by Krzysztof Chodasewicz. I show that the notion of multiple realization is itself highly problematic but, most importantly, it cannot warrant the antireductionist claims traditionally associated with it. In particular, it is unable to block both traditional reduction and mechanistic causal reduction. To make matters worse, multiple realization is theoretically laden, which makes it very difficult to defend the claim that life is irreducible, because there may, at least in principle, be theoretical contexts in which it is construed in a fashion that would even require reduction to its molecular bases. I argue that the appeal to the notion of an abstract type (or universal) can, and should, replace appeals to multiply realized types.
Science in Poland has never been among the most generously funded, so the financial problems of universities are nothing new. The current state of affairs derives from the solutions adopted after 1945 and from the introduction of partial tuition fees following the political transformation of the Third Republic. The finances of public universities thus depend on the ministerial subsidy and on tuition, charged by universities to foreign-language, evening, and extramural students, but not to full-time students, who are guaranteed free studies by the constitution. The fixed ministerial subsidy is calculated according to a complicated formula; its amount is supposed to depend above all on the number of students and the qualifications of academic teachers (measured by academic degrees and, more recently, also by publications), but also on the amount of the subsidy in previous years. In addition, the ministry or government agencies may fund research projects conducted by universities. Private universities, in turn, live exclusively on tuition. There are very many of them, and they often attract people from groups with no tradition of university study. This resembles the situation in Latin America, where, as a result of neoliberal reforms, many private universities appeared whose diplomas are worth less than the paper they are printed on. How much tuition? Such a system is criticized for many reasons. Increased enrollment, that is, the participation of young people aged 19-24 in higher education, means that in order to keep per-student funding at the same level, the subsidy would have to be increased more than fourfold (the enrollment rate rose from about 12 percent in 1990 to nearly 40 percent in 2008/2009). One must remember, however, that 58 percent of students currently pay tuition. Accordingly, it has been proposed to make tuition fees universal. Admittedly, the strategy for the development of science prepared by Ernst & Young and Instytut Badań nad Gospodarką Rynkową for the Ministry of Science and Higher Education rejects this solution, but it appears frequently in discussions and is put forward, for example, in reform proposals prepared by university rectors, who would prefer to remove from the constitution the provision guaranteeing free higher education. It is therefore worth examining this line of argument, since even in the government strategy the question of tuition is settled mainly on the basis of the existing legal order and demographic considerations rather than a deeper analysis of the problem. The argument invoked in support of universal tuition is perverse: since tuition-free places go to candidates with greater cultural capital rather than to socially disadvantaged groups, the system is unjust, because it does not help those who really need financial assistance to study. Candidates with less cultural capital end up at private universities, often paying very high fees. In other words, the state subsidy is spent on a luxury good enjoyed by groups ... (Source cited: Diagnoza stanu szkolnictwa wyższego w Polsce, Ernst & Young and Instytut Badań nad Gospodarką Rynkową, November 2009, p. 74.) It is worth noting that this argument conflates two kinds of capital (in Bourdieu's sense): social and cultural. While it is obvious that a schoolteacher's family from a small town can equip a future student with greater cultural capital (knowledge, familiarity with culture), it would be absurd to think that this is the same as social capital (social relations, connections). And it is social capital that primarily determines material status.
An interpretation of the figure of Epicurus in Nietzsche's writings, from the early philological juvenilia to the unpublished notes.
A chapter of „Panorama współczesnej filozofii” on the role of artificial intelligence in philosophy.
Introduction to the issue of „Przegląd Filozoficzno-Literacki” entitled Kognitywistyka. Reprezentacje.
Consciousness cannot be fully explained computationally, and so a computational explanation is not sufficient for understanding all of its functions; yet its informational nature makes such an explanation necessary to explain one of those functions. I do not assume that explaining the function of a system amounts to explaining everything that is interesting about that system, since this assumption is contested by the opponents of computationalism. I want to show that even when this assumption is set aside, computationalism remains unrivaled in today's theories of consciousness.
The text presents the assumptions of the conception of representational mechanisms, that is, a conceptual framework for analyzing the mental representations posited in cognitive science. The conception is based on neo-mechanistic assumptions. It stresses that representational explanations are a kind of mechanistic explanation, hence causal and referring to functional mechanisms. Representations are not posited in isolation from the activity of the cognitive system in which they occur; this means, among other things, that they are not only vehicles of semantic information but also influence the behavior of the system in a way specific to representations. Representational mechanisms are posited as a component of explanations of the behavior of a cognitive system, and the conception of semantic information employed here ties it to the control of the system. The conception thus belongs to action-oriented accounts of representation. It therefore seems reasonable to ask whether it does not prejudge too much in its very assumptions, ruling out in advance the possible correctness of some psychological theories in which representations are relatively detached from action. Abstract concepts such as TRUTH or CONDITIONAL PROBABILITY are examples of this kind of representation. I show, however, that contrary to appearances, the conception does not require a narrowly understood motor response as a direct effect of processing a representation. I include a short review of various psychological theories of abstract representations in order to show that they can be compatible with the proposed conception of representational mechanisms.
I defend the thesis that the basic kind of explanation in cognitive science is the explanation of the operation of information-processing mechanisms. These mechanisms are complex, organized systems whose functioning depends on the interaction of their parts and of the processes occurring in them. A constitutive explanation of the operation of any such mechanism must include both a reference to the environment in which the mechanism occurs and to the role it plays in it. In cognitive science this role is traditionally called a "competence". To fully explain how this role is played, one must in turn explain the information processing going on inside the environmentally embedded mechanism itself. Usually, the explanation at this level takes the form of a computational model, for example a computer program or a trained neural network. Yet the explanation does not end at this level. It remains to be investigated how the program itself is realized (or what processes are responsible for information processing in the neural network). Using two diametrically different examples from the history of cognitive science, I show what the multilevel character of cognitive explanation consists in. These examples are the explanation of problem solving proposed by Simon and Newell (1972) and the explanation of phonotaxis in crickets proposed by Barbara Webb (1995).
A critical analysis of the narrative conception of personal identity, in the light of the literary analysis of the disintegration of personality in Różewicz. I show that the conception is unsubstantiated and that there can be no question of narratives being constitutive of personal identity in any non-trivial sense.
The article presents a rational reconstruction of the concept of freedom in Hobbes's philosophy. The concept plays an essential role in the theoretical structure that is meant to legitimize the principles of a rational political order. The thesis of the paper is that the proper concept of freedom that Hobbes is after should be analyzed not in terms of mechanistic metaphysics but in political terms, namely in terms of rights. The construction of the social contract with the sovereign is supposed to guarantee the preservation of the maximal (and equal for all) scope of rights: rights may be limited only where it is absolutely necessary for the contract to be concluded and to remain in force. In effect, Hobbes's argument turns out to be close to the later Kantian tradition, and Leviathan turns out to be the guarantor of freedom and general prosperity.
In contemporary philosophy, especially analytic philosophy, naturalism has acquired a special significance. In one of the previous issues of PF-L I tried to show that David Hume makes it easier to explicate the thesis of ontological naturalism. Mieszko Tałasiewicz raised several serious objections to my explication. My article on Hume was not meant to be a definitive defense and explication of all aspects of naturalistic theses, but only to show how, with a little help from Hume, we can deal with the problem of defining an "ideal physics". Most other issues, such as cognitive limitations, the ontological commitments of empirical theories, the realism-antirealism debate, and finally the relation between methodological and ontological naturalism, I set aside for another occasion, partly for lack of space. Perhaps the shortcuts I made were too deep, or perhaps now is a good occasion to return to these matters.
The article argues that confirming the thesis that there are mental modules explaining the features of the mind is problematic for several reasons. First, there are several competing theories of modularity which, moreover, do not always exclude one another, so it is impossible to decide between them experimentally. Second, claims about modularity are often based on the unwarranted assumption that picking out the specific (semantic or syntactic) domains in which modules operate is unproblematic. Third, analyzing the cheater-detection module known from the literature, posited by Cosmides to explain the alleged irrationality displayed in the so-called Wason selection task, I show that the phenomenon to be explained has not been defined precisely enough, and that the functional characterization of the modules meant to explain it is therefore blurred. What is more, there is no reason to think that the phenomenon this module was supposed to explain exists at all. I also point out several methodological problems with gathering experimental evidence for modularity, such as the distortion of experimental results by averaging and the lack of control over key factors affecting the results obtained by study participants.
Ordinary language contains numerous expressions that presuppose Cartesian dualism. Wittgenstein found this presumption ungrounded, mainly because his philosophical analysis indicated to him that the meaning of linguistic expressions is ultimately determined by ostensive definitions. Such definitions cannot be used to identify mental states or their elements; they cannot support a private language. Hence, psychological expressions have to be rejected as based on a fiction of private language. Contemporary Wittgensteinians, following Austin and Ryle, usually propose a common-sense interpretation of the operational conception of meaning for ordinary language. Their attempts have the merit of basically agreeing with the views of Wittgenstein himself, whose operational attitude did not allow him to formulate positive ontological claims and made him content with his rejection of dualism. At the same time, however, these attempts seem unsatisfactory insofar as they tend to refute cognitive science by using purely conceptual or verbal distinctions.
The author defends the thesis that some computational systems can have semantic properties. A class of computational systems is identified in which representations can have at least two properties: the property of referring to objects (designation) and the property of supporting the recognition of the objects designated by a given representation (connotation). The author also argues that the semantic properties of representations do not depend solely on the architecture of the computational systems in which these representations occur. A particular computational architecture is not the key factor, and probably the least important are the very kinds of data structures that are supposed to have the properties of designating or connoting. The property of designating or connoting need not be localized in the representations themselves; it may be a higher-order property arising in a higher-level mechanism. The semantic properties of representations can be multiply realized. Classical, connectionist, or hybrid systems may just as well have semantic properties as lack them.
In many sciences, including cognitive science and biology, it is assumed that certain physical systems process information and effectively realize computation. For example, it is being claimed that DNA is being decoded in a manner that is best described as computational or that brains are analog computers. The skeptics, however, propose that the notion of computation is purely in the eye of the observer and computational properties cannot be held to be objective.

In this talk, I will discuss the criteria for realistic ascription of computational properties to physical systems. Computational ascriptions will be treated as a kind of abstract mathematical ascriptions, and I will show in what sense these ascriptions are not merely conventional but refer to natural kinds. Along with general criteria that apply to other abstract properties being ascribed in sciences, such as explanatory and predictive value and implementation of functional properties vs. instantiation, I will discuss specific problems of computational descriptions such as defining computation via Turing-Church thesis, individuation of computational systems, mapping of causal chains in program states and the level of detail required at the computational level of description of the system. The proposed criteria will cover both analog and digital computation as kinds of information processing. As a result, the claims in biology about the nature of DNA information decoding will turn out to be empirical and falsifiable, and not decidable a priori in a philosopher's armchair.
A rather old text on the frame problem. Today I would probably strongly disagree with it; on the frame problem, it is better to read Murray Shanahan's "Solving the Frame Problem".
In the article I argue for the thesis that Chalmers's position, known as naturalistic dualism, is merely a variation on functionalism, one that remains purely declarative. The author ascribes to it different ontological commitments, but they seem unsubstantiated. Even the bold and seemingly extravagant epiphenomenalism appears to be illusory, since it cannot be maintained together with the psychophysical laws Chalmers postulates. It is also defectively justified, resting on a mistaken conception of explanation in which a sufficient condition is confused with a cause. Worse, even if there were independent arguments for panpsychism or the double-aspect theory (in the book they remain speculation, as the author stresses), they would have no cognitive significance either, because evidence for these phenomena is by its very nature not intersubjectively accessible. I claim that the changes Chalmers has introduced into functionalism are epicycles: complications introduced ad hoc which merely increase the complexity of a theory that does not differ from functionalism in any essential respect. Chalmers's position is thus a dualism in name only, and de facto a not particularly novel variety of classical machine functionalism in the philosophy of mind.
In naturalistic currents in the philosophy of mind, the question of how to understand intentionality remains a live one. Intentionality itself is traditionally understood as having content, or being about something (aboutness). However, traditional theses about the immaterial character of intentional properties, about their objects, and about the relation between intentionality and consciousness are being put into question. The following questions seem particularly characteristic in this context: (1) Is intentionality a property that is genuinely possessed, or one that is ascribed only for pragmatic reasons? (2) What kind of property is it? In particular, is it a property reducible to physical properties? (3) To what objects should intentional states be ascribed (to what objects do they belong), and on what principles? (4) Are intentional properties reducible to properties of mental representations and the relations between them? These questions are answered, among others, by the conception of Daniel Dennett, who is regarded as a proponent of the thesis that intentional properties have a purely instrumentalist status. This author's early works could indeed give such an impression, but today he is sometimes counted among the realists. I intend to show below that Dennett in fact remains a realist about intentional states, and that his conception contains answers to the questions posed above.
Introduction to the anthology „Analityczna metafizyka umysłu”

The metaphysics of mind is currently the main problem area of the philosophy of mind, one of the most intensively developed branches of contemporary philosophy. There is little exaggeration in the claim that every problem and every position in the philosophy of mind has its "metaphysical component". It is an illusion to believe that one can investigate and solve philosophical problems, including the problems of the philosophy of mind, without any metaphysical assumptions or consequences. And if we agree with W. V. Quine that there are no scientific theories free of ontological commitments, then the metaphysics of mind will also include the metaphysical aspects of the empirical theories of mind (computational, neurobiological, neurocognitive, psychological) built within the various branches of cognitive science.
Despite this seemingly obvious observation, the metaphysics (ontology) of mind is relatively rarely practiced in a systematic way. Most often it is not treated as a fully fledged, separate branch of philosophy. The problems of the metaphysics of mind are discussed on the occasion of analyzing other issues belonging to ontology, epistemology, the philosophy of language, anthropology, ethics, the philosophy of logic and mathematics, and especially the philosophy of the cognitive sciences.
In this chapter I review various contemporary evolutionary approaches in social research: behavioral ecology, evolutionary psychology, memetics, and gene-culture coevolution, first briefly introducing the foundations of evolutionary research programs in the social sciences.
Deleuze holds that Hegel and Nietzsche cannot be reconciled. On his view, Hegel is abstract and Nietzsche is concrete. Yet the notions of "the concrete" and "abstraction" belong to the ideological arsenal of conservatism. I examine not so much the truth of Deleuze's thesis as its genealogy.

Hegel and Nietzsche continue the Enlightenment search for the "concrete human being". The "concrete human being" is a product of the second phase of the Enlightenment (a kind of "compensation" in Marquard's sense): the transformation of parenetics into a philosophy of history and culture (or a social philosophy). The "great hero of history" and the "overman" are attempts to capture the socio-historical concrete. A glance at the structural position of Hegel's category in his system and of Nietzsche's ideal in his thought (mediatization versus the problem of the doubtful concreteness of the overman) makes the following conclusion plausible: the overman is an abstract project of grasping the concrete.
One cannot conceal the difficulties in assessing Hegel's and Nietzsche's attitudes toward romantic conservatism. The reason lies, among other things, in their use of the arsenal of conservative thought while simultaneously transforming it. Hence it is hard to say whether "romantic conservatism" sublated (Hegel) or transformed into an ecstatic-chiliastic philosophy of the future (Nietzsche) is more conservative, or rather liberal, and so on.



One example from the "philosophical workshop" of both thinkers displays a peculiar way of referring to the concrete, as well as the problematic character of that reference. What is more, the concrete-abstract opposition, with its shifting ideological charge, also encompasses the "transcendental empiricism" of Deleuze himself and, it seems, a large part of the philosophy that continues Nietzsche with the intention of opposing Hegel and his metanarratives. The ambiguity of the problem of the concrete hides the fact that it is a pseudo-problem or, at best, a misleading marker of a philosophical position.
Marek Siemek is not a thinker who speaks directly. Like Hegel, he values the roundabout route, which is meant to lead through many intermediate stages to the desired destination. Like many representatives of the Hegelian left, Siemek seems more interested in the method than in the system; in the road rather than the point of arrival. A road he calls "transcendental social philosophy". We will not find many longer passages in his writings devoted to a detailed analysis of what this philosophy consists in. His remarks on the subject remain cursory, enigmatic, and marginal. At the risk of overinterpretation, I will try to spell out explicitly what Siemek's social transcendentalism is. I will naturally do so by a roundabout route, rejecting inadequate characterizations one by one.
I intend to show that the famous chapter "Of Miracles" from David Hume's Enquiry Concerning Human Understanding contains a methodological principle that makes it possible to resolve a certain difficulty of contemporary ontological naturalism. I claim that Hume's principle allows us to draw a conceptual boundary between a naturalistic explanation of a given phenomenon and a supranaturalistic or antinaturalistic one. This boundary also coincides, in my view, with the boundary between moderate reductionism and methodological antireductionism.
I intend to present the assumptions of contemporary cognitive-scientific introspectionism, to sketch the characteristic objections raised against it, and then to present Dennett's reinterpretation of this methodology. Dennett appears here for two reasons. First, he is a philosopher who tries to take into account the results of contemporary science and assimilate them into philosophy; second, as a philosopher of mind he has inspired psychologists, encouraging them to pursue precisely introspective research programs.

In conclusion, I will consider whether intentional interpretations of conscious processes really give us cognitive access to subjective experiences. I will maintain that contemporary heterophenomenology does not share many of the flaws of the various varieties of philosophical phenomenology; above all, it is an intersubjectively testable method. I will not be defending philosophical psychology here by seeking a place in it for introspection. I will present introspection in its scientific context, urging rather the abandonment of pseudo-a-priori philosophical psychology. I am not interested here in the actual mechanisms of the mind that make introspective judgments possible; investigating such mechanisms is the job of the psychologist, not the philosopher. I only assume, as I have already mentioned, that through introspection we have at least partial access to certain conscious experiences.
I take issue with the treatment of intentionality in Dennett as presented by Urszula Żegleń in her book „Filozofia umysłu”. I intend to address a matter that Professor Żegleń does not take up in her book. Initially I thought that she leaves it out for mundane reasons: lack of time, lack of space, deadlines. Every author is, after all, subject to such contingent limitations. Then, however, I realized that this omission may have a substantive justification. It is, namely, dictated by a peculiar understanding of naturalism. An understanding which may seem, for various reasons, inadequate.