Since the second half of the 20th century, researchers in cybernetics and AI, neural nets and connectionism, Artificial Life and new robotics have endeavoured to build different machines that could simulate functions of living organisms, such as adaptation and development, problem solving and learning. In this book these research programs are discussed, particularly as regards the epistemological issues of behaviour modelling. One of the main novelties of this book is that certain projects involving the building of simulative machine models before the advent of cybernetics are investigated for the first time, on the basis of little-known, and sometimes completely forgotten or unpublished, texts and figures. These pre-cybernetic projects can be considered as steps toward the “discovery” of a modelling methodology that has been fully developed by those more recent research programs, and that shares some of their central goals and key methodological proposals.

More information at the Springer link: http://www.springer.com/new+%26+forthcoming+titles+%28default%29/book/978-1-4020-0606-7

This book is the English translation of La scoperta dell'artificiale, Dunod/Masson, Milan, 1998.
In this article I shall examine some of the issues and questions involved in the technology of autonomous robots, a technology which has developed greatly and is advancing rapidly. I shall do so with reference to a particularly critical field: autonomous military robotic systems. In recent times, various issues concerning the ethical implications of these systems have been the object of increasing attention from roboticists, philosophers and legal experts. The purpose of this paper is not to deal with these issues, but to show how the autonomy of those robotic systems, by which I mean the full automation of their decision processes, raises difficulties and also paradoxes which are not easy to solve. This is especially so when considering the autonomy of those robotic systems in their decision processes alongside their reliability. Finally, I would like to show how difficult it is to respond to these difficulties and paradoxes by calling into play a strong formulation of the precautionary principle.
Is any unified theory of brain function possible? Following a line of thought dating back to early cybernetics (see, e.g., Cordeschi, 2002), Clark (in press) has proposed action-oriented Hierarchical Predictive Coding (HPC) as the account to be pursued in the effort to gain the “Grand Unified Theory of the Mind”—or “painting the big picture”, as Edelman (2012) put it. Such a line of thought is indeed appealing, but to be effectively pursued it should be confronted with experimental findings and explanatory capabilities (Edelman, 2012).
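To make the core idea of predictive coding concrete, here is a minimal, hypothetical sketch (not Clark's HPC proposal, nor any published implementation): a single prediction unit is repeatedly corrected by its own prediction error, which is the quantity that hierarchical schemes propagate between levels.

```python
# A minimal, hypothetical sketch of the error-correction step behind
# predictive coding; illustrative only, not Clark's HPC proposal.
def predictive_coding_step(prediction, sensory_input, learning_rate=0.1):
    """Move the prediction toward the input in proportion to the error."""
    error = sensory_input - prediction       # prediction error
    prediction += learning_rate * error      # error-driven correction
    return prediction, error

# The prediction settles on a constant signal, and the error shrinks.
prediction = 0.0
for _ in range(50):
    prediction, error = predictive_coding_step(prediction, sensory_input=1.0)
print(round(prediction, 3), round(error, 3))
```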
Aaron Sloman remarks that many present disputes on consciousness are usually based, on the one hand, on re-inventing "ideas that have been previously discussed at length by others", and, on the other hand, on debating "unresolvable" issues, such as whether animals have phenomenal consciousness. For what it is worth, I shall give a couple of examples, related to certain topics that Sloman deals with in his paper, which might be useful for introducing some comments in the remainder of this brief note.
Norbert Wiener (1894-1964) is unanimously considered the father of Cybernetics, the discipline that studies control and communication in both animals and machines. As a privileged witness of what he called the “second industrial revolution”, he foresaw some of the major challenges that would confront the “information society”, as today’s society is called – a society whose economic and cultural life is heavily dependent on information and communication technologies. As a child prodigy, he was trained as a scientist (he called himself a mathematician, but his fields of research range from control engineering to physics to physiology) as well as a philosopher (at 18 obtaining a Ph.D. in Philosophy under the supervision of Bertrand Russell). This twofold vision and approach allowed him to see the opportunities and threats of the scientific and technological developments that began immediately after World War II.
In this paper we will illustrate some of the technological, ethical and political issues under discussion at present. These were raised by Wiener from the 1940s on, and include the responsibility of the scientist in war, decisions on the exploitation of technological innovations, copyright issues raised by communication technologies, the social control made possible by digital devices, and the inspirational role of research in some critical areas of social development. We will try to obtain some insights into these issues from Wiener’s viewpoint.
The expression ‘‘artificial intelligence’’ (AI) was introduced by John McCarthy, and the official birth of AI is unanimously considered to be the 1956 Dartmouth Conference. Thus, AI turned fifty in 2006. How did AI begin? Several differently motivated analyses have been proposed as to its origins. In this paper a brief look at those that might be considered steps towards Dartmouth is attempted, with the aim of showing how a number of research topics and controversies that marked the short history of AI were touched on, or fairly well stated, during the year immediately preceding Dartmouth. The framework within which those steps were taken was the development of digital computers. Earlier computer applications in areas such as complex decision making and management, at that time dealt with by operations research techniques, were important in this story. The time was ripe for AI’s intriguingly tumultuous development, marked as it has been by hopes and defeats, successes and difficulties.
Computer Supported Collaborative Learning (CSCL) is by now a mature field, able to set guidelines for enriching e-learning. For its part, e-learning does not yet fully exploit its potential.
Suggestions from CSCL could help strengthen e-learning's presence on the market and would foster a vision of learning as more than just another "commercial product".
The European Commission has funded many projects involving several countries, numerous students, teachers and researchers, and a variety of software. During the presentation a few European projects, regarded as best practices, will be briefly discussed.
From the results gathered, guidelines are drawn to improve e-learning, concerning: a) understanding of the users; b) blended learning; c) learners' empowerment.
In this paper I put forward a reconstruction of the evolution of certain explanatory hypotheses on the neural basis of association and learning that are the premises of connectionism in the cybernetic age and of present-day connectionism. The main point of my reconstruction is based on two little-known case studies. The first is the project, published in 1913, of a hydraulic machine through which its author believed it was possible to simulate certain "essential elements" of the plasticity of nervous connections. The author, S. Bent Russell, was an engineer deeply influenced by the neurological hypotheses on nervous conduction of Herbert Spencer, Max Meyer and Edward L. Thorndike. The second is the project, published in 1929, of an electromechanical machine in which the author, the psychologist J.M. Stephens, believed it was possible to embody Thorndike's law of effect. Thus both Bent Russell and Stephens referred to the principles of learning that Thorndike defined as "connectionist". Their aim was to simulate by machines at least certain simple aspects of inhibition, association and habit formation that are typical of living organisms. I propose to situate their projects within the frame of the discovery of a simulative (modelling) methodology which I believe might be considered an important topic of the "Culture of the Artificial". Certain more recent steps toward such a methodology made by both the connectionism of the 1950s and present-day connectionism are briefly pointed out in the paper.
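As a point of reference for what "embodying the law of effect" amounts to, here is a minimal, hypothetical sketch (not a reconstruction of Bent Russell's or Stephens's machines): stimulus-response connections are strengthened when the outcome is satisfying and weakened otherwise.

```python
# A minimal, hypothetical sketch of Thorndike-style "law of effect"
# learning; illustrative only, not a model of the 1913 or 1929 machines.
import random

weights = {"response_a": 0.5, "response_b": 0.5}  # S-R connection strengths

def choose_response():
    """Pick a response with probability proportional to its strength."""
    total = sum(weights.values())
    threshold = random.uniform(0, total)
    cumulative = 0.0
    for response, strength in weights.items():
        cumulative += strength
        if threshold <= cumulative:
            return response
    return response

def reinforce(response, satisfying, rate=0.05):
    """Strengthen a connection after a satisfying outcome, weaken it otherwise."""
    weights[response] = max(weights[response] + (rate if satisfying else -rate), 0.01)

# Only "response_a" is followed by a satisfying outcome, so it comes to dominate.
for _ in range(200):
    chosen = choose_response()
    reinforce(chosen, satisfying=(chosen == "response_a"))
print(weights)
```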
The aim of this paper is to show how J.A. Robinson's resolution principle was perceived and discussed in the AI community between the mid-sixties and the early seventies. During this time the so-called "heuristic search paradigm" was still influential in the AI community, and ...
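For readers unfamiliar with the rule under discussion, the following is a minimal, hypothetical sketch of resolution restricted to propositional clauses (Robinson's principle proper operates on first-order clauses with unification): a pair of complementary literals is cancelled and the remaining literals are joined.

```python
# A minimal, hypothetical sketch of the propositional resolution rule.
# Clauses are sets of literals; "~p" denotes the negation of "p".
def resolve(clause1, clause2):
    """Return all resolvents obtainable from two clauses."""
    resolvents = []
    for literal in clause1:
        complement = literal[1:] if literal.startswith("~") else "~" + literal
        if complement in clause2:
            # Cancel the complementary pair and merge the remaining literals.
            resolvents.append((clause1 - {literal}) | (clause2 - {complement}))
    return resolvents

# From (p or q) and (~p or r) we derive (q or r).
print(resolve({"p", "q"}, {"~p", "r"}))   # [{'q', 'r'}]
```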
My aim here is to raise a few questions concerning the problem of representation in scientific discovery computer programs. Representation, as Simon says in his paper, "imposes constraints upon the phenomena that allow the mechanisms to be inferred from the data". The issue is obviously barely outlined by Simon in his paper, while it is addressed in detail in the book by Langley, Simon, Bradshaw and Zytkow (1987), to which I shall refer in this note. Nevertheless, their analysis would appear to leave open certain issues related to the nature of the "constraints" imposed by representation on problem solving strategies.
In this paper I start from a definition of “culture of the artificial” which might be stated by referring to the background of philosophical, methodological and pragmatic assumptions which characterizes the development of the information processing analysis of mental processes and of some trends in contemporary cognitive science: in a word, the development of AI as a candidate science of mind. The aim of this paper is to show how (with what plausibility and limitations) the discovery of this background might be dated back to a period preceding the cybernetic era, at least to the decade 1930–1940. Therefore a somewhat detailed analysis of Hull's “robot approach” is given, as well as of some of its independent and later developments.
A number of contributions have been made in recent years to illustrate Herbert Simon’s multidisciplinary approach to the study of behaviour. In this chapter, I give a brief picture of the origins of Simon’s bounded rationality in the framework of the rise of AI. I then show how seminal Simon’s insight was on the unifying role of bounded rationality in different fields, from evolutionary theory to domains traditionally difficult for AI decision making, such as those of real-life and real-world problems.
The year 1943 is customarily considered as the birth of cybernetics. Artificial Intelligence (AI) was officially born thirteen years later, in 1956. This chapter is about two theories on human cognitive processes developed in the context of cybernetics and early AI. The first theory is that of the cyberneticist Donald MacKay, in the framework of an original version of self-organizing systems; the second is that of Allen Newell and Herbert Simon (initially with the decisive support of Clifford Shaw) and is known as information-processing psychology (IPP). The latter represents the human-oriented tendency of early AI, in which the three authors were pioneers. I shall not discuss this topic with the intention, common in this type of reconstruction, of seeking contrasts between opposing paradigms (classical AI vs. cybernetics, symbolic vs. subsymbolic, symbolic vs. situated, and so forth). Randall Beer, referring to one of these battles, the ‘‘battle between computational and dynamical ideologies,’’ decried the fact that the subjects usually examined are not ‘‘experimentally testable predictions, but rather competing intuitions about the sort of theoretical framework that will ultimately be successful in explaining cognition.’’ He concluded, ‘‘the careful study of concrete examples is more likely to clarify the key issues than abstract debate over formal definitions’’. I believe this is a conclusion that should be endorsed.
"Since the early eighties, computationalism in the study of the mind has been “under attack” by several critics of the so-called “classic” or “symbolic” approaches in AI and cognitive science. Computationalism was generically identified... more
"Since the early eighties, computationalism in the study of the mind has been “under attack” by several critics of the so-called “classic” or “symbolic” approaches in AI and cognitive science. Computationalism was generically identified with such approaches. For example, it was identified with both Allen Newell and Herbert Simon’s Physical Symbol System Hypothesis and Jerry Fodor’s theory of Language of Thought, usually without taking into account the fact ,that such approaches are very different as to their methods and aims.  Zenon Pylyshyn, in his influential book Computation and Cognition, claimed that both Newell and Fodor deeply influenced his ideas on cognition as computation.  This probably added to the confusion, as many people still consider Pylyshyn’s book as paradigmatic of the computational approach in the study of the mind. Since then, cognitive scientists, AI researchers and also philosophers of the mind have been asked to take sides on different “paradigms” that have from time to time been proposed as opponents of (classic or symbolic) computationalism. Examples of such oppositions are:

computationalism vs. connectionism,
computationalism vs. dynamical systems,
computationalism vs. situated and embodied cognition,
computationalism vs. behavioural and evolutionary robotics.

Our preliminary claim in section 1 is that computationalism should not be identified with what we would call the “paradigm (based on the metaphor) of the computer” (in the following, PoC). PoC is the (rather vague) statement that the mind functions “as a digital computer”. Actually, PoC is a restrictive version of computationalism, and nobody ever seriously upheld it, except in some rough versions of the computational approach and in some popular discussions about it. Usually, PoC is used as a straw man in many arguments against computationalism. In section 1 we look in some detail at PoC’s claims and argue that computationalism cannot be identified with PoC. In section 2 we point out that certain anticomputationalist arguments are based on this misleading identification. In section 3 we suggest that the view of the levels of explanation proposed by David Marr could clarify certain points of the debate on computationalism. In section 4 we touch on a controversial issue, namely the possibility of developing a notion of analog computation, similar to the notion of digital computation. A short conclusion follows in section 5.
"
Heuristic programming was the first area in which AI methods were tested. The favourite case-studies were fairly simple toy- problems, such as cryptarithmetic, games, such as checker or chess, and formal problems, such as logic or... more
Heuristic programming was the first area in which AI methods were tested. The favourite case studies were fairly simple toy problems, such as cryptarithmetic, games, such as checkers or chess, and formal problems, such as logic or geometry theorem proving. These problems are well-defined, roughly speaking, at least in comparison to real-life problems, and as such have played the role of Drosophila in early AI. In this chapter I will investigate the origins of heuristic programming and the shift to more knowledge-based and real-life problem solving.
The rise and some more recent developments of the machine-simulation methodology of living-organism behavior are discussed in this paper. In putting forward these issues, my aim is to isolate recurring themes which help in understanding the development of such a machine-simulation methodology, from its, so to speak, discovery during the first half of the twentieth century up to the present time. The machine designed by the engineer S. Bent Russell in 1913 seems to share the core of at least some points of such a methodology. This machine was designed with the aim of embodying certain hypotheses on the plasticity of nervous connections, pointed out at the time by psychologists in order to explain the physical bases of learning. I would like to suggest that this machine might be viewed as a case study of the discovery of the above-mentioned simulative methodology, later developed by cyberneticians beginning in the 1940s. Certain present-day steps toward such a methodology are briefly touched upon in the concluding section of the paper.
The early examples of self-directing robots attracted the interest of both scientific and military communities. Biologists regarded these devices as material models of animal tropisms. Engineers envisaged the possibility of turning self-directing robots into new “intelligent” torpedoes during World War I. Starting from World War II, more extensive interactions developed between theoretical inquiry and applied military research on the subject of adaptive and intelligent machinery. Pioneers of Cybernetics were involved in the development of goal-seeking warfare devices. But collaboration occasionally turned into open dissent. Founder of Cybernetics Norbert Wiener, in the aftermath of World War II, argued against military applications of learning machines, by drawing on epistemological appraisals of machine learning techniques. This connection between philosophy of science and techno-ethics is both strengthened and extended here. It is strengthened by an epistemological analysis of contemporary machine learning from examples; it is extended by a reflection on ceteris paribus conditions for models of adaptive behaviours.
The term cybernetics was first used in 1947 by Norbert Wiener with reference to the centrifugal governor that James Watt had fitted to his steam engine, and above all to Clerk Maxwell, who had subjected governors to a general mathematical treatment in 1868. Wiener used the word “governor” in the sense of the Latin corruption of the Greek term kubernetes, or “steersman.” Wiener defined cybernetics as the study of “control and communication in the animal and the machine” (Wiener 1948). This definition captures the original ambition of cybernetics to appear as a unified theory of the behavior of living organisms and machines, viewed as systems governed by the same physical laws. The initial phase of cybernetics involved disciplines more or less directly related to the study of such systems, like communication and control engineering, biology, psychology, logic, and neurophysiology. Very soon, a number of attempts were made to place the concept of control at the focus of analysis in other fields as well, such as economics, sociology, and anthropology. The original ambition of “classical” cybernetics thus seemed to extend to several human sciences, as it developed into a highly interdisciplinary approach, aimed at seeking common concepts and methods in rather different disciplines. In classical cybernetics, this ambition did not produce the desired results, and new approaches had to be attempted in order to achieve them, at least partially. In this chapter, we shall focus our attention in the first place on the specific topics and key concepts of the original program in cybernetics and their significance for some classical philosophical problems (those related to ethics are dealt with in Chapter 5, COMPUTER ETHICS, and Chapter 6, COMPUTER-MEDIATED COMMUNICATION AND HUMAN–COMPUTER INTERACTION). We shall then examine the various limitations of cybernetics. This will enable us to assess different, more recent, research programs that are either ideally related to cybernetics or that claim, more strongly, to represent an actual reappraisal of it on a completely new basis.
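The feedback idea at the heart of this definition can be made concrete with a minimal, hypothetical sketch in the spirit of the Watt governor (purely illustrative; not Maxwell's 1868 analysis): the controller senses the deviation of a speed from its set point and applies a correction that opposes the deviation.

```python
# A minimal, hypothetical sketch of negative feedback: the corrective
# action is proportional to, and opposes, the error. Illustrative only.
def simulate_governor(set_point=100.0, speed=60.0, gain=0.3, steps=40):
    """Return the speed trajectory under proportional negative feedback."""
    history = []
    for _ in range(steps):
        error = set_point - speed   # deviation from the desired speed
        speed += gain * error       # correction opposing the deviation
        history.append(speed)
    return history

trace = simulate_governor()
print(round(trace[0], 1), round(trace[-1], 1))  # the speed converges toward 100.0
```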
The notion of loop seems to be ubiquitous in the study of organisms, the human mind and symbolic systems. With the possible exception of quantum-mechanical approaches, the treatments of consciousness we are acquainted with crucially appeal to the concept of loop. The uses of loops in this context fall within two broad classes. In the first one, loops are used to express the control of the organism’s interaction with the environment; in the second one, they are used to express self-reference. Both classes are tied to investigations which aim at accounting for symbolic capabilities, and ultimately for subjectivity. Neurophysiological research has detected loops in the animal nervous system since its very inception (e.g., all the work on the reflex arc). Recently, Gray proposed a model purporting to explain the mechanism supporting the contents of consciousness in the human CNS, in ways that are practically indistinguishable from models formulated within the cybernetical point of view. Both types of models apply loops in strikingly similar ways. While there is no conclusive evidence that loops are necessary to support consciousness, they are nonetheless as good a candidate as can be found today for inclusion in the list of essential ingredients for subjectivity to arise. In the first two sections we discuss the above-mentioned uses by means of significant examples. In the third section, we compare Gray’s recent model to MacKay’s early cybernetical model of a self-observing system, in the setting of a broader discussion on loops for consciousness.
In this chapter the early history of Computer Science, Cybernetics and Artificial Intelligence is sketched. More recent developments of AI and the philosophy of Cognitive Science are also discussed.

French translation at URL: http://histsciences.univ-paris1.fr/i-corpus-evenement/FabriquedelaPensee/
In this paper several reformulations of William Ross Ashby and Norbert Wiener’s classical claims on purposive behavior are examined. Subsequent restatements of this issue are then discussed, particularly as regards the following question: is it possible to extend the concepts and methods of mechanical (physical) explanation to psychological explanation, in order to explain human (and animal) purposive behavior? This question was restated in the 1950s as follows: are negative feedback and homeostatic mechanisms really explanatory of adaptive and purposive behavior, or are they tautological re-formulations of older notions? Another notion of adaptive and purposive behavior, developed in that very period by Herbert A. Simon, is also examined in this paper.
In this paper some applications and methodological developments of mechanical models in psychology in the 1950s are examined. During that period, a new conception of the theory-model relationship in psychology became evident, which had been proposed earlier by the mechanistic trend in psychology in the 1930s. Such a conception allowed psychologists a new approach to many problems in theoretical psychology, such as the role of hypotheses and neurophysiology in psychological explanation and the positions of psychologists concerning neobehavioristic theories of behaviour and the operational method. The ideas of psychologists who accepted mechanical models (and had also been influenced by the advancement of cybernetics) were not homogeneous in that period. Two trends are examined in this paper: the first (in turn not a homogeneous one), which prevailed during the previous decade and was more influenced by cybernetics, was developed during the first half of the 1950s (Deutsch’s machine with insight, Wyckoff’s model of learning, Broadbent’s model of attention); the second, which originated around the middle of the 1950s and was dominant in the following decades, is Newell and Simon’s Information Processing Psychology. Several methodological principles that characterize those two trends are discussed in this paper, along with their similarities and above all their differences.
In Italy, cybernetics officially made its entrance in the first half of the 1950s, in particular when, between 1952 and 1954, on the initiative of Enzo Cambi and Anna Cuzzer, the Centro italiano di cibernetica was established at the Istituto superiore delle poste e telecomunicazioni. Among its members were the programming-language pioneer Corrado Böhm, the mathematician Bruno De Finetti, the engineer Giorgio Sacerdoti, and the physicist and later philosopher of science Vittorio Somenzi. Several cybernetics centres were subsequently active in Italy during the period of the field's greatest development, roughly between the early 1950s and the early 1970s, and the prevailing research interests of their promoters differed. As Somenzi recalled, "the cybernetics centres that arose in Naples with Caianiello and Braitenberg, in Genoa with Borsellino and Gamba, and in Milan with Ceccato and Maretti differed sharply from one another, with neurophysiology prevailing in the first, biophysics in the second and linguistics in the third. Even more specialized were the centres devoted to automatic computing, servomechanisms and industrial automation." In this chapter we will touch only in passing on the latter, dwelling instead on the former. On the whole, compared with the original interdisciplinary project, cybernetic research in Italy in that period progressively ended up fragmenting, when not running aground, for reasons common to those of other countries. Nevertheless, this research helped train generations of researchers who in the following decades came to terms with the difficult legacy of cybernetics.
In this paper we illustrate Vittorio Somenzi's (1918-2003) contribution to the development of the philosophy of science in Italy.
Within cognitive science, the “concept of concept” turns out to be highly disputed and problematic. In our opinion, this is due to the fact that the very notion of concept is in some sense heterogeneous, and encompasses different cognitive phenomena. This results in a strain between conflicting requirements, such as, for example, compositionality on the one side and the need to represent prototypical information on the other. This has several consequences for the practice of Artificial Intelligence and for the computational modelling of cognitive processes. We shall briefly review some paradigmatic examples of this state of affairs: knowledge representation systems of classic Artificial Intelligence, connectionist systems, and various forms of “hybrid” symbolic-connectionist models. We conclude that the problems posed by the representation of concepts are in part still unsolved. Probably, concepts are not a homogeneous phenomenon from the cognitive viewpoint. Thus, different approaches and research practices seem at present able to deal with different aspects of this issue.
In this paper we aim to show how certain notions borrowed from Darwinism, in particular those of adaptation and selection, have been used over the course of this century in the interpretation of phenomena found both in the world of inorganic or non-living nature and in that of certain man-made artefacts. The goal has often been to reduce or eliminate the distance, traditionally theorized by more than a few philosophers, between the living and the non-living. We show how this goal was present in the tradition of twentieth-century science even before the advent of cybernetics, examining some less well-known cases as examples and following some of their developments in more recent approaches.
A current view in the cognitive science research community is that the so-called “old” or symbolic cognitive science is radically opposed to a “new” kind of cognitive science, which includes, above all, Artificial Life, dynamical systems and robotics. It is true that these new research areas deal with topics which are crucially important in the study of living organisms, and which had been only marginally touched on by previous research in cognitive science. Nevertheless, a radical opposition between these different research areas on the basis of incommensurable paradigms is to a large extent unjustified. I shall try to discuss some issues regarding this topic, with the aim of proposing a different view of the matter.
1956 is unanimously recognized as the birth date of AI (Artificial Intelligence). The founding event is the seminar organized by Marvin Minsky, Nathaniel Rochester, Claude Shannon and John McCarthy in the summer of that year at Dartmouth College. The document circulated by the same authors, outlining the foundations of the future AI, dates back to the previous year. We present it here in Italian translation, after introducing and discussing the main themes it contains.
Artificial Intelligence (AI) has a recent history, and there is unanimity about its official birth date: 1956. There is, however, no unanimity about the definition of its research programme as a scientific discipline. Among some philosophers, and even among some researchers in the field, there is in fact widespread scepticism about the very possibility of considering AI a science. In a very weak interpretation of it (to use a term made canonical by John Searle), it appears rather as an experimental practice, somewhere between computer science and engineering. Its goal would be the construction of artefacts whose performance helps or assists human beings in solving theoretical or practical tasks of varying complexity, in some cases taking their place.
According to another point of view, AI can nurture the ambition of being a science, this time of the general principles of intelligence and knowledge (that is, common to human beings and machines), but in order truly to be one it needs the decisive contribution of logic: somewhat as is said of physics, which needed mathematics in order to develop as a science. Hence the problems of AI consist first of all in finding the logic, or logics, relevant to its aims.
For still others, AI is defined rather in relation to research on natural intelligence. Here things get more complicated, because natural intelligence is not in turn a well-defined domain, and psychology itself, the discipline that traditionally deals with it, experiences its own status as a science in a rather conflicted way. More recently, moreover, now that the idea that the mind can be studied independently of the brain has been scaled down, at least certain tendencies of AI interested in the mind have been led to come to terms with the results and methods of another science, neuroscience, with which cybernetics had already entertained a privileged relationship.
Even though cybernetics had played its part in scaling down the opposition between the notions of automatism and intelligence, it was the construction of machines such as general-purpose digital computers that suggested a way of discussing it all over again. In this chapter we want to follow what seems to us the main road that led to the origins of AI, when to its pioneers the opposition between automatism and intelligence seemed even to dissolve: the road marked by the stages in the construction of the computer, each of which suggested thinking of computers as intelligent machines, bringing together two terms traditionally very distant from each other.
In this article we aim to show how psychology found a novel place within the traditional hierarchy of the sciences proposed, within the framework of Unified Science, by liberalized neopositivism, passing from the status of natural science, favoured by physicalism and behaviourism, to that of a "science of the artificial", proposed by AI (Artificial Intelligence) from its very origins.
In this contribution, in an issue of Sistemi Intelligenti dedicated to Herbert Simon on the occasion of his death, we take stock of his teaching and of the importance of his research in the field of cognitive science.
“Classical” AI supports the claim that representations have a central role in cognition. This paper deals with some philosophical problems regarding the role of representations in the modeling of natural agents in their interaction with the environment. The criticism by “new” or nouvelle AI of the representationalist claim of classical AI is debated through an examination of some situated robots or models, with the aim of evaluating the strength and the limits of that criticism. A different judgement on the role of symbolic representations in AI precedes some conclusions and suggestions for further research.
The aim of this chapter is to introduce the reader to the main topics that, in the course of its short history, Artificial Intelligence (AI) has addressed, both in its applied or engineering variant and in its theoretical or cognitive one. By the end of this chapter the reader should
– be aware of the evolution of some of the most influential research trends in AI;
– be acquainted with the most recent positions in the current debate within AI;
– be briefly introduced to some classic philosophical and epistemological problems addressed by AI.
The advances of modern biology, from molecular biology to biotechnology, have made calls for greater precision in the use of the terms "natural" and "artificial" increasingly frequent in recent decades. The same has happened with the development, starting with cybernetics in the 1940s, of that branch of computer science which seeks to endow computers with programs deserving the label "artificial intelligence" (that is, capable of competing with certain manifestations of "natural" human intelligence), and more recently with "artificial life", which aims to simulate the characteristics of life by taking complex dynamical systems (in which processes of self-organization turn out to be crucial) as models. In this paper we discuss some significant examples of the debate around the natural/artificial dichotomy, with the aim of showing how this dichotomy is less obvious than it appears at first sight.
In this paper I aim to show how certain features of computers were assessed by the pioneers of early AI (Artificial Intelligence), in which form of Turing's imitation game it was believed that computers could begin to compete with human beings, and to what extent the imitation game was actually accepted as a test for the intelligence of computers. As we shall see, the imitation game was, on the one hand, soon popularized in a so-called "restricted" form and, on the other, not always accepted as a sufficient test for the intelligence of the new machines. Both facts had consequences for the development of certain methodological choices within the AI community.
This is an introduction to the second, expanded edition of the anthology "La filosofia degli automi", published by Vittorio Somenzi in 1965.
Towards the mid-1950s, the cybernetics of Wiener, Turing and von Neumann was giving way to AI, Artificial Intelligence. In this new edition of the volume, the essays of the "fathers" of cybernetics are joined by the fundamental contributions of Simon, Samuel, McCarthy and Minsky, the founders of AI, which illustrate its first achievements and its significance for the philosophy of mind as well. The topics covered span a period ranging from the years around the Second World War to the end of the 1960s, and this introductory essay also documents the subsequent developments of AI and its relations with the psychological disciplines, as well as the revival of research in the field of neural networks and connectionism.
This chapter discusses the theory of Human Information Processing proposed by A. Newell, J.C. Shaw and H.A. Simon in the mid-1950s, on the threshold of the birth of Artificial Intelligence.
The Information Processing Psychology of A. Newell, J.C. Shaw and H.A. Simon was an explicit attempt to overcome the "polarization", as they called it, between behaviourism and Gestalt psychology, and can be seen as the forerunner of the future cognitive science. This article reconstructs some of its main contributions.