Minds and Machines First Essay
201515794
A cynic asks whether machines can think or not
Can machines think? I will address this rather obliquely by first introducing some implicit questions: what is a machine, and what is thinking? It would commonly seem that if one knows what those are, then one will be able to answer the first question. Additionally, answering these questions will hopefully help me ground the discussions that I will mention later, discussions that have given various answers to the first question.
First of all, as I will show later, answering what machines are is much easier than answering what thinking is, partly due to our vast experience with machines (our telephones, our laptops, and so on) and partly because the question of what thinking is has not yet been settled; that is to say, there is no conceptual consensus (I will try to show exactly why later in this essay). Another reason is that I will focus solely on computational machines, on computers. This is a rather obvious decision, because one might find oneself doubting the intelligence of a computer but not of a washing machine; so it is not as if I had dissected the "Can machines think?" question into two separate parts. In order to show why computers came to be associated with thinking, which is what our brain does, we should be able to define both computers and thinking, and thinking as a process of the brain. Superficially, they have been associated because the processes that computers carry out are very similar to some of the brain's processes. The first question can be answered depending on the degree of that similarity. Cognitive science, for example, has shown the fruitfulness of this association between minds and computers.
I will concisely explain what exactly computers are and which of their processes are similar to those of the brain. For this, I will introduce three different computational machines: the Turing Machine, the Physical Symbol System (PSS), and Artificial Neural Networks. I will not dwell too much on their functionality, but on those processes that are relevant for this essay. For this I will use the book Cognitive Science: An Introduction to the Science of the Mind by José Luis Bermúdez. As the reader might have noticed, this part will consist solely of explanations. As for the "What is thinking?" question, I will be looking at two papers: "Is the Brain's Mind a Computer Program?" by John Searle and "Could a Machine Think?" by Paul and Patricia Churchland.
To begin with, key to the understanding of computers is the Turing Machine, a model envisaged by Alan Turing and said to be the first computer. So, what does a Turing Machine do? It goes from an input to an output in a purely mechanical process. Technically speaking, this purely mechanical process is an algorithmic process, which can be understood as a finite number of steps that can be followed without further insight, or as an easy recipe to get to the output given an initial input. So why is an algorithmic process relevant to the understanding of brain processes? As Bermúdez explains, "as theorists moved closer to the idea that cognition involves processing information it was an easy step to think about information processing as an algorithmic process" (16). In other words, the algorithmic process was important because scientists started to think of the cognitive processes of the mind as the processing of information from the environment, and of this processing itself as constituted algorithmically, since the brain also breaks down complex cognitive processes into simpler ones. This led to the belief that computation, which is what Turing Machines do, is a process proper to both machines and brains.
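To make the idea of a purely mechanical, algorithmic process concrete, here is a minimal sketch of a Turing Machine in Python. It is my own toy illustration, not drawn from Turing or Bermúdez: a finite table of rules is applied step by step, without any further insight, to transform an input tape into an output tape.

```python
def run_turing_machine(tape, rules, state="start"):
    """Apply rules of the form (state, symbol) -> (new_state, new_symbol, move)
    until the machine halts or the head runs off the tape."""
    tape = list(tape)
    head = 0
    while state != "halt" and 0 <= head < len(tape):
        state, tape[head], move = rules[(state, tape[head])]
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rule table for a trivial machine: in state "start", flip the
# symbol under the head and move right until the tape ends.
invert_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_turing_machine("0110", invert_rules))  # -> 1001
```

Nothing in this procedure requires understanding; the machine merely looks up the current state and symbol and does what the table says, which is exactly the sense of "mechanical" at stake here.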
In parallel, it would be with the PSS that the processing of information would acquire its greatest importance in the minds and machines analogy, and therefore in cognitive science. The PSS is a hypothesis formulated by Allen Newell and Herbert Simon in their 1975 Turing Award lecture. It states a general and fundamental basis for understanding intelligent thinking, both for brains and computers: "thinking is simply the transformation of symbol structures according to rules. Any system that can transform symbol structures in a sophisticated enough way will qualify as intelligent" (Bermúdez 150). In other words, intelligent thinking involves the manipulation of symbols. At first sight, this hypothesis is connected to computation because both seem to operate algorithmically. Furthermore, computation involves not only an algorithmic process but can also be understood as a process of symbol manipulation; in order for the Turing Machine to go from an input to an output, it has to manipulate symbols according to some rules. In this case those symbols are the language of the computer: 0s and 1s. If we continue this argument, it follows that brains process information by means of symbol manipulation. So far we can pin down a definition of what machines are: computational devices that work by means of symbol manipulation.
In addition to this contribution to the processing of information in the theory of computation, the PSS has also managed to offer a definition of intelligent thinking. It is not sufficient that this manipulation occurs; it also has to occur in a sophisticated way. This sophistication is needed in order to solve problems, which is the ability that would prove that this symbol manipulation is intelligent (Bermúdez 150). In sum, intelligent thinking, according to the PSS, is the ability to solve problems by means of manipulating symbols.
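The PSS picture of problem solving as rule-governed symbol manipulation can be sketched in a few lines. The following is my own illustrative example, not Newell and Simon's formulation: facts and if-then rules are just symbol structures, and "solving a problem" amounts to transforming them according to the rules until the answer appears.

```python
def forward_chain(facts, rules):
    """rules: list of (premises, conclusion) pairs.
    Repeatedly apply every rule whose premises are satisfied,
    adding new symbols until nothing more can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules: the symbols themselves mean nothing to the system.
rules = [
    (["wet", "cold"], "freezing"),
    (["freezing"], "icy"),
]
print(forward_chain(["wet", "cold"], rules))
```

Note that the system derives "icy" without attaching any meaning to the word, which is precisely the gap Searle will press on below: the manipulation is purely formal.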
Now let us put Searle's and the Churchlands' arguments into perspective. To clarify briefly, Searle and the Churchlands establish two different points of view on the question of whether machines can think or not. Or so at least it seems. While the title of the Churchlands' paper addresses this question directly, Searle's does not do so as much. Actually, the latter's paper turns on a question of analogy: can we say that A equals B? In the first pages of his essay, then, Searle tackles this relation by rejecting what he calls the strong AI approach, which consists in believing that "by designing the right programs with the right inputs and outputs, they are literally creating minds" (Searle 26). I say it is a problem of analogy because, if we stretch the argument further and say that the right programs are minds and that computers equal brains, then the analogy would be sterile: if the specifics of each of them are omitted, there is nothing interesting left in comparing them.
Nonetheless, Searle also tackles the premise behind this conclusion: that computers must be thinking because the program manipulates symbols and that is all there is to thinking (27). So, what else does thinking involve? According to Searle, human minds also involve meaning in their manipulation of symbols. In this respect, the Churchlands, I believe, are more cautious, and argue that since there is still a lot that we do not know about the cognitive function of meaning, we should not exploit our understanding of it to argue whether machines can think or not, since that would rather be exploiting our own ignorance (Churchland and Churchland 35; Bermúdez 217). However, I do think Searle
points in the only direction possible: since our brain is the only model of which we have certainty of intelligent thinking, we should always compare computers to it. Or should this, rather than being a reason for considering our brain the only possible model, be a reason for considering other models of intelligent thinking? The Churchlands consider this last option to be the right answer. But even if we confine our explanation of thinking to the model of the brain, there is reason to believe that some modern machines think beyond symbol manipulation. Those would be the Artificial Neural Networks: "[they] are distinctive in how they store information, how they retrieve it, and how they process it. And even those models that are not biologically driven remain neurally inspired" (Bermúdez 218). In other words, Neural Networks not only manipulate symbols; they also process information in other ways. And here is where the Causation Problem is introduced. In short, I call this the problem of the biological causality needed for thinking: since brains are biologically causal, if we aspire to make a machine intelligent, it must be causal too. Although the Churchlands have put together an answer to this by giving an example of a possible machine that works causally, Searle would answer that if it is programmed, then, since programs work by manipulating symbols, this is not sufficient to establish its intelligence. In other words, simulation in a program does not equal duplication. But if we were to make a thinking machine, how would we recognize the difference? Perhaps the most suggestive fact is that there are already things in Neural Networks whose exact workings we do not understand, just as much remains unknown to us in the brain. One philosopher once said: if dogs could talk, we would not understand them. The question now is: are machines talking?
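The claim that Neural Networks "process information in other ways" can be illustrated with a single artificial neuron. This is my own minimal sketch, not an example from Bermúdez or the Churchlands: what the unit "knows" is stored in numerical weights and a threshold, not in explicit symbols and rules, and here hand-chosen weights make it compute logical AND.

```python
def neuron(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Weights and threshold are fixed by hand here; in real networks
# they are learned, and the stored "knowledge" is distributed
# across many such numbers rather than written as rules.
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, "->", neuron([x, y], weights=[1, 1], threshold=2))
```

There is no rule table to inspect: the behavior emerges from the arithmetic of the weights, which is one modest sense in which such networks resist the symbol-manipulation description.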
Bibliography
Bermúdez, José Luis. Cognitive Science: An Introduction to the Science of the Mind.
Cambridge University Press, 2010.
Churchland, Paul M., and Patricia Smith Churchland. “Could a Machine Think?: Classical
AI Is Unlikely to Yield Conscious Machines; Systems That Mimic the Brain Might.”
Machine Intelligence: Perspectives on the Computational Model, 2012, pp. 102–07.
Searle, John R. “Is the Brain’s Mind a Computer Program?” Scientific American, 1990,
doi:10.1038/scientificamerican0190-26.