A Computational Foundation For The Study of Cognition
David J. Chalmers
Australian National University
New York University
chalmers@anu.edu.au
* This paper was written in 1993 but never published (although section 2 was
included in “On Implementing a Computation”, published in Minds and Machines
in 1994). Because the paper has been widely cited over the years, I have not made
any changes to it apart from adding one footnote, instead saving any further
thoughts for my reply to commentators. In any case I am still largely sympathetic
with the views expressed here, in broad outline if not in every detail.
1. Introduction
Why should there be the intimate link between computation and cognition that the the-
ses suppose? In this paper, I will develop a framework that can answer these
questions and justify the two foundational theses.
In order for the foundation to be stable, the notion of computation itself
has to be clarified. The mathematical theory of computation in the abstract
is well-understood, but cognitive science and artificial intelligence ulti-
mately deal with physical systems. A bridge between these systems and the
abstract theory of computation is required. Specifically, we need a theory of
implementation: the relation that holds between an abstract computational
object (a “computation” for short) and a physical system, such that we can
say that in some sense the system “realizes” the computation, and that the
computation “describes” the system. We cannot justify the foundational
role of computation without first answering the question: What are the con-
ditions under which a physical system implements a given computation?
Searle (1990) has argued that there is no objective answer to this question,
and that any given system can be seen to implement any computation if
interpreted appropriately. He argues, for instance, that his wall can be seen
to implement the Wordstar program. I will argue that there is no reason
for such pessimism, and that objective conditions can be straightforwardly
spelled out.
Once a theory of implementation has been provided, we can use it to
answer the second key question: What is the relationship between compu-
tation and cognition? The answer to this question lies in the fact that the
properties of a physical cognitive system that are relevant to its implement-
ing certain computations, as given in the answer to the first question, are
precisely those properties in virtue of which (a) the system possesses mental
properties and (b) the system’s cognitive processes can be explained.
The computational framework developed to answer the first question
can therefore be used to justify the theses of computational sufficiency and
computational explanation. In addition, I will use this framework to answer
various challenges to the centrality of computation, and to clarify some dif-
ficult questions about computation and its role in cognitive science. In this
way, we can see that the foundations of artificial intelligence and computa-
tional cognitive science are solid.
2. A Theory of Implementation
This is still a little vague. To spell it out fully, we must specify the class of
computations in question. Computations are generally specified relative to
some formalism, and there is a wide variety of formalisms: these include
Turing machines, Pascal programs, cellular automata, and neural networks,
among others. The story about implementation is similar for each of these;
only the details differ. All of these can be subsumed under the class of com-
binatorial-state automata (CSAs), which I will outline shortly, but for the
purposes of illustration I will first deal with the special case of simple finite-
state automata (FSAs).
An FSA is specified by giving a set of input states I1, ..., Ik, a set of inter-
nal states S1,...,Sm, and a set of output states O1,...,On, along with a set of
state-transition relations of the form (S, I) → (S’, O’), for each pair (S, I)
of internal states and input states, where S’ and O’ are an internal state and
an output state respectively. S and I can be thought of as the “old” internal
state and the input at a given time; S’ is the “new” internal state, and O’ is
the output produced at that time. (There are some variations in the ways this
can be spelled out — e.g. one need not include outputs at each time step,
1. I take it that something like this is the “standard” definition of implementation of a finite-state automaton; see, for example, the definition of the description of a system by a probabilistic automaton in Putnam (1967). It is surprising, however, how little space has been devoted to accounts of implementation in the literature in theoretical computer science, philosophy of psychology, and cognitive science, considering how central the notion of computation is to these fields. It is remarkable that there could be a controversy about what it takes for a physical system to implement a computation (e.g. Searle 1990, 1991) at this late date.
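To make the FSA formalism above concrete, here is a minimal sketch in Python; the particular states, inputs, and transition table are illustrative inventions, not anything specified in the text.

    # Minimal illustrative FSA: a transition table mapping (internal state, input)
    # pairs to (new internal state, output) pairs, as in the definition above.
    from typing import Dict, Tuple

    class FSA:
        def __init__(self, table: Dict[Tuple[str, str], Tuple[str, str]], start: str):
            self.table = table    # (S, I) -> (S', O')
            self.state = start    # current internal state

        def step(self, inp: str) -> str:
            """Apply one state-transition (S, I) -> (S', O') and return the output."""
            new_state, output = self.table[(self.state, inp)]
            self.state = new_state
            return output

    # A toy two-state machine: S1 and S2 alternate on input I1.
    fsa = FSA({("S1", "I1"): ("S2", "O1"), ("S2", "I1"): ("S1", "O2")}, start="S1")
    outputs = [fsa.step("I1") for _ in range(3)]   # ['O1', 'O2', 'O1']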
alternatives. The same goes for the complex structure of inputs and outputs.
The system implements a given CSA if there exists such a vectorization of
states of the system, and a mapping from elements of those vectors onto
corresponding elements of the vectors of the CSA, such that the state-tran-
sition relations are isomorphic in the obvious way. The details can be filled
in straightforwardly, as follows: a physical system P implements a given CSA M if there is a vectorization of internal states of P into substates [s1, s2, ...], and a mapping f from these substates onto corresponding substates of M, along with similar vectorizations and mappings for inputs and outputs, such that for every state-transition rule ([I1, ..., Ik], [S1, S2, ...]) → ([S1’, S2’, ...], [O1, ..., Ol]) of M: if P is in internal state [s1, s2, ...] and receiving input [i1, ..., ik], which map under f to formal state [S1, S2, ...] and formal input [I1, ..., Ik] respectively, this reliably causes P to enter an internal state and produce an output that map to [S1’, S2’, ...] and [O1, ..., Ol] respectively.
Once again, further constraints might be added to this definition for various
purposes, and there is much that can be said to flesh out the definition’s var-
ious parts; a detailed discussion of these technicalities must await another
forum (see Chalmers 1996a for a start). This definition is not the last word
in a theory of implementation, but it captures the theory’s basic form.
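As an illustration of the structural part of this condition, the following Python sketch checks that a proposed mapping from a system's substates to CSA substates commutes with the state-transition relations. Inputs and outputs are omitted for brevity, and the names are my own; note that the further requirement that the physical transitions be reliable and counterfactual-supporting is a condition on the system's causal dynamics that no such structural check can establish.

    from typing import Callable, Dict, Iterable, Tuple

    PhysState = Tuple[str, ...]     # vector of physical substates
    FormalState = Tuple[str, ...]   # vector of CSA substates

    def respects_csa(
        phys_states: Iterable[PhysState],
        phys_step: Callable[[PhysState], PhysState],    # the system's (causal) dynamics
        mapping: Dict[str, str],                        # physical substate -> formal substate
        csa_step: Callable[[FormalState], FormalState]  # the CSA's transition relation
    ) -> bool:
        """True if mapping a physical transition always matches the formal transition."""
        project = lambda s: tuple(mapping[x] for x in s)
        return all(project(phys_step(s)) == csa_step(project(s)) for s in phys_states)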
One might think that CSAs are not much of an advance on FSAs. Finite
CSAs, at least, are no more computationally powerful than FSAs; there is a
natural correspondence that associates every finite CSA with an FSA with
the same input/output behavior. Of course infinite CSAs (such as Turing
machines) are more powerful, but even leaving that reason aside, there are
a number of reasons why CSAs are a more suitable formalism for our pur-
poses than FSAs.
First, the implementation conditions on a CSA are much more constrained
than those of the corresponding FSA. An implementation of a CSA is
required to consist in a complex causal interaction among a number of
separate parts; a CSA description can therefore capture the causal organiza-
tion of a system to a much finer grain. Second, the structure in CSA states
can be of great explanatory utility. A description of a physical system as a
CSA will often be much more illuminating than a description as the corre-
sponding FSA.2 Third, CSAs reflect in a much more direct way the formal
organization of such familiar computational objects as Turing machines,
cellular automata, and the like. Finally, the CSA framework allows a uni-
fied account of the implementation conditions for both finite and infinite
machines.
This definition can straightforwardly be applied to yield implementa-
tion conditions for more specific computational formalisms. To develop an
account of the implementation-conditions for a Turing machine, say, we
need only redescribe the Turing machine as a CSA. The overall state of a
Turing machine can be seen as a giant vector, consisting of (a) the internal
state of the head, and (b) the state of each square of the tape, where this
state in turn is an ordered pair of a symbol and a flag indicating whether
the square is occupied by the head (of course only one square can be so
occupied; this will be ensured by restrictions on initial state and on state-
transition rules). The state-transition rules between vectors can be derived
naturally from the quintuples specifying the behavior of the machine-head.
As usually understood, Turing machines only take inputs at a single time-
step (the start), and do not produce any output separate from the contents of
the tape. These restrictions can be overridden in natural ways, for example
by adding separate input and output tapes, but even with inputs and outputs
limited in this way there is a natural description as a CSA. Given this trans-
lation from the Turing machine formalism to the CSA formalism, we can
say that a given Turing machine is implemented whenever the correspond-
ing CSA is implemented.
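A toy Python sketch of this redescription may help: a configuration is a vector consisting of the head state together with a (symbol, head-here) pair for each square, and one CSA-style transition is derived from the quintuples. The particular machine, the finite tape segment, and all names are hypothetical illustrations.

    from typing import Tuple

    Square = Tuple[str, bool]                  # (symbol, head occupies this square?)
    Config = Tuple[str, Tuple[Square, ...]]    # (head state, tape vector)

    # Quintuple-style rules (state, symbol) -> (new state, written symbol, move)
    # for a toy machine that writes 1s while moving right, then halts on a 1.
    QUINTUPLES = {
        ("q0", "_"): ("q0", "1", +1),
        ("q0", "1"): ("halt", "1", 0),
    }

    def csa_step(config: Config) -> Config:
        """One state-transition on the giant vector, derived from the quintuples."""
        head_state, tape = config
        if head_state == "halt":
            return config
        pos = next(i for i, (_, here) in enumerate(tape) if here)  # unique by construction
        symbol, _ = tape[pos]
        new_state, written, move = QUINTUPLES[(head_state, symbol)]
        new_tape = list(tape)
        new_tape[pos] = (written, False)
        new_pos = min(max(pos + move, 0), len(tape) - 1)
        new_tape[new_pos] = (new_tape[new_pos][0], True)
        return (new_state, tuple(new_tape))

    config: Config = ("q0", (("_", True), ("_", False), ("1", False)))
    while config[0] != "halt":
        config = csa_step(config)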
A similar story holds for computations in other formalisms. Some for-
malisms, such as cellular automata, are even more straightforward. Others,
such as Pascal programs, are more complex, but the overall principles are
the same. In each case there is some room for maneuver, and perhaps some
arbitrary decisions to make (does writing a symbol and moving the head
count as two state-transitions or one?) but little rests on the decisions we
make. We can also give accounts of implementation for nondeterministic
and probabilistic automata, by making simple changes in the definition of implementation.
2. See Pylyshyn 1984, p. 71, for a related point.
wall) will satisfy these constraints. It is true that while we lack knowledge
of the fundamental constituents of matter, it is impossible to prove that
arbitrary objects do not implement every computation (perhaps every pro-
ton has an infinitely rich internal structure), but anybody who denies this
conclusion will need to come up with a remarkably strong argument.
Can a given system implement more than one computation? Yes. Any
system implementing some complex computation will simultaneously be
implementing many simpler computations — not just 1-state and 2-state
FSAs, but computations of some complexity. This is no flaw in the current
account; it is precisely what we should expect. The system on my desk is
currently implementing all kinds of computations, from EMACS to a clock
program, and various sub-computations of these. In general, there is no
canonical mapping from a physical object to “the” computation it is per-
forming. We might say that within every physical system, there are numer-
ous computational systems. To this very limited extent, the notion of imple-
mentation is “interest-relative.” Once again, however, there is no threat of
vacuity. The question of whether a given system implements a given com-
putation is still entirely objective. What counts is that a given system does
not implement every computation, or to put the point differently, that most
given computations are only implemented by a very limited class of physi-
cal systems. This is what is required for a substantial foundation for AI and
cognitive science, and it is what the account I have given provides.
If even digestion is a computation, isn’t this vacuous? This objection
expresses the feeling that if every process, including such things as diges-
tion and oxidation, implements some computation, then there seems to be
nothing special about cognition any more, as computation is so pervasive.
This objection rests on a misunderstanding. It is true that any given instance
of digestion will implement some computation, as any physical system
does, but the system’s implementing this computation is in general irrel-
evant to its being an instance of digestion. To see this, we can note that the
same computation could have been implemented by various other physi-
cal systems (such as my SPARC) without its being an instance of diges-
tion. Therefore the fact that the system implements the computation is not
responsible for the existence of digestion in the system.
With cognition, by contrast, the claim is that it is in virtue of implement-
ing some computation that a system is cognitive. That is, there is a certain
class of computations such that any system implementing a computation in that class
is cognitive. We might go further and argue that every cognitive system
implements some computation such that any implementation of the com-
putation would also be cognitive, and would share numerous specific men-
tal properties with the original system. These claims are controversial, of
course, and I will be arguing for them in the next section. But note that it is
precisely this relation between computation and cognition that gives bite to
the computational analysis of cognition. If this relation or something like it
did not hold, the computational status of cognition would be analogous to
that of digestion.
What about Putnam’s argument? Putnam (1988) has suggested that on
a definition like this, almost any physical system can be seen to implement
every finite-state automaton. He argues for this conclusion by demonstrat-
ing that there will almost always be a mapping from physical states of a
system to internal states of an FSA, such that over a given time-period
(from 12:00 to 12:10 today, say) the transitions between states are just as the
machine table says they should be. If the machine table requires that state A
be followed by state B, then every instance of state A is followed by state
B in this time period. Such a mapping will be possible for an inputless FSA
under the assumption that physical states do not repeat. We simply map the
initial physical state of the system onto an initial formal state of the compu-
tation, and map successive states of the system onto successive states of the
computation.
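The construction can be pictured with a small Python sketch (mine, and deliberately trivial): given any run of an inputless FSA and any sequence of non-repeating physical states over an interval, we simply pair them up. The pairing makes every observed transition come out “right”, which is exactly why, as argued below, it falls short of implementation.

    def putnam_mapping(physical_trace, fsa_run):
        """Map the i-th physical state of the interval onto the i-th formal state of the run."""
        assert len(set(physical_trace)) == len(physical_trace), "physical states must not repeat"
        return dict(zip(physical_trace, fsa_run))

    # E.g. ten arbitrary wall-states between 12:00 and 12:10, and an FSA run A, B, A, B, ...
    trace = [f"wall_state_{t}" for t in range(10)]
    run = ["A", "B"] * 5
    mapping = putnam_mapping(trace, run)   # every observed A is followed by a B, and so on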
However, to suppose that this system implements the FSA in question is
to misconstrue the state-transition conditionals in the definition of imple-
mentation. What is required is not simply that state A be followed by state
B on all instances in which it happens to come up in a given time-period.
There must be a reliable, counterfactual-supporting connection between the
states. Given a formal state-transition A → B, it must be the case that if the
system were to be in state A, it would transit to state B. Further, such a con-
ditional must be satisfied for every transition in the machine table, not just
for those whose antecedent states happen to come up in a given time period.
It is easy to see that Putnam’s system does not satisfy this much stronger
requirement. In effect, Putnam has required only that certain weak material
conditionals be satisfied, rather than conditionals with modal force. For this
reason, his purported implementations are not implementations at all.
(Two notes. First, Putnam responds briefly to the charge that his system
fails to support counterfactuals, but considers a different class of counter-
factuals — those of the form “if the system had not been in state A, it would
not have transited to state B.” It is not these counterfactuals that are relevant
here. Second, it turns out that Putnam’s argument for the widespread real-
ization of inputless FSAs can be patched up in a certain way; this just goes
to show that inputless FSAs are an inappropriate formalism for cognitive
science, due to their complete lack of combinatorial structure. Putnam gives
a related argument for the widespread realization of FSAs with input and
output, but this argument is strongly vulnerable to an objection like the one
above, and cannot be patched up in an analogous way. CSAs are even less
vulnerable to this sort of argument. I discuss all this at much greater length
in Chalmers 1996a.)
What about semantics? It will be noted that nothing in my account of
computation and implementation invokes any semantic considerations,
such as the representational content of internal states. This is precisely as
it should be: computations are specified syntactically, not semantically.
Although it may very well be the case that any implementations of a given
computation share some kind of semantic content, this should be a conse-
quence of an account of computation and implementation, rather than built
into the definition. If we build semantic considerations into the conditions
for implementation, any role that computation can play in providing a
foundation for AI and cognitive science will be endangered, as the notion of
semantic content is so ill-understood that it desperately needs a foundation
itself.
The original account of Turing machines by Turing (1936) certainly had
no semantic constraints built in. A Turing machine is defined purely in
terms of the mechanisms involved, that is, in terms of syntactic patterns and
the way they are transformed. To implement a Turing machine, we need
only ensure that this formal structure is reflected in the causal structure of
the implementation. Some Turing machines will certainly support a system-
atic semantic interpretation, in which case their implementations will also,
but this plays no part in the definition of what it is to be or to implement
a Turing machine. This is made particularly clear if we note that there are
some Turing machines, such as machines defined by random sets of state-
transition quintuples, that support no non-trivial semantic interpretation.
We need an account of what it is to implement these machines, and such an
account will then generalize to machines that support a semantic interpreta-
tion. Certainly, when computer designers ensure that their machines imple-
ment the programs that they are supposed to, they do this by ensuring that
the mechanisms have the right causal organization; they are not concerned
with semantic content. In the words of Haugeland (1985), if you take care
of the syntax, the semantics will take care of itself.
I have said that the notion of computation should not be dependent on
that of semantic content; neither do I think that the latter notion should
be dependent on the former. Rather, both computation and content should
be dependent on the common notion of causation. We have seen the first
dependence in the account of computation above. The notion of content has
also been frequently analyzed in terms of causation (see e.g. Dretske 1981
and Fodor 1987). This common pillar in the analyses of both computation
and content allows that the two notions will not sway independently, while
at the same time ensuring that neither is dependent on the other for its anal-
ysis.
What about computers? Although Searle (1990) talks about what it
takes for something to be a “digital computer,” I have talked only about
computations and eschewed reference to computers. This is deliberate, as it
seems to me that computation is the more fundamental notion, and certainly
the one that is important for AI and cognitive science. AI and cognitive sci-
ence certainly do not require that cognitive systems be computers, unless
we stipulate that all it takes to be a computer is to implement some compu-
tation, in which case the definition is vacuous.
What does it take for something to be a computer? Presumably, a com-
puter cannot merely implement a single computation. It must be capable of
implementing many computations - that is, it must be programmable. In the
extreme case, a computer will be universal, capable of being programmed
to compute any recursively enumerable function. Perhaps universality
is not required of a computer, but programmability certainly is. To bring
computers within the scope of the theory of implementation above, we
could require that a computer be a CSA with certain parameters, such that
depending on how these parameters are set, a number of different CSAs can
be implemented. A universal Turing machine could be seen in this light, for
instance, where the parameters correspond to the “program” symbols on the
tape. In any case, such a theory of computers is not required for the study
of cognition.
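The suggestion about parameters can be sketched as follows (an illustration of mine, with invented names): the “program” is simply one component of the state vector, and a single fixed transition function consults it, in the spirit of a universal machine reading program symbols from its tape.

    from typing import Dict, Tuple

    Program = Dict[Tuple[str, str], Tuple[str, str]]   # an FSA-style transition table

    def programmable_step(state: Tuple[Program, str], inp: str) -> Tuple[Tuple[Program, str], str]:
        """One transition of the fixed 'computer'; its behavior depends on the program parameter."""
        program, internal = state
        new_internal, output = program[(internal, inp)]
        return (program, new_internal), output

    # Setting the parameter differently makes the same substrate implement different CSAs.
    prog_a: Program = {("S1", "I1"): ("S1", "O1")}
    prog_b: Program = {("S1", "I1"): ("S1", "O2")}
    _, out_a = programmable_step((prog_a, "S1"), "I1")   # 'O1'
    _, out_b = programmable_step((prog_b, "S1"), "I1")   # 'O2'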
Is the brain a computer in this sense? Arguably. For a start, the brain can
be “programmed” to implement various computations by the laborious
means of conscious serial rule-following; but this is a fairly incidental abil-
ity. On a different level, it might be argued that learning provides a certain
kind of programmability and parameter-setting, but this is a sufficiently
indirect kind of parameter-setting that it might be argued that it does not
qualify. In any case, the question is quite unimportant for our purposes.
What counts is that the brain implements various complex computations,
not that it is a computer.
The above is only half the story. We now need to exploit the above account
of computation and implementation to outline the relation between compu-
tation and cognition, and to justify the foundational role of computation in
AI and cognitive science.
Justification of the thesis of computational sufficiency has usually been
tenuous. Perhaps the most common move has been an appeal to the Turing
test, noting that every implementation of a given computation will have a
certain kind of behavior, and claiming that the right kind of behavior is suf-
ficient for mentality. The Turing test is a weak foundation, however, and
one to which AI need not appeal. It may be that any behavioral descrip-
tion can be implemented by systems lacking mentality altogether (such as
the giant lookup tables of Block 1981). Even if behavior suffices for mind,
the demise of logical behaviorism has made it very implausible that it suf-
fices for specific mental properties: two mentally distinct systems can have
the same behavioral dispositions. A computational basis for cognition will
require a tighter link than this, then.
Instead, the central property of computation on which I will focus is one
that we have already noted: the fact that a computation provides an abstract
specification of the causal organization of a system. Causal organization is
the nexus between computation and cognition. If cognitive systems have
their mental properties in virtue of their causal organization, and if that
causal organization can be specified computationally, then the thesis of
computational sufficiency is established. Similarly, if it is the causal organi-
zation of a system that is primarily relevant in the explanation of behavior,
then the thesis of computational explanation will be established. By the
account above, we will always be able to provide a computational specifica-
tion of the relevant causal organization, and therefore of the properties on
which cognition rests.
exchange with an electrical link); and (e) any other changes that do not alter
the pattern of causal interaction among parts of the system.
Most properties are not organizational invariants. The property of flying
is not, for instance: we can move an airplane to the ground while preserving
its causal topology, and it will no longer be flying. Digestion is not: if we
gradually replace the parts involved in digestion with pieces of metal, while
preserving causal patterns, after a while it will no longer be an instance of
digestion: no food groups will be broken down, no energy will be extracted,
and so on. The property of being a tube of toothpaste is not an organizational
invariant: if we deform the tube into a sphere, or replace the toothpaste by
peanut butter while preserving causal topology, we no longer have a tube of
toothpaste.
In general, most properties depend essentially on certain features that are
not features of causal topology. Flying depends on height, digestion depends
on a particular physiochemical makeup, tubes of toothpaste depend on
shape and physiochemical makeup, and so on. Change the features in ques-
tion enough and the property in question will change, even though causal
topology might be preserved throughout.
these two systems, N and S, which are identical except in that some circuit
in one is neural and in the other is silicon.
The key step in the thought-experiment is to take the relevant neural cir-
cuit in N, and to install alongside it a causally isomorphic silicon back-up
circuit, with a switch between the two circuits. What happens when we flip
the switch? By hypothesis, the system’s conscious experiences will change:
say, for purposes of illustration, from a bright red experience to a bright
blue experience (or to a faded red experience, or whatever). This follows
from the fact that the system after the change is a version of S, whereas
before the change it is just N.
But given the assumptions, there is no way for the system to notice these
changes. Its causal topology stays constant, so that all of its functional states
and behavioral dispositions stay fixed. If noticing is defined functionally
(as it should be), then there is no room for any noticing to take place, and
if it is not, any noticing here would seem to be a thin event indeed. There is
certainly no room for a thought “Hmm! Something strange just happened!”,
unless it is floating free in some Cartesian realm.3 Even if there were such a
thought, it would be utterly impotent; it could lead to no change of process-
ing within the system, which could not even mention it. (If the substitution were to yield some change in processing, then the systems would not have the same causal topology after all. Recall that the argument has the form of a reductio.) We might even flip the switch a number of times, so that red
and blue experiences “dance” before the system’s inner eye; it will never
notice. This, I take it, is a reductio ad absurdum of the original hypothesis:
if one’s experiences change, one can potentially notice in a way that makes
some causal difference. Therefore the original assumption is false, and phenomenal properties are organizational invariants.
3. In analyzing a related thought-experiment, Searle (1991) suggests that a subject who has undergone silicon replacement might react as follows: “You want to cry out, ‘I can’t see anything. I’m going totally blind’. But you hear your voice saying in a way that is completely out of your control, ‘I see a red object in front of me’” (pp. 66-67). But given that the system’s causal topology remains constant, it is very unclear where there is room for such “wanting” to take place, if it is not in some Cartesian realm. Searle suggests some other things that might happen, such as a reduction to total paralysis, but these suggestions require a change in causal topology and are therefore not relevant to the issue of organizational invariance.
4. I am skeptical about whether phenomenal properties can be explained in wholly physical terms. As I argue in Chalmers 1996b, given any account of the physical or computational processes underlying mentality, the question of why these processes should give rise to conscious experience does not seem to be answerable within physical or computational theory alone. Nevertheless, it remains the case that phenomenal properties depend on physical properties, and if what I have said earlier is correct, the physical properties that they depend on are organizational properties. Further, the explanatory gap with respect to conscious experience is compatible with the computational explanation of cognitive processes and of behavior, which is what the thesis of computational explanation requires.
5. Of course there is a sense in which it can be said that connectionist models perform “computation over representation”, in that connectionist processing involves the transformation of representations, but this sense is too weak to cut the distinction between symbolic and subsymbolic computation at its joints. Perhaps the most interesting foundational distinction between symbolic and connectionist systems is that in the former but not in the latter, the computational (syntactic) primitives are also the representational (semantic) primitives.
cates a given property comes down to the question of whether or not the
property is an organizational invariant. The property of being a hurricane is
obviously not an organizational invariant, for instance, as it is essential to
the very notion of hurricanehood that wind and air be involved. The same
goes for properties such as digestion and temperature, for which specific
physical elements play a defining role. There is no such obvious objection
to the organizational invariance of cognition, so the cases are disanalogous,
and indeed, I have argued above that for mental properties, organizational
invariance actually holds. It follows that a model that is computationally
equivalent to a mind will itself be a mind.
Syntax and semantics. Searle (1984) has argued along the following
lines: (1) A computer program is syntactic; (2) Syntax is not sufficient for
semantics; (3) Minds have semantics; therefore (4) Implementing a com-
puter program is insufficient for a mind. Leaving aside worries about the
second premise, we can note that this argument equivocates between pro-
grams and implementations of those programs. While programs themselves
are syntactic objects, implementations are not: they are real physical sys-
tems with complex causal organization, with real physical causation going
on inside. In an electronic computer, for instance, circuits and voltages push
each other around in a manner analogous to that in which neurons and
activations push each other around. It is precisely in virtue of this causation
that implementations may have cognitive and therefore semantic properties.
It is the notion of implementation that does all the work here. A program
and its physical implementation should not be regarded as equivalent —
they lie on entirely different levels, and have entirely different properties. It
is the program that is syntactic; it is the implementation that has semantic
content. Of course, there is still a substantial question about how an imple-
mentation comes to possess semantic content, just as there is a substantial
question about how a brain comes to possess semantic content. But once
we focus on the implementation, rather than the program, we are at least in
the right ball-park. We are talking about a physical system with causal heft,
rather than a shadowy syntactic object. If we accept, as is extremely plau-
sible, that brains have semantic properties in virtue of their causal organiza-
tion and causal relations, then the same will go for implementations. Syntax
may not be sufficient for semantics, but the right kind of causation is.
The Chinese room. There is not room here to deal with Searle’s famous
Chinese room argument in detail. I note, however, that the account I have
given supports the “Systems reply”, according to which the entire system
understands Chinese even if the homunculus doing the simulating does not.
Say the overall system is simulating a brain, neuron-by-neuron. Then like
any implementation, it will share important causal organization with the
brain. In particular, if there is a symbol for every neuron, then the patterns
of interaction between slips of paper bearing those symbols will mirror pat-
terns of interaction between neurons in the brain, and so on. This organiza-
tion is implemented in a baroque way, but we should not let the baroque-
ness blind us to the fact that the causal organization — real, physical causal
organization — is there. (The same goes for a simulation of cognition at a level above the neural, in which the shared causal organization will lie at a coarser level.)
It is precisely in virtue of this causal organization that the system pos-
sesses its mental properties. We can rerun a version of the “dancing qualia”
argument to see this. In principle, we can move from the brain to the Chi-
nese room simulation in small steps, replacing neurons at each step by little
demons doing the same causal work, and then gradually cutting down labor
by replacing two neighboring demons by one who does the same work.
Eventually we arrive at a system where a single demon is responsible for
maintaining the causal organization, without requiring any real neurons at
all. This organization might be maintained between marks on paper, or it
might even be present inside the demon’s own head, if the calculations are
memorized. The arguments about organizational invariance all hold here —
for the same reasons as before, it is implausible to suppose that the system’s
experiences will change or disappear.
Performing the thought-experiment this way makes it clear that we
should not expect the experiences to be had by the demon. The demon is
simply a kind of causal facilitator, ensuring that states bear the appropriate
causal relations to each other. The conscious experiences will be had by the
system as a whole. Even if that system is implemented inside the demon
by virtue of the demon’s memorization, the system should not be confused
with the demon itself. We should not suppose that the demon will share the
implemented system’s experiences, any more than it will share the experi-
ences of an ant that crawls inside its skull: both are cases of two computa-
tional systems being implemented within a single physical space. Mental
properties arising from distinct computational systems will be quite distinct,
and there is no reason to suppose that they overlap.
What about the environment? Some mental properties, such as knowl-
edge and even belief, depend on the environment being a certain way. Com-
putational organization, as I have outlined it, cannot determine the environ-
mental contribution, and therefore cannot fully guarantee this sort of mental
property. But this is no problem. All we need computational organization
to give us is the internal contribution to mental properties: that is, the same
contribution that the brain makes (for instance, computational organization
will determine the so-called “narrow content” of a belief, if this exists; see
Fodor 1987). The full panoply of mental properties might only be deter-
mined by computation-plus-environment, just as it is determined by brain-
plus-environment. These considerations do not count against the prospects
of artificial intelligence, and they affect the aspirations of computational
cognitive science no more than they affect the aspirations of neuroscience.
Is cognition computable? In the preceding discussion I have taken for
granted that computation can at least simulate human cognitive capacity,
and have been concerned to argue that this counts as honest-to-goodness
mentality. The former point has often been granted by opponents of AI (e.g.
Searle 1980) who have directed their fire at the latter, but it is not uncontro-
versial.
This is to some extent an empirical issue, but the relevant evidence is
solidly on the side of computability. We have every reason to believe that
the low-level laws of physics are computable. If so, then low-level neuro-
physiological processes can be computationally simulated; it follows that
the function of the whole brain is computable too, as the brain consists in a
network of neurophysiological parts. Some have disputed the premise: for
example, Penrose (1989) has speculated that the effects of quantum gravity
are noncomputable, and that these effects may play a role in cognitive func-
tioning. He offers no arguments to back up this speculation, however, and
there is no evidence of such noncomputability in current physical theory (see
Pour-El and Richards (1989) for a discussion). Failing such a radical devel-
opment as the discovery that the fundamental laws of nature are uncomput-
able, we have every reason to believe that human cognition can be compu-
tationally modeled.
What about Gödel’s theorem? Gödel’s theorem states that for any consistent formal system rich enough to express arithmetic, there are true statements of arithmetic that are unprovable within the system. This has led some (Lucas 1963; Penrose 1989) to
conclude that humans have abilities that cannot be duplicated by any com-
putational system. For example, our ability to “see” the truth of the Gödel
sentence of a formal system is argued to be non-algorithmic. I will not deal
with this objection in detail here, as the answer to it is not a direct applica-
tion of the current framework. I will simply note that the assumption that
we can see the truth of arbitrary Gödel sentences requires that we have the
ability to determine the consistency or inconsistency of any given formal
system, and there is no reason to believe that we have this ability in general. (For more on this point, see Putnam 1960, Bowie 1982, and the commentaries on Penrose 1990.)
Discreteness and continuity. An important objection notes that the CSA
formalism only captures discrete causal organization, and argues that some
cognitive properties may depend on continuous aspects of that organization,
such as analog values or chaotic dependencies.
A number of responses to this are possible. The first is to note that the
current framework can fairly easily be extended to deal with computation
over continuous quantities such as real numbers. All that is required is that
the various substates of a CSA be represented by a real parameter rather
than a discrete parameter, where appropriate restrictions are placed on
allowable state-transitions (for instance, we can require that parameters are
transformed polynomially, where the requisite transformation can be condi-
tional on sign). See Blum, Shub and Smale (1989) for a careful working-out
of some of the relevant theory of computability. A theory of implementation can be given along lines similar to the account I have given above,
where continuous quantities in the formalism are required to correspond
to continuous physical parameters with an appropriate correspondence in
state-transitions.
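For illustration, here is a minimal Python sketch of a transition of this continuous kind: the substates are real parameters, and the update is polynomial, with the choice of polynomial conditional on sign. The particular polynomials are arbitrary; the sketch only shows the shape of the formalism, not the Blum-Shub-Smale theory itself.

    from typing import Tuple

    State = Tuple[float, float]   # two real-valued substates

    def step(state: State, inp: float) -> State:
        """One discrete-time transition over real-valued substates."""
        x, y = state
        if x >= 0.0:                              # polynomial update, conditional on sign
            return (0.5 * x * x - y + inp, x)
        else:
            return (x + y * y + inp, -x)

    s: State = (0.2, -0.1)
    for _ in range(5):
        s = step(s, 0.05)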
This formalism is still discrete in time: evolution of the continuous states
proceeds in discrete temporal steps. It might be argued that cognitive orga-
nization is in fact continuous in time, and that a relevant formalism should
But artificial intelligence and computational cognitive science are not com-
mitted to the claim that the brain is literally a Turing machine with a mov-
ing head and a tape, and even less to the claim that that tape is the environ-
ment. The claim is simply that some computational framework can explain
and replicate human cognitive processes. It may turn out that the relevant
computational description of these processes is very fine-grained, reflect-
ing extremely complex causal dynamics among neurons, and it may well
turn out that there is significant variation in causal organization between
individuals. There is nothing here that is incompatible with a computational
approach to cognitive science.
In a similar way, a computationalist need not claim that the brain is a von
Neumann machine, or has some other specific architecture. Like Turing
machines, von Neumann machines are just one kind of architecture, par-
ticularly well-suited to programmability, but the claim that the brain imple-
ments such an architecture is far ahead of any empirical evidence and is
most likely false. The commitments of computationalism are more general.
Computationalism is occasionally associated with the view that cogni-
tion is rule-following, but again this is a strong empirical hypothesis that
is inessential to the foundations of the fields. It is entirely possible that the
only “rules” found in a computational description of thought will be at a
very low level, specifying the causal dynamics of neurons, for instance, or
perhaps the dynamics of some level between the neural and the cognitive.
Even if there are no rules to be found at the cognitive level, a computational
approach to the mind can still succeed. (Another claim to which a computationalist need not be committed is “the brain is a computer”; as we have seen, it is not computers that are central but computations.)
The most ubiquitous “strong” form of computationalism has been what
we may call symbolic computationalism: the view that cognition is compu-
tation over representation (Newell and Simon 1976; Fodor and Pylyshyn
1988). To a first approximation, we can cash out this view as the claim that
the computational primitives in a computational description of cognition
are also representational primitives. That is to say, the basic syntactic enti-
ties between which state-transitions are defined are themselves bearers of
semantic content, and are therefore symbols.
Symbolic computationalism has been a popular and fruitful approach
to the mind, but it does not exhaust the resources of computation. Not all
computations are symbolic computations. We have seen that there are some
Turing machines that lack semantic content altogether, for instance. Perhaps
systems that carry semantic content are more plausible models of cognition,
but even in these systems there is no reason why the content must be car-
ried by the systems’ computational primitives. In connectionist systems, for
example, the basic bearers of semantic content are distributed representa-
tions, patterns of activity over many units, whereas the computational prim-
itives are simple units that may themselves lack semantic content. To use
Smolensky’s term (Smolensky 1988), these systems perform subsymbolic
computation: the level of computation falls below the level of representa-
tion.6 But the systems are computational nevertheless.
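A toy Python sketch of the contrast (mine, with arbitrary numbers): the computational primitives are individual units whose activations are updated syntactically, while anything deserving to be called a representation is a distributed pattern over many units, assessed here by proximity to a prototype pattern.

    import math

    def unit_update(activations, weights):
        """Unit-level (syntactic) step: each unit sums its weighted inputs and squashes."""
        sums = [sum(w * a for w, a in zip(row, activations)) for row in weights]
        return [1.0 / (1.0 + math.exp(-s)) for s in sums]

    def resembles(pattern, prototype, threshold=0.9):
        """Content lives at the level of whole patterns, e.g. closeness to a prototype."""
        dot = sum(p * q for p, q in zip(pattern, prototype))
        norm = math.sqrt(sum(p * p for p in pattern)) * math.sqrt(sum(q * q for q in prototype))
        return norm > 0 and dot / norm > threshold

    acts = unit_update([0.1, 0.9, 0.4],
                       [[0.2, -0.5, 0.7], [0.4, 0.1, -0.3], [-0.6, 0.8, 0.2]])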
6. [Note added 2011.] In order to make them compatible with the views of consciousness in Chalmers 1996b, the thesis of computational sufficiency and the claim that mental properties are organizational invariants must be understood in terms of nomological rather than metaphysical necessity: the right kind of computation suffices with nomological necessity for possession of a mind, and mental properties supervene nomologically on causal topology. These claims are compatible with the metaphysical possibility of systems with the same organization and no consciousness. As for the thesis of computational explanation: if one construes cognitive processes to include arbitrary intentional or representational states, then I think these cannot be explained wholly in terms of computation, as I think that phenomenal properties and environmental properties play a role here. One might qualify the thesis by understanding “cognitive processes” and “behavior” in functional and nonintentional terms, or by saying that computational explanation can undergird intentional explanation when appropriately supplemented, perhaps by phenomenal and environmental elements. Alternatively, the version of the thesis most directly supported by the argument in the text is that computation provides a general framework for the mechanistic explanation of cognitive processes and behavior. That is, insofar as cognitive processes and behavior are explainable mechanistically, they are explainable computationally.
7. It is common for proponents of symbolic computationalism to hold, usually as an unargued premise, that what makes a computation a computation is the fact that it involves representations with semantic content. The books by Fodor (1975) and Pylyshyn (1984), for instance, are both premised on the assumption that there is no computation without representation. Of course this is to some extent a terminological issue, but as I have stressed in 2.2 and here, this assumption has no basis in computational theory and unduly restricts the role that computation plays in the foundations of cognitive science.
8. Some other claims with which computationalism is sometimes associated include “the brain is a computer”, “the mind is to the brain as software is to hardware”, and “cognition is computation”. The first of these is not required, for the reasons given in 2.2: it is not computers that are central to cognitive theory but computations. The second claim is an imperfect expression of the computationalist position for similar reasons: certainly the mind does not seem to be something separable that the brain can load and run, as a computer’s hardware can load and run software. Even the third does not seem to me to be central to computationalism: perhaps there is a sense in which it is true, but what is more important is that computation suffices for and explains cognition. See Dietrich (1990) for some related distinctions between computationalism, “computerism”, and “cognitivism”.
that it can capture almost any kind of organization, whether the causal rela-
tions hold between high-level representations or among low-level neural
processes. Even such programs as the Gibsonian theory of perception are
ultimately compatible with minimal computationalism. If perception turns
out to work as the Gibsonians imagine, it will still be mediated by causal
mechanisms, and the mechanisms will be expressible in an appropriate
computational form. That expression may look very unlike a traditional
computational theory of perception, but it will be computational neverthe-
less.
In this light, we see that artificial intelligence and computational cogni-
tive science do not rest on shaky empirical hypotheses. Instead, they are
consequences of some very plausible principles about the causal basis of
cognition, and they are compatible with an extremely wide range of empiri-
cal discoveries about the functioning of the mind. It is precisely because
of this flexibility that computation serves as a foundation for the fields in
question, by providing a common framework within which many different
theories can be expressed, and by providing a tool with which the theories’
causal mechanisms can be instantiated. No matter how cognitive science
progresses in the coming years, there is good reason to believe that compu-
tation will be at center stage.
References
Armstrong, D.M. 1968. A Materialist Theory of the Mind. Routledge and Kegan
Paul.
Block, N. 1981. Psychologism and behaviorism. Philosophical Review 90: 5-43.
Blum, L., Shub, M., and Smale, S. 1989. On a theory of computation and complexity
over the real numbers: NP-completeness, recursive functions, and universal
machines. Bulletin (New Series) of the American Mathematical Society 21(1):
1-46.
Bowie, G. 1982. Lucas’ number is finally up. Journal of Philosophical Logic 11:
279-85.
Chalmers, D.J. 1995. Absent qualia, fading qualia, dancing qualia. In (T. Metzinger, ed) Conscious Experience. Ferdinand Schöningh.
Chalmers, D.J. 1996a. Does a rock implement every finite-state automaton? Synthese.