Harnad responds
Agreement is boring. New ideas arise from challenges to current ones.
Necessity is the mother of invention. So it is with relief that I see
that Searle and I have plenty to disagree about:
Syntax vs. semantics
No, it wasn't obvious at all that a dynamic implementation of a purely
syntactic system could not generate semantics. In fact, nothing is
obvious in the area of semantics. Grandma ``knew'' all along that
computers couldn't be thinking; Searle can't imagine how a gymful of
boys or a robot could be thinking; I can't imagine how a lump of
neurons could (and Father O'Grady, with a benign smile, knows they
couldn't have, if they hadn't had some help). Nothing faintly obvious
here. So computationalism was worth a go -- until some of us started
thinking about it (in no small measure thanks to Searle 1980), and the
rest is still history in the making. But the force of Searle's
argument certainly is not that it's obvious that there's no
way to get from syntax to semantics; surely there's a bit more to it
than that. Besides, I'll wager that as obvious as it is that there's no
way to get semantics from syntax, it'll be just as obvious that you
can't get it from any other candidate that's clearly enough in focus
for you to give it a thorough look-over. That's what's called the
mind/body problem.
Mind and meaning
Let me state something pretty plainly, something that Searle has only
been making tentative gestures toward (although the relevance and
credibility of his testimony in the Chinese room -- to the effect that
he does not understand Chinese -- relies on it completely):
There's only one kind of meaning: the kind that my thoughts have when
I think something, say, ``the cat is on the mat,'' and in thinking it,
I have something in mind. Put even more bluntly, there's something
it's like to think that the cat is on the mat, and the kind of
thing that that is like is the essential feature of thinking,
and meaning. Take time to mull it over. I'm saying that only minds
have meaningful states, and that their meaningfulness is derived
entirely from their subjective quality. That's intrinsic semantics.
Everything else, if it's interpretable as meaning anything at all, is
just extrinsic semantics, derived intentionality, or what have you --
as in the pages of a book, the output of a computerized dictionary or
the portent of a celestial configuration. If this is true, it is bad
news for ``unconscious thoughts,'' worse news for ``unconscious
minds'' and even worse news for systems in which there is nobody home
at all (as opposed to just someone sleeping) if such systems
nevertheless aspire to have intrinsic semantics. Having extrinsic
semantics just means being ``interpretable as if it were meaningful''
by a system that has intrinsic semantics.
Grounding and meaning
Is there a third possibility? Can something be more than just
``interpretable as if it meant X'' but less than a thought in a
conscious mind? This is the mind-modeller's counterpart of the
continuum hypothesis or $P = NP$, but it is both empirically and
logically undecidable. The internal states of a grounded TTT system
are not just formally interpretable as if they meant what they mean;
the system itself acts in full accordance with the interpretation.
Causal interaction with the objects that the symbols are interpretable
as being about is not just syntax any more; syntax is just the formal
relations among the symbols. But is that enough to guarantee that the
semantics are now intrinsic? Or is grounded semantics just a
``stronger'' form of extrinsic semantics?
Ontology and epistemology
I don't make the ontic/epistemic confusions Searle thinks I make (I am
kept too much on my toes pointing them out in others!). I am fully
aware that not only the TT, but the TTT and even the TTTT are
incapable of guaranteeing the presence of mind, and hence intrinsic
meaning. But I'm also aware that the T-hierarchy is not just a series
of behavioristic digressions from the correct empirical path: it
is the empirical path; in fact, the TTTT exhausts the
empirical possibilities (Harnad 1992a; 1994). Searle himself is an
advocate of the TTTT. He can't imagine settling for less. Yet he
admits that we only want relevant TTTT powers. How are we to
know which ones those are?
Let us admit that we're doing reverse engineering rather than ``basic
science'' and hope that the constraint of finding out what is needed
to make a system that can do everything the brain can do will
allow us to pick out its relevant powers. No guarantees, of course,
but worrying too much about the outcome is tantamount to believing
that (1) TTT-indistinguishable Zombies could have made it in the world
just as successfully as we could, but we just don't happen to be
Zombies (but then how could evolution tell the difference, favoring
us, since it's not a mind-reader either?), and that (2) the degrees of
freedom for successfully building TTT-scale systems are large enough
to admit radically different solutions, some Zombies and some not. I
think it is more likely that the TTT is just the right relevance
filter for the TTTT. Otherwise we're stuck with modelling a lot of
what might be irrelevant TTTT properties.
Is the Other-Minds Problem Irrelevant?
As a form of skepticism -- worrying because we can't be sure other
people have minds -- the other-minds problem is not particularly
useful. But it is unavoidable when it comes to empirical work on
other organisms, artificial mind-modelling or the brain itself. The
question comes up naturally: How are we to ascertain whether or not
this system has a mind? There are no guarantees, but there are some
``dead end'' signs (like the Chinese Room Argument), and, one hopes,
some positive guides too, such as the TTT and groundedness. By
Searle's lights, there is only one: the TTTT.
Is Transduction Unmotivated ``Speculative Neurophysiology''?
I think there is plenty of evidence that a large portion of the
nervous system is devoted to sensory and motor transduction and their
multiple internal analog projections (e.g., Chamberlain & Barlow
1982). Transduction is also motivated a priori by the logical
requirements of a TTT robot, the real/virtual robot/world distinction,
and immunity to the Chinese Room Argument. Besides, it's no kind of
neurophysiology if one's empirical constraint is the TTT rather than
the TTTT, as mine is.
A few loose ends
- Contrary to Searle's suggestion, there is (of course) a
causal connection between the hardware of a machine and the software
it is executing; it's just that those physical details are not
relevant to the computation, and the causal connection is the wrong
kind if a mind is what one is hoping to implement. (I think this
is the same conclusion Searle wanted to draw.)
- The cocaine example is a red herring, because nets are not being
proposed as models for pharmacological function but for
physiological function. But the gym example continues to be just a
caricature rather than an argument.
- My hybrid grounding program is not committed to computationalism
(I would be content to see most of the cognitive groundwork done
nonsymbolically), but I do think the internal substrate of language
will turn out to have something symbolic about it. Besides, the
Chinese Room Argument and the Symbol Grounding Problem show only
that cognition can't all be just computation, not that cognition
can't be computation at all. On the other hand, it's not clear
whether a grounded symbol system, with its second layer of analog
constraints, is still really much of a symbol system, in the formal
syntactic sense, at all.