
Harnad responds

MacLennan suggests that analog computers also have symbols and symbol grounding problems. What I'm not altogether sure of is what he means by ``continuous meaning assignments.'' I know what discrete symbols (like ``chair'' or ``3'') and their corresponding meanings are. But what are continuous symbols and meanings? Or is it ``meaning assigned to a continuum of values of a physical variable,'' as in interpreting the height of the mercury as proportional to the real temperature? The case is instructive, because where there is a true isomorphism between an internal continuum and an external one, it is much easier to put them into causal connection (as in the case of the thermometer), in which case of course the internal ``symbol'' is ``grounded.''

But I take MacLennan's meaning to be that the interpretations of analog computers' states are just as ungrounded (interpretation-mediated) in their ordinary uses as those of digital computers. I imagine that an analog computer would be ungrounded even if it could pass the TT (despite being, like PAR, immune to the Chinese Room Argument), but to show this one would have to individuate its symbols, for without those there is no subject of the grounding problem! And if Fodor & Pylyshyn (1988) are right, then under those conditions that analog computer would probably have to be implementing a systematic, compositional, language-of-thought-style discrete symbol system (in which case its analog properties would be irrelevant implementational details) and we would be back where we started. In any case, the TTT would continue to be the decisive test, and for this the analog computer (because of its ready isomorphism with sensorimotor transducer activity) may have an edge in certain respects.

MacLennan does take a passing shot at the Chinese Room Argument (with the ``multiple virtual minds'' version of the ``system reply''), to which I can't resist replying that, once one subtracts the interpretation (i.e., once one steps out of the hermeneutic circle), a symbol system, no matter how many hierarchical layers of interpretation it might be amenable to, has about as much chance of instantiating minds (whether one, two, or three) as a single, double, or triple acrostic, and for roughly the same reasons (``virtual-worlds'' enthusiasts would do well to pause and ponder this point for a while).

MacLennan also falls into the epistemic/ontic confusion when he writes about how ``psychologists and ethologists routinely attribute `understanding' and other mental states to other organisms on the basis of external tests,'' or how this psychologist ``defines'' them behaviorally or that one does it operationally. The ontic question (of whether or not a system really has a mind) in no way depends on, nor can it be settled by, what we've agreed to attribute to the system; it depends only on whether somebody's really home in there. That's not answerable by definitions, operational or otherwise.