While I agree in general with Stevan Harnad's symbol grounding proposal, I do not believe that ``transduction'' (or ``analog process'') per se is useful in distinguishing between what might best be described as different ``degrees'' of grounding or, hence, in determining whether a particular system might be capable of cognition. By ``degrees of grounding'' I mean whether the effects of grounding go ``all the way through'' or not. Why is transduction limited in this regard? Because transduction is a physical process that does not speak to the issue of representation and therefore does not explain how the informational aspects of signals impinging on sensory surfaces become embodied as symbols, or how those symbols subsequently cause behavior, both of which, I believe, are important to grounding and to a system's cognitive capacity. Immunity to Searle's Chinese Room (CR) argument does not ensure that a particular system is cognitive, and whether a particular degree of groundedness enables a system to pass the Total Turing Test (TTT) may never be determined.
It is clear that transduction is necessary for realizing robotic capacity and for grounding, as Harnad emphasizes. But how would the symbols in Harnad's ``hybrid analog/symbolic robot'' be any less arbitrary than symbols that are bitmaps of the objects and categories to which they refer --- perhaps projected onto a digital medium (e.g., a laser disk) through a camera --- in a ``core'' symbol manipulation system? Though Harnad would most likely consider this arrangement of computer and camera a ``computational core-in-a-vat'' system (Section 6.4), digitizing the camera's analog input is certainly an example of ``processing analog sensory input'' (Section 2.2). Thus, not only would such a system be immune to Searle's CR argument, but its bitmaps would also be nonarbitrary in relation to what they are bitmaps of, especially if they are produced directly from analog images. Could such a system pass the TTT? With efferent decoders and transducers, there is no a priori reason to suppose it could not; that is, no reason to assume that its symbols would not ``cohere systematically with its robotic transactions with the objects, events and states of affairs that its symbols are interpretable as being about'' (Section 7.1). Yet, intuitively, such a system could hardly be said to lay claim to mentality any more than the CR system could. And on the basis of transduction alone, neither could Harnad's robot, because transduction does not distinguish the degree to which the symbols in his system and those in the computer-plus-camera system are grounded.
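To fix ideas, here is a minimal sketch of such a computer-plus-camera arrangement. The code and names are my own illustration, not anything proposed in the target article; it shows only how a bitmap digitized directly from an analog projection preserves the shape of its referent (and is, to that extent, nonarbitrary) while remaining just another token handed to the core symbol system.

\begin{verbatim}
import numpy as np

# Illustrative sketch only: a "computer-plus-camera" whose symbols are
# bitmaps digitized directly from analog images. The bitmap resembles
# what it depicts, yet once stored it is simply another token passed to
# the core symbol system.

def digitize(analog_image, threshold=0.5):
    """Turn a grayscale 'analog' projection into a binary bitmap symbol."""
    return (np.asarray(analog_image) >= threshold).astype(int)

# A toy analog projection arriving through the camera.
analog_projection = [[0.9, 0.2, 0.8],
                     [0.1, 0.7, 0.1],
                     [0.9, 0.2, 0.8]]

bitmap_symbol = digitize(analog_projection)
print(bitmap_symbol)   # the bitmap preserves the projection's shape
\end{verbatim}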
What would determine the degree of grounding is how the symbols are causal. For example, even though the computer-plus-camera system is grounded and the CR is not, the two systems are physically similar with respect to how their symbols cause change. In other words, the symbols in the computer-plus-camera system are not grounded ``all the way through'': once these ``bitmap symbols'' are input to the computer via the camera, the two systems become identical insofar as the forms of the symbols in both are arbitrary with respect to how they cause change. Why? Because in digital computers, symbols cause change through the physical process of pattern matching (Boyle, 1990; 1991; 1992); like pieces in a puzzle, they ``fit'' (match) the left-hand sides (matchers) of rules whose right-hand-side actions are subsequently triggered. As long as there is a fit and the matcher is associated with the appropriate action --- that is, an action that conforms with a systematic interpretation of the symbols --- it does not matter what the symbols look like (e.g., whether they are bitmaps or propositional representations of their referents). Furthermore, it does not matter through what physical processes --- what sorts of encodings or transductions --- the symbols originated. Thus, if Harnad's ``symbols and symbolic activity'' function through pattern matching, then regardless of how they are connected to the sensory projections of the objects to which they refer (e.g., by connectionist networks, pointers, etc.), his robot would be little more than a computational system with a particular peripheral transduction mechanism, even though he sees the presence of a ``second constraint, that of the nonarbitrary `shape' of the sensory invariants that connect the symbol to the analog sensory projection of the object to which it refers'' (Section 7.5) as a significant difference.
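The following sketch, again my own illustration with hypothetical rules and actions, makes the point concrete: a token and a bitmap paired with the same action produce exactly the same behavior, so the shape of the symbol does no causal work beyond permitting the match.

\begin{verbatim}
# Minimal sketch of rule-based pattern matching (PM): a symbol causes
# change only by literally matching a rule's left-hand side; its internal
# form (token, bitmap, proposition) is irrelevant to the causal story.

def run_rules(symbol, rules):
    """Fire the action of the first rule whose matcher equals the symbol."""
    for matcher, action in rules:
        if symbol == matcher:          # the "fit": pure form-matching
            return action()
    return None

# Two encodings of "cat": a token and a toy bitmap. As long as each is
# paired with the appropriate action, the system behaves identically;
# the symbol's shape plays no causal role beyond permitting the match.
rules_token  = [("CAT", lambda: "avoid the dog")]
rules_bitmap = [(((0, 1), (1, 0)), lambda: "avoid the dog")]

print(run_rules("CAT", rules_token))              # -> avoid the dog
print(run_rules(((0, 1), (1, 0)), rules_bitmap))  # -> avoid the dog
\end{verbatim}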
But how could this ``second constraint'' really be a constraint if it does not affect how the symbols cause change (a constraint cannot consist merely in the existence of a connection between analog forms and symbols)? If the symbols effect change through pattern matching (PM), then the nonarbitrary shape of the sensory invariants is superfluous and Harnad's robot would be cognitively equivalent to the computer-plus-camera system. Therefore, the only way for the nonarbitrary shape of the sensory invariants to affect behavior is through a causal mechanism other than PM, one that would presumably ground the system ``all the way through''. Harnad actually alludes to such a mechanism when he notes that discrimination could be accomplished by ``superimposing analog projections of objects'' (Section 7.2; my italics) and that category structures could be generated through ``analog reduction'' (Harnad, 1990). Such processes involve what I call ``structure-preserving superposition'' or SPS (Boyle, 1991; 1992), which is fundamentally different from PM. Thus, if Harnad wants to distinguish his robot from so-called computational core-in-a-vat systems, he should consider the category structures themselves to be symbols and have them effect change via SPS, which would ground the system ``all the way through''. Obviously, SPS is an analog process, but, more importantly, it is a mechanism by which symbols cause change according to what they represent, in a physically principled way.
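To suggest the contrast, the sketch below gives one possible reading of SPS. It is only an illustration under my own simplifying assumptions, not Boyle's or Harnad's actual mechanism: behavior varies in a graded way with the structural overlap between an analog projection and a stored category structure, so the shape of the representation itself does the causal work, unlike the all-or-none fit of PM.

\begin{verbatim}
import numpy as np

# Illustrative sketch only: analog projections as arrays, superimposed so
# that the structure of the input shapes the outcome. The effect varies
# continuously with how well the projection overlaps the stored category
# structure, rather than depending on an exact form match.

def superpose(projection, category_structure):
    """Overlay two analog projections and return their pointwise overlap."""
    return projection * category_structure

def response_strength(projection, category_structure):
    """Graded effect: normalized overlap, not an all-or-none match."""
    overlap = superpose(projection, category_structure).sum()
    return overlap / category_structure.sum()

# Toy "analog projections": a stored category structure and two inputs,
# one close to it in shape, one not.
category = np.array([[0., 1., 0.],
                     [1., 1., 1.],
                     [0., 1., 0.]])
near     = np.array([[0., 1., 0.],
                     [1., 1., 0.],
                     [0., 1., 0.]])
far      = np.array([[1., 0., 1.],
                     [0., 0., 0.],
                     [1., 0., 1.]])

print(response_strength(near, category))  # high overlap -> strong effect
print(response_strength(far, category))   # low overlap  -> weak effect
\end{verbatim}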