Instituut voor Taal- en Kennistechnologie
Institute for Language Technology and Artificial Intelligence

A Note on the Symbol Grounding Problem and its Solution

Vasant Honavar

Hilbert's quest for a purely syntactic framework for all of mathematics led to what we now call formal systems or symbol systems. The meaningless statements of a formal system are finite sequences of abstract symbols. A finite number of such statements are taken as axioms of the system, and a finite number of transformation rules specify how one string of symbols can be converted into another. What do these symbols have to do with anything? The answer is: nothing --- unless there is some way to interpret such symbols. But how? No such interpretation can be complete if it is performed by another formal (and hence equally meaningless) system that translates the symbols into other symbols using a codebook or a dictionary, because even the simplest (atomic) symbols are necessarily meaningless (unless an external observer reads meaning into them for his or her own purposes). As Harnad points out, it is this essential meaninglessness of symbols in a symbol system that Searle exploited in his critique of strong AI in the form of the Chinese room argument.
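
To fix intuitions, here is a minimal sketch of such a system in Python. The rules are those of Hofstadter's well-known MIU system, chosen purely for illustration (nothing in the argument depends on this particular choice): one axiom, four rewrite rules, and theorems generated by blind application of the rules to symbol strings, with no appeal at any point to what the symbols mean.

    AXIOM = "MI"

    def rules(s):
        """Yield every string derivable from s by one rewrite step."""
        if s.endswith("I"):              # Rule 1: xI -> xIU
            yield s + "U"
        if s.startswith("M"):            # Rule 2: Mx -> Mxx
            yield "M" + s[1:] * 2
        for i in range(len(s) - 2):      # Rule 3: III -> U
            if s[i:i + 3] == "III":
                yield s[:i] + "U" + s[i + 3:]
        for i in range(len(s) - 1):      # Rule 4: UU -> (nothing)
            if s[i:i + 2] == "UU":
                yield s[:i] + s[i + 2:]

    # Theorems are whatever the rules reach from the axiom; at no
    # point does the procedure consult what the symbols mean.
    theorems = {AXIOM}
    frontier = {AXIOM}
    for _ in range(3):                   # derive theorems to depth 3
        frontier = {t for s in frontier for t in rules(s)} - theorems
        theorems |= frontier
    print(sorted(theorems, key=len))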

Harnad's proposal is to imbue the symbols in a symbol system with meaning by ensuring that they are physically grounded in the environment. I will call this proposal Harnad's symbol-grounding thesis (HSG thesis). In particular, Harnad proposes a hybrid model in which a certain class of neural networks, by learning to categorize analog sensory projections into symbols (category names), establishes bottom-up grounding of the symbols in a symbol system. Presumably similar processes would be at work at the motor interface.
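
Harnad does not commit to a particular network or learning rule, so the following Python sketch is only a schematic stand-in of my own devising: a nearest-prototype categorizer that maps continuous "sensory projections" (here, toy two-dimensional points) to discrete category names, the elementary symbols that would be handed up to the symbol system.

    import numpy as np

    def learn_prototypes(projections, names):
        """Average the analog projections seen under each category name."""
        protos = {}
        for x, name in zip(projections, names):
            mean, n = protos.get(name, (np.zeros_like(x), 0))
            protos[name] = ((mean * n + x) / (n + 1), n + 1)
        return {name: mean for name, (mean, _) in protos.items()}

    def categorize(x, protos):
        """Ground an analog input in a symbol: the nearest prototype's name."""
        return min(protos, key=lambda name: np.linalg.norm(x - protos[name]))

    # Toy "retinal" projections standing in for analog sensory input.
    rng = np.random.default_rng(0)
    xs = np.concatenate([rng.normal(0.0, 0.2, (20, 2)),   # one category
                         rng.normal(1.0, 0.2, (20, 2))])  # another category
    labels = ["horse"] * 20 + ["zebra"] * 20
    protos = learn_prototypes(xs, labels)
    print(categorize(np.array([0.9, 1.1]), protos))       # -> zebra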

Let us examine the HSG thesis a little more closely. The HSG thesis is relevant if we concede that a symbol system is a necessary component of a cognitive architecture. In that case, the critical task is to make the otherwise meaningless symbols in a symbol system meaningful (to the system itself, not just to some external observer doing the interpretation) through physical grounding: employing processes that causally associate energy states in the physical world with symbols of the symbol system (through transducers) and, conversely, symbols with energy states (through effectors).

Analog sensory projections seem to be a critical component of Harnad's solution to the symbol grounding problem. But the trouble is that "analog process", like "computation", is a vague term. If analog means continuous (as opposed to discrete), the centrality of analog sensory projections appears questionable. Physicists have not resolved the controversy over whether the physical world is truly analog or discrete. Insisting on analog sensory projections is tantamount to suggesting that the physical world is fundamentally analog --- a proposition of questionable validity if we accept wave-particle duality. For symbol grounding, the critical distinction is the one between a formal symbol system (which in its ungrounded form has no causal powers) and a physical system (with causal powers) --- not the distinction between continuous and discrete processes. (Of course, this does not mean that analog processes embodied in physical systems do not play an important, perhaps even necessary, role in intelligent behaviour.)

What appears to be essential for symbol grounding is energy transfer across the interface between the system and its environment: such transfer activates symbol structures in response to transduced environmental states and, through effectors driven by the states of the symbol system, brings about changes in environmental states. (This discussion applies equally to the system's internal physical environment (the physico-chemical basis of pain, pleasure, etc.), which can ground symbols just as the external environment does.) The meaning of symbol structures is a consequence of their role in the causal loop that connects the system with its environment.
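
The causal loop can be made concrete with a cartoon, again of my own devising rather than anything in Harnad's proposal: a thermostat-like system in which energy crosses the interface twice per cycle, once through a transducer (environment state to symbol) and once through an effector (symbol to change of environment state). Whatever "TOO_HOT" means to this system, it means it only in virtue of its place in this loop.

    def transduce(temperature):
        """Transducer: physical energy state -> activated symbol."""
        return "TOO_HOT" if temperature > 25.0 else "OK"

    def decide(symbol):
        """Symbol system: a purely formal step over symbol tokens."""
        return "COOL" if symbol == "TOO_HOT" else "IDLE"

    def effect(action, temperature):
        """Effector: symbol -> change in the environmental energy state."""
        return temperature - 1.0 if action == "COOL" else temperature

    temperature = 30.0
    for step in range(8):
        symbol = transduce(temperature)            # world -> symbol
        action = decide(symbol)                    # symbol -> symbol
        temperature = effect(action, temperature)  # symbol -> world
        print(step, symbol, action, round(temperature, 1))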

Harnad's proposal also implies that the shapes of symbols in a grounded symbol system are not arbitrary --- i.e., because of the intrinsic meaning that a grounded symbol embodies (by virtue of its grounding), the system cannot interpret the symbols in an arbitrary fashion. In other words, the shape of a symbol might be a consequence of the category of sensory inputs that it has come to represent (via grounding). However, it is far from clear that learning is essential for symbol grounding (which is not to say that learning is unimportant for a variety of other reasons). It is easy to imagine (as Harnad concedes) systems in which the necessary grounding of symbols is hard-wired at birth. In this case, one might argue that the grounding of symbols was discovered by evolutionary processes. But then it is not clear whether such symbols can be attributed the same sort of meaning as symbols grounded in learned categories. Extrapolating this line of thought leads to some intriguing questions concerning the locus of semantics: is it the organism (system)? the species? the gene? the environment? the cosmos?

Harnad's arguments for the need to ground symbols also raise additional questions about the working hypothesis of strong AI --- that cognition is computation. If by computation we mean the formal notion of computation put forth by Turing, and we take cognition to mean something beyond merely formal symbol manipulation, then Harnad's argument calls into question the adequacy of our current notions of computation for realizing intelligent systems.
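
For concreteness, the formal notion at issue can be pinned down with a standard textbook construction (my example, not one from the target article): a Turing machine is a finite table of rules read against a tape, and nothing in the table refers to anything beyond the tape. The toy machine below flips every bit it scans and halts on a blank; the manipulation is formal in exactly the sense at issue.

    def run(tape, rules, state="s", head=0):
        """Apply the rule table to the tape until the halt state."""
        while state != "halt":
            symbol = tape.get(head, "_")             # "_" is the blank
            state, write, move = rules[(state, symbol)]
            tape[head] = write
            head += {"R": 1, "L": -1}[move]
        return tape

    RULES = {
        ("s", "0"): ("s", "1", "R"),     # flip 0 -> 1, move right
        ("s", "1"): ("s", "0", "R"),     # flip 1 -> 0, move right
        ("s", "_"): ("halt", "_", "R"),  # blank: halt
    }

    tape = dict(enumerate("0110"))
    print(run(tape, RULES))  # -> {0:'1', 1:'0', 2:'0', 3:'1', 4:'_'}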

