An analog watch is an analog of the rotation of the earth (times two, i.e., two rotations per day). The hour hand, for example, revolves exactly twice for each rotation of the earth. The watch preserves continuity between times as continuity between positions. That is, for every degree the earth rotates, the hour hand rotates two degrees; two times that surround a third time have corresponding hand positions that surround a third position (e.g., 1:00 and 3:00 have 2:00 between them both in time and in hand position); and similarity between times corresponds proportionally to similarity between hand positions (within 12-hour limits).
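A minimal sketch in Python (the function name and the decimal-hours convention are my own, chosen only for illustration) makes the proportional mapping explicit:

    # Map a time of day to an hour-hand angle. The mapping is continuous
    # and proportional: the hand turns 720 degrees (two revolutions) for
    # each 360-degree rotation of the earth (24 hours), i.e., two degrees
    # of hand per degree of earth.
    def hour_hand_angle(hours: float) -> float:
        """Angle of the hour hand in degrees, for a time given in hours (0-24)."""
        return (hours % 12.0) / 12.0 * 360.0

    # Nearby times map to nearby angles, and 2:00 lies between 1:00 and
    # 3:00 in hand position just as it does in time.
    assert hour_hand_angle(2.0) == (hour_hand_angle(1.0) + hour_hand_angle(3.0)) / 2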
A digital watch also represents the passage of time, but there is no analog relation between the parts of the digital display and the passage of time. Each pattern of symbols appears twice during each 24 hours. The symbols depend on the time of day, but the similarity between two displays does not correspond to the similarity between the two times. For example, 9:58 and 9:59 differ by one minute, and 9:59 and 10:00 differ by the same amount, yet the first pair of times share many display features, while the second pair do not.
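The point can be made concrete with a small Python sketch (the character-matching measure of display similarity is one I have chosen for illustration, not the only one possible):

    def shared_characters(a: str, b: str) -> int:
        """Count positions at which two displays show the same character."""
        return sum(x == y for x, y in zip(a, b))

    # One minute of time difference in each case, but very different
    # amounts of display similarity.
    print(shared_characters("9:58", "9:59"))   # 3 of 4 positions match
    print(shared_characters("9:59", "10:00"))  # 0 positions match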
These two kinds of watches demonstrate clearly the difference between analog and symbolic representations of a continuous variable, but what does one make of those watches that are entirely computational, but display an analog watch face on their liquid crystal display (LCD)? Is this watch analog or digital, analog or symbolic? From the point of view of behavioral consequences, the LCD watch might as well be analog, if all we wish is to tell the time. Its inner workings, however, are digital. Similarly, is a binary code for an integer symbolic or analog? If a binary code seems too symbolic, then how about a Gray code, in which the similarity between representations preserves the similarity between the numbers represented? My point is that it is often difficult to decide whether a system is analog or digital/symbolic even if one knows the architecture. Either can implement the other freely with no functional consequence. Hence, it is difficult to argue that one system is prey to certain criticisms and the other is automatically immune.
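For readers unfamiliar with it, the standard binary-reflected Gray code can be computed in one line, and consecutive integers always differ in exactly one bit, so nearness among numbers is preserved as nearness among codes (a Python sketch):

    def gray(n: int) -> int:
        """Binary-reflected Gray code of a nonnegative integer."""
        return n ^ (n >> 1)

    for n in range(8):
        print(n, format(n, "03b"), format(gray(n), "03b"))
    # Between 3 and 4, plain binary flips three bits (011 -> 100),
    # while the Gray code flips exactly one (010 -> 110).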
At some level of operation a computer is entirely syntactic. Its memory registers contain binary numbers, its operation registers perform different functions depending on the instruction that is numerically coded in specific locations, etc., and it does not matter what the numbers that it manipulates represent. At this level of description, the patterns of numbers in the registers, the number of registers, the operations to be performed in response to the various codes, etc., are highly machine specific, so, by the definition, these operations may not be computation.
At another level the computer implements programming language instructions that are more generalizable across machines (we can neglect machine-specific dialect differences), and symbols can be interpreted. At this level, however, the purity of the syntactic process is, I believe, lost. Symbols in a computer are represented as numbers -- patterns of 1's and 0's. There is nothing else to serve the symbolic function. A given number could stand for anything, but within the context of a computer program the meaning of the number is constrained, or the program is nonfunctional. The number 613, for example, could stand for the speed of a jet, the weight of the fuel, the number of its current mission, etc. The shape of the symbol in each case is arguably the same, but the way in which it is used is not determined by the shape of the symbol; it is determined by its function relative to the instructions of the program. It is used in different ways depending on its meaning and its context. For example, if the program were to determine a safe route from the plane's current position to its designated landing field, then substituting the number of its current mission for the distance to the landing field could be disastrous if the two values did not happen, by accident, to agree.
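A brief sketch (the variable names and the time-to-field calculation are hypothetical, chosen only to mirror the example above) shows how identically shaped symbols diverge in use:

    # The bit pattern for 613 is the same in every case; only the
    # program's use of the value fixes what it stands for.
    airspeed_knots = 613      # speed of the jet
    fuel_weight_kg = 613      # weight of the fuel
    mission_number = 613      # number of the current mission

    def minutes_to_field(distance_nm: float, speed_knots: float) -> float:
        """Time to reach the landing field, given distance and speed."""
        return distance_nm / speed_knots * 60.0

    # Substituting the mission number for the distance is syntactically
    # legal -- the shapes are identical -- but semantically disastrous.
    print(minutes_to_field(mission_number, airspeed_knots))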
A counterargument to this assertion is that the shape of the symbol is given by more than its value: it is given also by its location, and so forth. This is a plausible counterargument, but it would appear to demand that part of the shape of the symbol is determined by the meaning that is attached to it, because its meaning is the means by which it is entered into a particular position rather than into locations reserved for other kinds of information (the computer might, for example, have access to an inertial guidance system that places the calculated location of the plane into a specific location in memory). Acceptance of this counterargument is acceptance that the relation between symbols and their meanings is not arbitrary. Hence, it implies that a purely syntactic system is not an adequate model of computation.
To continue this line of argument, one might then claim that the plane uses grounded symbols because of the causal connections between the inertial guidance system and the computer. One might claim that the computer in the plane passes a kind of limited total Turing test (pardon the apparent oxymoron) as a synthetic creature capable of navigating through the sky. The same argument, however, applies to other computer programs that do not have direct access to transducers. A computer program cannot behave systematically if it is provided with inconsistent data, no matter what the program is intended to compute. My argument is that all effective programs must have grounded symbols, that is, established premises, if they are to function systematically.
Some people argue that when the inputs are themselves verbal and symbolic, as in the standard Turing test, then the grounding may be in the head of the interviewer, not in the computer program. Although Searle tells a good story about a Chinese room, it is not clear to me that a computer program could actually implement the kind of system that Searle discusses without some notion of semantics. The notion of semantics could come from having an infinite code book that represents each word in each context (i.e., a unique symbol for each utterance), or it could come in some more compact form, but it must include semantic information, for example, to know that the bird flew out to the right fielder but that the batter flied out to the right fielder (the choice between the irregular and the regular past tense depends on what is meant, not on the form of the sentence). Many other examples are available to demonstrate that semantics plays an important role even in determining appropriate syntax. The difficulty of machine translation, even given an attempt at a rich semantic lexicon, also suggests that semantics plays an ineluctable role in communication.
I certainly cannot see that one person alone with the whole code book could possibly understand less than a room full of people, each of whom had only a part of the code book. A related question that has been lurking in my thoughts recently is: what do we say if we replace Searle with someone who does understand Chinese, but we use some kind of cipher to prevent that person from recognizing that it is Chinese that is being passed? Can the person understand Chinese while the system that includes the person does not?
If I am correct that semantics plays an essential role both in computation and in language use, then this position does not undermine Harnad's claims that symbol grounding is necessary; rather, it suggests the variety of ways in which symbols may be grounded, and it suggests that such grounding is necessary not only to minds but to basic computer programs as well. The total Turing test may be a sufficient condition to establish that we have no reason to doubt the presence of mind in the robot, but it is far from guaranteeing that the robot has a mind. In any event, the tests we devise for ourselves and our systems are of no use unless we have the theoretical and technological bases to attempt to pass those tests. Lacking such theories, these tests are nothing more than operational definitions. A mind, then, is what a total Turing test tests. Without an underlying conceptualization, alternative operational definitions are incommensurable, because each is a definition of the term. What we need is a theory of mind. Harnad's investigations of these issues are likely to help the development of such theories.