In Cognitive Science, people with different backgrounds have definitional systems grounded in different metaphors. This is perhaps most obvious in discussions of analog systems and signals, but it also affects definitions of computation and mind. There is also the problem that some bandy about terms like ``Turing equivalence'' and ``(Universal) Turing Machine'' with only a very vague (and often incorrect) notion of what lies behind them (and without even realizing that they have nothing to do with the Turing Test). Even Harnad's Section 5.5 is unfortunately expressed so as to gloss over the distinction between the programmed behaviours being identical and the capabilities of the underlying machine (substrate) being the same, irrespective of program.
So I would like to address some of these definitional problems here, picking up on Harnad's discussion in Sections 1 and 2.
Harnad clarifies what is meant by computation as follows:
``manipulation of physical `symbol tokens' on the basis of syntactic rules that operate only on the `shapes' of the symbols (which are arbitrary ...)'' (here);
``implementation independent: whatever properties or capabilities a system has purely in virtue of its computational properties will be shared by every implementation ...'' (here);
``the symbols and symbol manipulations in a symbol system are systematically interpretable... they can be assigned a semantics ...'' (here).

This is an important foundation for the discussion, although expressed in non-computationalist terms. Moreover, there are riders: ``operate only on the shapes'', ``purely in virtue of its computational properties'', ``can be assigned a semantics''. These provide possible loopholes which lead us into controversy. First, does the substance or the nature of a neural net add capability beyond its computational properties? Two possible answers emerge in Harnad's article: parallelism and analog processing (here and here). Second, can nets operate on anything other than the shapes which the representation permits? Harnad's suggestion is that the mechanisms of interaction with the outside world, the transduction, may somehow be qualitatively different for the nets (see e.g. here, here and here). Finally, can a systematic interpretation, a semantics, be totally internal, in terms of relationships between one representation or language and another, all represented, in the last analysis, with the same symbols (e.g. bits)? Symbol Grounding says no (by definition, interpretation implies representing relationships across different systems), so we're back to the transduction question!
I wonder whether the differences on these questions relate to what we mean by `capability' or `can'. These hide theoretical and pragmatic questions which I would like to elucidate by distinguishing between efficacy and efficiency. Parallel and analog systems may be faster, and thus able to achieve something digital computers may not, simply because a million neurons working in parallel may be able to achieve more than a single CPU working a million times faster with operations a thousand times less complex or less relevant. Such a computer can simulate the neural net, at a cost of a thousand operations for each of a million neurons. A couple of billion microseconds is of the order of an hour; a couple of thousand milliseconds is just seconds. But in terms of achieving the required result, the two are equivalent, and if we could build a serial computer a billion times faster, we could achieve the same result in real time.
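To make the arithmetic explicit (a minimal reconstruction on my part, reading ``a million times faster'' as relative to a neuron's roughly 1 ms cycle): one parallel step of the net costs the serial machine
\[ 10^6 \ \mbox{neurons} \times 10^3 \ \mbox{operations} \times 10^{-9} \ \mbox{s/operation} = 1 \ \mbox{s}, \]
against $10^{-3}$ s for the net itself, a slowdown of a thousand. A task the net completes in a couple of thousand milliseconds thus costs the simulation a couple of billion microseconds, and a serial machine a billion (rather than a million) times faster than the neurons would recover exactly that factor of a thousand, achieving real time.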
In both the neural and the electronic case, precision is lost at each level of transduction or processing. The intercorrelations in the physical vision processes may possibly occur more remotely from the transduction in robot vision, or in more limited fashion, but there is no reason why they cannot be emulated adequately enough to achieve comparable behaviour, forgetting about practicalities of size and speed (viz. efficiency).
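As a toy illustration of this point (my own sketch, not tied to any particular sensor): a value passed through successive quantizing stages typically drifts further from the original at each stage, whether the stages are neural or electronic.

\begin{verbatim}
# Toy sketch: cumulative precision loss across quantizing stages.
def quantize(x, levels=256):
    """Round x (assumed in [0, 1]) to the nearest of `levels` values."""
    return round(x * (levels - 1)) / (levels - 1)

signal = 0.123456789
x = signal
# Each stage applies its own gain and then re-quantizes the result.
for stage, gain in enumerate([1.07, 0.93, 1.21, 0.88, 1.13], start=1):
    x = quantize(x * gain) / gain
    print(f"after stage {stage}: {x:.9f} (error {abs(x - signal):.2e})")
\end{verbatim}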
Statistical techniques are also proving surprisingly effective in relation to Natural Language, Speech Recognition and Machine Translation (see Powers, 1992c). These techniques seek out correlations in a way which is very similar to the correlating effect in neural nets. A lattice of possible choices for individual words may admit a multitude of possible parses, which are evaluated in parallel.
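A minimal sketch of this idea (entirely my own toy example, with made-up bigram scores standing in for corpus-derived statistics): every path through a small word lattice is a candidate reading, and all of them can be scored at once.

\begin{verbatim}
from itertools import product

# Toy word lattice: each position admits several candidate words.
lattice = [["I"], ["saw", "sore"], ["the"], ["ship", "sheep"]]

# Hypothetical bigram scores (stand-ins for corpus statistics).
bigram = {("I", "saw"): 0.9, ("I", "sore"): 0.1,
          ("saw", "the"): 0.8, ("sore", "the"): 0.3,
          ("the", "ship"): 0.5, ("the", "sheep"): 0.5}

def score(path):
    """Product of bigram scores along the path (unseen pairs get 0.05)."""
    s = 1.0
    for a, b in zip(path, path[1:]):
        s *= bigram.get((a, b), 0.05)
    return s

readings = list(product(*lattice))    # all paths through the lattice
best = max(readings, key=score)       # the highest-scoring reading
print(best, score(best))
\end{verbatim}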
Harnad mentions parallelism, real parallelism, as one aspect of connectionism that has been proposed to explain why connectionism is expected to succeed in cognition where symbolism must fail.
I would rather point the finger at the distributional aspects, and at the logical parallelism which I will call concurrency, since it is of no account whether it is implemented with real or simulated parallelism. The real neural networks which do our computation for us implement distributed concurrent processing with real parallelism. But concurrency and distributed processing are being investigated in the context of symbolic processing too. I have been working with Concurrent Logic Programming systems (one based on a connection graph theorem prover) and have implemented both connectionist and conventional programs for machine learning of natural language in this context (Powers, 1988; 1989).
If there are multiple interactive tasks to perform which are naturally concurrent, that is, which necessarily overlap, then parallelism, real or simulated, is absolutely necessary to meet the specifications. Of course, such parallel simulation capability is built into every timesharing operating system or environment (like UNIX, or Windows, or X-Windows). Moreover, it is becoming a standard part of conventional languages (e.g. it was designed into the languages SIMULA and ADA, and is possible in most modern PROLOGs). Furthermore, there is virtually no computing environment today that doesn't allow or require peripheral processing to occur in parallel with central processing: that is, the peripherals interrupt the current process with a priority dictated by their speed and the urgency with which they need to be serviced (e.g. even PCs and Macintoshes have this sort of concurrency as standard).
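The point that simulated parallelism serves as well as the real thing can be made in a few lines (a minimal sketch of my own, not drawn from any of the systems just named): a single processor interleaves logically concurrent tasks exactly as a timesharing system interleaves processes.

\begin{verbatim}
# Simulated parallelism: one processor time-shares concurrent tasks.
def task(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"      # yield = voluntarily release the CPU

def round_robin(tasks):
    """Interleave tasks one step at a time, as a timesharing OS does."""
    while tasks:
        current = tasks.pop(0)
        try:
            print(next(current))
            tasks.append(current)     # requeue until the task finishes
        except StopIteration:
            pass

round_robin([task("A", 3), task("B", 2), task("C", 3)])
\end{verbatim}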
In practice, my experience is that neural correlations do admit a symbolic analysis, in terms of which particular neurons or synapses can be identified as labeling particular patterns or implementing particular rules (Powers, 1989). Moreover, an information theoretic analysis, and a consideration of the cognitive mechanisms available, does suggest that this should be expected (Powers, 1992b; 1992c).
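By way of illustration (a toy construction of my own, not Powers' actual analysis): train a single threshold unit on the AND function, and a symbolic rule can be read directly off the resulting weights.

\begin{verbatim}
# Toy sketch: a trained unit whose weights admit a symbolic reading.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                   # classic perceptron updates
    for (x1, x2), target in data:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print(f"w = {w}, b = {b}")
# The unit now fires only when both inputs are active: it has come to
# implement the identifiable symbolic rule AND.
print("fires on:", [(x1, x2) for (x1, x2), _ in data
                    if w[0]*x1 + w[1]*x2 + b > 0])
\end{verbatim}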
I believe the neural implementation can't beat the conventional because (as they say in chess circles, ``assuming best play from black''):
But there is more to the change of wording than this. `Mind' focusses on consciousness in a far more direct way than `thought' does. Turing addressed thought in terms of whether the computer could hold its own with people in a particular sort of problem solving, and was indeed far ahead of his time in recognizing that it was the `simple' aspects of intelligence, like language, that were going to prove more difficult than the `advanced' intelligence reflected in, for example, chess playing. Searle changes the question to whether the computer is conscious of itself, and maps this down to an assumed homunculus.
The shift towards making a digital-analog distinction also hides some misconceptions and changes in definition. In Turing's day, we had analog computers which `reasoned' in ways which contrasted with digital computers in two respects.
My conclusion is that this is a total red herring. In any case, present neural net implementations are overwhelmingly digital.
Note that the TTT can directly test comprehension of a word like `in', but the TT can only test it indirectly. This advantage disappears rapidly, however, as we move from concrete to abstract domains of application. Of course the TT is a subset of the TTT (which was proposed by Harnad as a generalization of the TT), and as a special case can be passed by any system capable of passing the TTT. From the point of view of symbol grounding, the point is really that TTT capabilities are necessary to pass the TT. Of course, such a system, when disconnected from all its sensory-motor periphery and allowed just teletype communication, is no longer a TTT-capable system. Similarly, grounding in a virtual reality system (like MAGRATHEA (Powers, 1989)) can theoretically produce a TTT-capable system. Given that the virtual reality is accurate enough, it should be possible to unplug the system from the virtual reality and connect it up to real reality and have it pass the TTT, or disconnect it entirely and have it pass the TT, or have it pass some sort of Virtual TT (VTT) in which it is pitted against a human in the same virtual reality.
Again, technically, a system can be grounded by including in the system a Searle who in this case simulates not the CPU but the PPU, the Peripheral Processing Unit. This Searle translates between his sensory-motor experience and some representation languages understood by the program. This is the mode in which Natural Language researchers have traditionally worked. While it is theoretically possible, it is practically impossible, not only because of the sheer information load on the Searle (or the team of programmers/Searles), but because of the dynamic nature of our environment. The traditional programming approach is not adaptive, and is hence incapable of producing systems capable of passing either the TTT or the TT, both of which allow, for example, for my teaching Chinese to a system that has passed the TT in English!
The virtual reality approach is also, in practice, only a bootstrapping convenience, because it is easier to program certain laws of reality than it is to provide by hand scenario after scenario. Thus, while a VTT-passing robot should also be capable of passing the TTT and the TT (given the appropriate replugging), it will in practice eventually come unstuck somewhere along the line, simply because the virtual reality simulation isn't accurate enough (though theoretically there is no reason why it couldn't be -- the trivial observation that it has to be implemented on a computer of finite size located in real reality is irrelevant, because the experience of the human opponents is also gained in a subset of real reality which they succeed in modeling adequately).