
People Are Infinitary Symbol Systems; No Sensorimotor Necessary

Selmer Bringsjord

Stevan Harnad and I seem to be thinking about many of the same issues. Sometimes we agree, sometimes we don't; but I always find his reasoning refreshing, his positions sensible, and the problems with which he's concerned to be of central importance to cognitive science. His ``Grounding Symbols in the Analog World with Neural Nets'' (= GS) is no exception. And GS not only exemplifies Harnad's virtues, it also provides a springboard for diving into Harnad-Bringsjord terrain:

The Harnad-Bringsjord agreement looks like this:

That's what we agree on. On the other hand, the Harnad-Bringsjord clash looks like this:

That, then, is what Harnad-Bringsjord terrain looks like. The topography seems interesting enough, but --- who's right, who's wrong, and are they ever both right or both wrong? Isn't that the question? We haven't sufficient space to take informed positions on all the (Ai) and (Ci) --- but I will endeavor to substantiate a significant part of (C2), since this issue falls right at the heart of Harnad's GS.

As is well known, Turing (1964) holds that if a candidate AI can pass TT, then it is to be declared a conscious agent. His position is apparently summed up by the bold proposition that

TT-P If x passes TT, then x is conscious.

[Turing Harnadishly said --- in my opinion incorrectly --- that the alternative to TT-P was solipsism, the view that one can be sure only that oneself has a mind. See Turing's discussion of Jefferson's ``Argument from Consciousness'' in Turing (1964).] Is TT-P tenable? Apparently not, not only because of Searle, but because of my much more direct ``argument from serendipity'' (Bringsjord, in press): It seems obvious that there is a non-vanishing probability that a computer program P incorporating a large but elementary sentence generator could fool an as-clever-as-you-like human judge within whatever parameters are selected for a running of TT. I agree, of course, that it's wildly improbable that P would fool the judge --- but it is possible. And since such a ``lucky'' case is one in which TT-P's antecedent is true while its consequent is apparently false, we have a counter-example.
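The shape of this argument can be displayed compactly (a minimal sketch; the predicate names PassTT and Conscious are my labels here, and the full formal treatment is the one carried out in Bringsjord, in press). Read TT-P not as a material conditional but as a law-like generalization:

\[
\Box\,\forall x\,(\mathit{PassTT}(x) \rightarrow \mathit{Conscious}(x)).
\]

The serendipity scenario supplies a possible world in which a lucky sentence generator passes without being conscious:

\[
\Diamond\,\exists x\,(\mathit{PassTT}(x) \wedge \neg\,\mathit{Conscious}(x)).
\]

Since the second formula is equivalent to the negation of the first in any normal modal logic, the scenario, if genuinely possible, is a counter-example to TT-P so construed.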

This sort of argument, even when spelled out in formal glory, and even when adapted to target different formal renditions of Turing's conditional [all of which is carried out in (Bringsjord, in press)], isn't likely to impress Harnad. For he thinks Turing's conditional ought to be the more circumspect ``none the wiser''

TT-P' If a candidate passes TT, we are no more (or less) justified in denying that it has a mind than we are in the case of real people.

Hence, TTT's corresponding conditional, which encapsulates GS's heart of hearts, would for Harnad read

TTT-P If a candidate passes TTT, we are no more (or less) justified in denying that it has a mind than we are in the case of real people.

Unfortunately, this conditional is ambiguous between a proposition concerning a verdict on two TTT-passers, one robotic, one human, and a proposition concerning a verdict on a TTT-passer matched against a verdict on a human person in ordinary circumstances. The two construals, respectively, are:

TTT-P1 If h, a human person, and r, a robot, both pass TTT, then our verdict as to whether or not h and r are conscious must be the same in both cases.

TTT-P2 If a robot r passes TTT, then we are no more (or less) justified in denying that r is conscious than we are justified in denying that h, a human, observed in ordinary circumstances, is conscious.
But these propositions are problematic:

First, it must be conceded that both conditionals are unacceptable if understood to be English renditions of formulae in standard first-order logic --- because both would then be vacuously true. After all, both antecedents are false, since there just aren't any robotic TTT-passers around (the domain of quantification, in the standard first-order case, includes, at most, that which exists); and the falsity of an antecedent in a material conditional guarantees vacuous truth for the conditional itself. The other horn of the dilemma is that once these propositions are formalized with help from a more sophisticated logic, it should be possible to counter-example them with armchair thought-experiments [like that upon which my argument from serendipity is based --- an argument aimed at a construal of TT-P that's stronger than a material conditional]. Harnad is likely to insist that such propositions are perfectly meaningful, and perfectly evaluable, in the absence of such formalization. The two of us will quickly reach a methodological impasse here.
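The first horn admits of a one-line display (again a sketch, with PassTTT and Conscious as my labels). In standard first-order logic both conditionals take the form

\[
\forall x\,(\mathit{PassTTT}(x) \rightarrow \mathit{Conscious}(x)),
\]

and since nothing in the domain satisfies PassTTT, that is, since \neg\exists x\,\mathit{PassTTT}(x) holds, every instance has a false antecedent and the generalization comes out true no matter what one thinks about machine minds. Escaping the vacuity requires a stronger reading, for example the necessitated conditional displayed earlier; and it is precisely such strengthened readings that armchair thought-experiments are fit to test.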

But --- there is a second problem with TTT-P1: Anyone disinclined to embrace Harnad/Turing testing would promptly ask, with respect to TTT-P1, whether the verdict is to be based solely on behavior performed in TTT. If so, someone disenchanted with this proposition at the outset would simply deliver a verdict of ``No'' in the case of both h and r --- for h, so the view here goes, could be regarded as conscious for reasons not captured in TTT. In fact, these reasons are enough to derail not only TTT-P1, but TTT-P2 as well, as will now be shown.

TTT-P2 is probably what Harnad means to champion. But what is meant by the phrase ``ordinary circumstances,'' over and above ``outside the confines of TTT''? Surely the phrase covers laic reasons for thinking that other human persons are conscious, or have minds. Now, what laic reasons do I have for thinking that my wife has a mind? Many of these reasons are based on my observation that her physiognomy is a human one, and on my justified belief that her sensory apparatus (eyes, ears, etc.), and even her brain, are quite similar to mine. But such reasons --- and these are darn good reasons for thinking that my spouse has a mind --- are not accessible from within TTT: if I put my wife in TTT, I'll be restricted to verifying that her sensorimotor behavior matches my own. The very meaning of the test rules out emphasis on (say) the neurophysiological properties shared by Selmer and Elizabeth Bringsjord. The upshot is that we have found a counter-example to TTT-P2 after all: we are more justified in denying that TTT-passing r is conscious than we are in denying that Elizabeth is. And as TTT-P2 goes, so goes the entire sensorimotor proposal that is GS.

In response to my argument Harnad may flirt with supplanting TTT with TTTT, the latter a test in which a passer must be neurophysiologically similar to humans [see Harnad's excellent discussion of TT, TTT, and TTTT (1991)]. Put barbarically for lack of space, the problem with this move is that it gives rise to yet another dilemma. On the one hand, if a ``neuro-match'' is to be very close, TTTT flies in the face of functionalism, the view that mentality can arise in substrates quite different from our own carbon-based one; and functionalism is part of the very cornerstone of AI and cognitive science. On the other hand, if the ``neuro-match'' requirement is relaxed so that it need only hold at the level of information, so that robotic and human ``brains'' match when they embody the same program, then in attempting to administer TTTT we face what may well be an insurmountable mathematical hurdle: it's in general an uncomputable problem to decide, given two finite argument-value lists, whether the underlying functions are the same.
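The hurdle can be made precise (a sketch; this is the standard recursion-theoretic fact I take the point to rest on, not a formulation drawn from GS itself). Writing \varphi_i for the function computed by the i-th program, the equivalence problem

\[
E = \{\, \langle i, j \rangle : \varphi_i = \varphi_j \,\}
\]

is undecidable, by Rice's theorem, since for fixed i the set \{\, j : \varphi_j = \varphi_i \,\} is the index set of a non-trivial property of the computed function. Worse, the data actually available to an administrator of TTTT are finite: any finite list of argument-value pairs \langle a_1, v_1 \rangle, \ldots, \langle a_n, v_n \rangle is consistent with infinitely many distinct underlying functions, so agreement on such lists cannot settle whether the robotic and human ``brains'' embody the same program.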

