Harnad, S. (1982) Consciousness: An afterthought.
Cognition and Brain Theory 5: 29 - 47.

CONSCIOUSNESS: AN AFTERTHOUGHT

Stevan Harnad
Department of Psychology
Princeton University
Princeton NJ 08544
harnad@cogsci.soton.ac.uk

1. The Mind/Brain Barrier

There are many possible approaches to the mind/brain problem. One of the most prominent, and perhaps the most practical, is to ignore it. This expedient is widely practiced, for example, by neuroscientists; an optimistic and articulate instance may be found in a paper by Kornhuber (1978). This author views the transduction of sensory data, from distal stimulus to successively more proximal representations, including conscious awareness and voluntary response, as simply a series of stages in information processing by the nervous system. There is no mind/brain problem; the only problem confronting neuroscience is to determine how the information processing is actually accomplished by the brain. This kind of approach I would like to characterize as "capitulating to the right" -- the "right" being the right side of the mind/brain barrier. It can be contrasted with "capitulating to the left," the approach represented by Eccles (1978; and Popper & Eccles, 1977), which essentially denies that consciousness can be accounted for without enriching our ontology with supernumerary worlds. Capitulating to the left is also known as dualism.

In many respects, capitulating to the left, although seemingly at odds with the aspirations of practicing empirical scientists, who believe all of nature's secrets to be basically material ones, nevertheless accords more with our subjective experience and intuitions; and it certainly does not leave anyone with the impression of having begged the question of the mind/brain barrier. But it is a capitulation nonetheless, in that it declares, without positive evidence or proof, the impossibility of dealing with the mind/brain problem purely mechanistically, i.e., with recourse exclusively to the material contents and causal interactions of a single physical world. Capitulating to the right, on the other hand, is consistent with the overall mechanistic program of empirical science, but gives one the uneasy feeling of having self-servingly passed over something real and nontrivial in silence; it leaves a nagging legacy of unresolved intuitions.

In this chapter I will examine how far one can get if one attempts a direct assault on the mind/brain barrier -- eschewing capitulations of either chirality. Because the leftist position (that no material solution is possible) seems more directly to confront the problem, it will be treated as a kind of null hypothesis; an attempt will be made to produce arguments and evidence for rejecting it. I take it for granted that the mere statement of the rightist position does not constitute evidence or an argument. Instead, a "gedanken-mechanism" will be sketched in an attempt to simulate the mind/brain barrier, both internally and externally. If this hypothetical device were to manage to capture our intuitions about consciousness, in addition to being mechanically feasible, then the leftist contention that no such solution is possible could, in principle, be rejected (although of course this would still not demonstrate that we are ourselves such devices). If the device turns out to fail in either respect, then this will represent yet another piece of (negative) evidence in favor of capitulating to the left.

The following constraints will be adopted in this exercise: (1) The account must be purely material and mechanistic; all components must be physical and must interact causally: no "third-world" forces (Popper & Eccles 1977). (2) The subjective phenomenology of conscious experience must be accounted for to our intuitive satisfaction: no "emergent" potentialities of the mechanism should be proffered on faith.

Let us begin by stating, plainly and candidly, just what the mind/brain barrier is, in its contemporary guise: It is a (so far insuperable) conceptual and intuitive difficulty we have in equating the subjective phenomenology of conscious experience with the workings of a physical device. This is not just a matter of underestimating machines -- a parallel but independent tendency of which we are daily disabused by astonishing technological advances. It is a more direct and compelling sense we each have that there seems to be something about consciousness that will always elude mechanism's grasp. It is more than our unwillingness to attribute feelings to a robot, no matter how cleverly it may simulate our behavior (see Chapter X): That species of anthropocentricity is more a moral than an intellectual matter, and could be overcome (at least to the degree that other similar centricities have been overcome) if only a conceptual argument were forthcoming, as to how it is that the workings of a physical device could generate awareness, as opposed to just responsiveness -- how a device could have experiences rather than merely behavior. This chapter will propose and develop a candidate conceptual argument of this sort, and then examine whether it has had any success in penetrating the mind/brain barrier.

2. The "Hot/Cold" Problem

One respect in which there has been some definite progress with respect to the mind/brain problem is that we are now, it appears, ready to grant that it is no longer inconceivable that machines "think," i.e., engage in complex information processing. Hence it no longer surprises us when not only are our movements and responsiveness to stimuli mechanically simulated, but even some significant portions of our sensory and cognitive capacity turn out to be replicable by machines. The fact that this invasion of our "higher" cognitive preserves has not done palpable violence to intuition is probably indicative of three things: (1) The concession is small, and in itself constitutes no significant penetration of the mind/brain barrier. (2) We don't really have very good intuitions about how we think, and hence a mechanism that does it, to some extent, is no direct threat (this point will be returned to later). (3) Cognition, per se, is a basically "cold" activity with respect to the phenomenology of consciousness: Unlike the "hot" qualia of which we are directly aware, such as pain, redness, or any immediate experiential object of our attention, the processes that underlie cognition are relatively opaque to introspection. We are not aware of how we perform the perceptual analysis necessary to recognize sentences or faces, to recall names, or even to perform simple numerical addition.[1] Since these processes do not obtrude into consciousness, the fact that they are simulable by machines is not disturbing.

Hence it can be said that not only is a sophisticated stimulus/response automaton that imitates our differential reactivity to objects now quite conceivable, without representing any intuitive threat to the mind/brain barrier, but its further possession of powerful capacities for sensory and cognitive analysis can now be countenanced as well, just so long as such a machine lays no claim to experiencing any of the phenomenology we experience while doing the same thing.[2] As long as its processes are "cold" and insentient, not "hot" and aesthesiogenic like our own, no problem is posed. Indeed, the mind/brain problem could just as well be dubbed the "hot/cold" problem in this terminology.

The first tool for the present enterprise must accordingly be developed out of a close look at a "cold," and hence innocent, cognitive activity, namely, discrimination. As described in Chapters X and X, discrimination performance consists of differential responding to differential inputs. Two distinct inputs are discriminable in virtue of a receiver's capacity to respond differentially to them; and two inputs that are responded to differentially are, by that token, discriminable. One can make a minor additional distinction between inputs that are discriminable on the basis of properties they independently possess and those discriminable on the basis of properties "added" to them by the receiver. For example, red objects may be behaviorally discriminated from green ones exclusively by stimulus properties (wave-length differences); but an organism discriminates the first instance of a particular red object that it has seen from a later identical one by means of internal information (in this case, memory), which it "adds" to the immediate information contained in the red stimulus itself.[3]

This distinction between external and internal discriminations (let us call them) will perhaps seem elementary and trivial, but it is crucial for the argument below. What is most important to note at this point is that the basis for the behavioral expression of both types of discrimination can in principle be "cold," and hence could, without prejudice, be ascribed to a machine. That is, one can imagine an insentient mechanism sorting and manipulating objects, learning from experience, and forming adaptive preferences -- all exclusively on the basis of cold information, be it external or internal.
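Since the argument below leans heavily on this distinction, it may help to fix ideas with a minimal sketch in code. Everything in the following fragment -- the class name, the 600 nm cutoff, the representation of memory as a set -- is an illustrative assumption of convenience, not a model; the point is only that both kinds of discrimination can be implemented entirely "cold":

    class ColdDiscriminator:
        """An insentient ("cold") discriminator: differential responding
        to differential inputs, external and internal."""

        def __init__(self):
            self.seen = set()  # internal information "added" by the receiver

        def external(self, wavelength_nm):
            # External discrimination: stimulus properties alone suffice.
            # (The 600 nm cutoff is an arbitrary illustrative value.)
            return "red" if wavelength_nm > 600 else "green"

        def internal(self, stimulus_id):
            # Internal discrimination: a first instance differs from a later
            # identical one only by receiver-supplied memory.
            novel = stimulus_id not in self.seen
            self.seen.add(stimulus_id)
            return "novel" if novel else "familiar"

    d = ColdDiscriminator()
    d.external(650)             # -> "red"
    d.internal("red-object-1")  # -> "novel"
    d.internal("red-object-1")  # -> "familiar"

Nothing in this device is "hot": it sorts, remembers and responds on differential information alone.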

3. Pseudoconation

Before we can apply the tool of "cold" discrimination, external and internal, to the mind/brain problem, we must first consider a kindred conundrum, the "free will" problem. In its objective form, namely, the question of physical determinism and causality in the world,[4] the problem is not relevant to this discussion. However, it is a critical feature of the present approach that a successful confrontation with the subjective version of the free will problem[5] -- namely, that conscious decision appears to be an "extra" causal factor in determining our actions -- may furnish some tools for penetrating the mind/brain barrier directly. I will now attempt to sketch a mechanistic means of simulating free will by a process to be called pseudoconation. The hypothetical device that would embody this feature I will refer to by the acronym PCON ("pee-kawn").

First, what is the phenomenology of free will? No generality appears to be lost if I say that it is the experience I have when faced with, say, a right-left choice, to the effect that (1) what I ultimately do, I choose to do; (2) my choice precedes my action; and (3) I could have done otherwise (i.e., I could have changed my mind at the last minute).

Now the PCON mechanism would simply be based on creating an illusion of agency (pseudoconation). The phenomenology of free will is, after all, predominantly cognitive (and hence "cold"). Objectively, I do certain things, in certain sequential orders, sometimes for rational reasons, sometimes irrationally, impulsively or by chance (or compelled by some external mechanical intercession). This scenario is accompanied by a "sense" that I am the agent in some of the sequences and not the agent in others. This is essentially a discrimination, an internal one, in which the pure input/output sequence is missing something that is supplied subjectively, namely, whether or not I had "willed" the action.

Note that this subjective sense of agency is not really such a reliable experience, compared to experiences such as love, pain, or even redness. Our experience of agency is just a reasonably reliable concomitant of a certain subset of our actions: those we feel we perform voluntarily. In any event, let us defer the question of agency's "hot" qualia to a later section (5), addressing here only its "cold" cognitive aspect. Thus circumscribed, the distinction between willed and unwilled action amounts to a discrimination among certain special inputs, namely, those derived from feedback from one's own outputs. To simulate this, PCON will have two kinds of outputs: nonconative and pseudoconative. The former will roughly correspond to actions we consider involuntary, such as maintaining muscle tone and posture, performing reflexive movements, being knocked over, "spontaneous impulses," etc.; and the latter will correspond roughly to actions we consider voluntary, such as deliberate choices, speech, planned behavior, etc. -- I say "roughly" because I think that, even from a phenomenological standpoint, the boundaries are often blurred.

4. Retrospective Internal Discrimination

It is important to note that, internally, all of PCON's actions will actually be equivalent, in the sense that outputs will be mechanically "set" in the device temporally prior to (and in fact causally triggering) their actual overt execution. So in this mechanistic sense, all PCON's actions will be nonconative. The pseudoconative class, however, will be accompanied by an internally generated (nonconative) feedback signal indicating "agency." When such an act is performed by the mechanism, it will be retrospectively discriminated as having been "voluntary." This internal discrimination will be referred back in time to coincide approximately with the instant of initiation of the action, but this pseudoconation will of course be an illusion.[6] The class of pseudoconative actions will also have a temporal buffer, allowing pre-set actions to be aborted, if this is done within a sufficiently short latency. But of course this "voluntary" pre-emption will also have been pre-set in the pseudoconative way.[7]

Figure 1 here

Figure 1 illustrates a pseudoconative "go/no-go" sequence. Output X is pre-set. Then there is a short temporal buffer. If X is not over-ridden, it is executed, with a retrospective sense of pseudoconation referred to time T1 (or, based on Libet's 1985 data, to time T1 + 350 msec). If, on the other hand, X is over-ridden by another output (or no output at all) within X's buffer time, and the over-ride is itself sustained in its own buffer time, then the over-riding output (or no output at all) will be executed, again with the appropriate temporal sense of "agency." (Note that in all cases, pre-setting is nonconative.)
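Lest Figure 1 carry all the weight, the following fragment renders the go/no-go sequence concretely. It is a toy sketch under stated assumptions: the class and field names and the 200 msec buffer value are mine; only the 350 msec referral lag is drawn from Libet (1985):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Preset:
        t1: float    # instant at which the output is mechanically pre-set
        output: str  # what will be emitted if not over-ridden

    class PCONModule:
        """Toy go/no-go sequence. All outputs are pre-set (hence,
        mechanically, nonconative); within an output's buffer time a
        (likewise pre-set) over-ride may abort it; whatever is finally
        executed is retrospectively tagged as "willed," the tag referred
        back toward its pre-setting instant T1."""

        BUFFER = 0.200     # buffer duration in seconds (arbitrary assumption)
        LIBET_LAG = 0.350  # referral lag suggested by Libet's (1985) data

        def execute(self, x: Preset, override: Optional[Preset] = None) -> dict:
            if override is not None and override.t1 - x.t1 < self.BUFFER:
                # X is aborted within its buffer; the over-ride (itself
                # nonconatively pre-set) is executed, with its own buffer.
                return self.execute(override)
            return {"output": x.output,
                    # The pseudoconative illusion: agency referred back to T1
                    "felt_willed_at": x.t1 + self.LIBET_LAG}

    m = PCONModule()
    m.execute(Preset(t1=0.0, output="go left"),
              override=Preset(t1=0.1, output="go right"))
    # -> {'output': 'go right', 'felt_willed_at': 0.45}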

It must be emphasized that all that is needed in order to implement pseudoconation is a discrimination between the two classes of action (nonconative and pseudoconative), a discrimination with as objective a basis (though from an internal and nonconative source) as a discrimination of red from green. The rest is handled by the consistency with which PCON internally sorts out "voluntary" and "involuntary" actions. The result is a module that simulates (the "cold" side of) free will. Such a device would even have the capacity, by means of various mechanistic dissociations, to display some of the pathologies of free will that humans exhibit (e.g., by faultily attaching pseudoconation to "involuntary" acts, or even to the acts of others; or omitting pseudoconation from what are normally considered "voluntary" behaviors, etc.).

Now, if it is granted that PCON has the germs of what we recognize as the phenomenology of free will (or at least its "cold" cognitive side, which is, in my view, the most important part, and all that is needed at this point), then I can now proceed to deploy pseudoconation, as well as "cold" retrospective internal discrimination, in a direct assault on the mind/brain barrier.

5. Pseudoconative Scanning

The following are the building blocks out of which I will attempt to simulate consciousness: (1) actions (nonconative and pseudoconative), (2) sensory information, (3) memory, (4) attention and (5) time (real and virtual). All these are fairly "cold" and objective cognitive factors and, on the face of it, would seem to pertain only to the "right" side of the mind/brain barrier, the side of the mechanistic processing of information. The "hot" phenomenological side (qualitative subjective experience) -- with the possible partial exception of conation -- still seems at this point to be quite inaccessible. The null hypothesis, recall, is that a sophisticated cognitive mechanism may simulate "behavior" (in terms of the processing of input information and the generation of outputs), but it could not simulate its underlying subjective phenomenology as we experience it; it is this hypothesis that I will attempt to refute using only these materials.

Let us first examine what prima facie threat these materials may pose to the mind/brain barrier: Presumably sensory input information (apart from its all-important "raw feel") is not a problem; it could constitute a "message" for a nonconscious automaton who behaves as we do. Nor is memory worrisome; it simply concerns the storage and availability of prior information. Actions (including their sensory feedbacks) are presumably innocent (although they, coupled with pseudoconation, may open a Pandora's box whose contents ultimately threaten the mind/brain barrier). Attention -- by which I mean no more at this point than the fact that input data are demonstrably filtered and selected among, rather than being "swallowed" whole[8] -- can be safely countenanced (cf., for example, Jung 1978; Rougeul-Buser, Bouyer, & Buser, 1978). And finally, external and internal time, which has already been used to advantage above, constitutes the last, seemingly innocuous variable.

To divulge without further ceremony the core concept of this approach: I want to explore the possibility that the phenomenology of consciousness is essentially a function of pseudoconative scanning of inputs, memories, ongoing internal processes and output intentions. [For this second hypothetical device's acronym I choose (self-servingly): PCONSC ("pee-kawnsh").]

Now the argument. As with "free will," I will first attempt to itemize the phenomenology for which I am answerable; that old trouper, "pain," ought to do as well as any: First, we have the noxious sensory event itself (C), and then our "awareness" of experiencing that event (H). Note that without the hindsight vouchsafed by H, the sensory episode C alone could be accounted for as a purely cognitive, "cold" event with a certain message (e.g., "tissue-damage") -- all unproblematically simulable by an insentient automaton. It is the actual exercise (or the availability for exercise) of H that seems to make this problem -- the problem of experience, consciousness, subjectivity -- such a special one.

So let us speak of "stages" of consciousness. The C stage would be a sensory event that was merely responded to (such as pupil dilation in response to increasing darkness), whereas the H stage would obtain if and when one bathed in the phenomenal particulars ("I am aware that it is getting darker"). These two stages of consciousness (or, since C need never become H at all: these two classes of internal event) are distinguished in PCONSC by a complex of pseudoconative constructive acts, amounting to internally scanning, selecting among, remembering, reconstructing, rescanning, etc., the sensory information, C. All this occurs pseudoconatively, i.e., with the impression that "I am doing this." Since it does not constitute an overt motor activity, or plans for such, but rather a way of treating input information, the underlying internal discriminations in PCONSC will not be simply nonconation-versus-pseudoconation, as in the case of PCON's overt acts, but actually C-versus-H, as above. What we are "aware" of is what we pseudoconatively scan, and this is distinguished from what we are not aware of (but perhaps responsive to). So again, it is a matter of discrimination, albeit a highly internalized autoreferential[9] sort of discrimination.
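Purely by way of illustration, the C-versus-H discrimination just described might be caricatured as follows. The selection function standing in for attention, and all the names in the fragment, are assumptions of convenience rather than commitments of the theory:

    class PCONSCModule:
        """Toy C-versus-H discrimination: all input events are registered
        (C); those the device pseudoconatively scans -- selects, remembers,
        reconstructs, rescans -- are retrospectively tagged H."""

        def __init__(self, select):
            self.select = select  # attention: which C-events get scanned
            self.memory = []      # H-events are held for later rescanning

        def process(self, events):
            tagged = []
            for content in events:
                if self.select(content):
                    # Pseudoconative scanning: constructive re-processing,
                    # carrying the (illusory) "I am doing this" tag.
                    self.memory.append(content)
                    tagged.append(("H", content))
                else:
                    # Merely responded to -- like pupil dilation in the dark.
                    tagged.append(("C", content))
            return tagged

    p = PCONSCModule(select=lambda c: "darker" in c)
    p.process(["ambient luminance falling", "it is getting darker"])
    # -> [("C", "ambient luminance falling"), ("H", "it is getting darker")]

The design point is that H-hood is conferred by what the device does with the information (scanning it, storing it for rescanning), not by any additional ingredient in the information itself.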

Let us do a little more phenomenology in order to flesh things out more fully: What amounts to an "instant," and what to an "experience?" It may well all be just a matter of subtle hindsight, an afterthought, just as in pseudoconation. What happens in "real time" is not the real object (or should we call it the subject?) of consciousness, but a slightly dissociated construction, smeared across time; a selecting among and pasting together of inputs (which are actually held as a kind of "wake" issuing from their real-time instant of occurrence), expectations, and memories, yielding a coherent and somewhat continuous package, which can then be referred to, reflected upon, stored, etc. Our sense that we are conscious "of" all this business is, I think, due to the fact that a significant part (though by no means all) of this processing is done pseudoconatively, infusing it with a sense of agency that C alone lacks.

Note that discriminations among qualia can be accounted for "cold," by means of differential information (red versus green, etc.), so the fundamental problem is really to account for the discrimination between qualia and nonqualia, i.e., "hot" (H) versus "cold" (C). All I have said, then, is that what makes H discriminably "hot" may be the pseudoconative "control" we have over the structuring (n.b.: not the raw data) of conscious experience.[10]

6. Advantages of Autoreferentiality

There are certainly many strong objections that must be met by my proposal. Some of these will be discussed in a separate section below. For now, however, let me proceed as if I had, in principle, succeeded in simulating consciousness with PCONSC. A very appropriate question (which could be formulated both functionally and ecologically) would be: Why have a distinction between C and H experiences at all? A device that operates purely on the basis of C-type experiences, i.e., totally insentiently, "coldly," from the phenomenological point of view, would seem to be able to do the job perfectly well. Why the surplus bonus of H-type experience? In fact, one could have said the same of PCON, namely, why segregate actions into pseudoconative and nonconative instead of simply doing what needs to be done, without any fancy retrospective self-deception?

These functional/ecological questions are undeniably good ones: nontrivial, definitely calling for answers (and I will indeed venture some conjectures by way of response below, but conjectures they will be, with no claim to the face-valid force of some of the simulation arguments above). Yet note that, at least at one level, one need not necessarily have an answer to the "what for" questions if all one has undertaken to do is to simulate consciousness mechanistically. If one were to succeed in that, then that would be enough. And if consciousness then seemed to be only an unparsimonious, surplus frill in one's device, one could reply: "So be it, and the same goes for your device!"[11] But perhaps considering how consciousness might optimize such a device would be interesting in its own right, and might even cast more functional light on the nature of consciousness.

So, as a first approximation, let me conjecture that it may be that the "gedanken-mechanism" that putatively behaves exactly as we do in terms of all its input/output (learning, talking, socializing, etc.), but without experiencing the phenomenology of consciousness, would, for purely systems-theoretic reasons, not be realizable, any more than a trisected angle or a squared circle could be. That is, it may be necessary to build in autoreferential features of the kind I have described in order to get any machine to do the job at all: the compleat "cold" device may be a modal (i.e., "[im]possible world") fantasy (cf. Harnad, 1976). But in order even to guess at why a machine lacking such features could not do the job, one would have to be immeasurably more specific technically than I have been in sketching the features of PCON and PCONSC above. Contemporary cognitive science -- including artificial intelligence, cognitive modeling and neural modeling -- if it has the ambition and ingenuity, and perseveres long enough, may be headed in a direction that can furnish more rigorous answers to such questions; and it could conceivably turn out that, in order to successfully generate higher and higher-level and more and more general behavior, theorists must begin to incorporate pseudoconative scanning and autoreferentiality into their models.

My own frail conjecture as to the advantage -- the survival value -- of the autoreferential features of PCON and PCONSC would involve the following scenario: First, there is no need to view such features as at all unique to talking/thinking man. Autoreferentiality is perfectly amenable to a quantitative continuum with respect to what processes are accomplished pseudoconatively and what processes are not. Phylogenetically, it could first have begun with the discrimination of more and more behaviors as being voluntary; and gradually, as a fellow-traveller, pseudoconative scanning would join the growing fold with respect to more and more external and internal events.[12] But why, one still seems impelled to ask, would this gratuitous -- the cynic, vexed by my introduction, will call it "supernumerary" -- discrimination, among perfectly unconscious and deterministic processes, be any sort of advantage at all?

A first step toward an answer may lie in the requirements of hierarchical integration. Complex information processors entail serial decision-making at some level (hence parallel-processing models are no exceptions to the principle that hierarchical integration is necessary; whether "connectionist" models [parallel distributed processing, McClelland et al. 1986] represent exceptions remains to be seen, and will depend, of course, on what their performance capacities and limitations turn out to be, and perhaps even on their neural veridicality); and since devices like ourselves do and experience many things at once (relatively speaking), this must all be integrated at various levels of convergence. Moreover, the outputs of one level must be treated as the inputs to others; not to mention that successive "chunks" of temporal information must somehow be integrated in this flow. So, at the very least, with more and more complex organisms, a Quinean mechanism of "responding to one's responses" (both external and internal, not to mention responding to one's sensations, external and internal) must become necessary. But of course "responding to one's responses" is not tantamount to consciousness (as just the cerebellar righting-reflex to an accidental slip demonstrates!).

The degree of hierarchicality, autoresponsiveness (as opposed to autoreferentiality), feedback, etc., that must accrue before pseudoconation becomes necessary for stability, I cannot guess. In lower organisms pseudoconation may initially amount to no more than transfers of information and control from lower-order muscular and stereotyped action pattern generators to higher-order "cognitive" regions that modulate lower activity on the basis of a greater sensory analysis and information from past experience. The internal pseudoconative tag may be added (retrospectively) to those actions which arise as a consequence of such higher-order intercession; and indeed the sense of "agency" may perhaps act as a motivator (i.e., a homeostatic control) to keep the upper regions primed to interact with the lower ones. (It will be evident that specificity is at this stage a calculated risk on my part, with respect to the overall credibility of my proposal!) Suffice it to say that such an increasingly intimate and complex hierarchical relationship may require relatively greater partial autonomy of the higher-order processes from the lower-order ones; this might be purchased, without loss of unity and integrative action, by the pseudoconative illusion of agency with respect to the activity of the higher regions. And as the device becomes capable of more and more sophisticated sensory analysis, hypothesis-testing, memory-scanning, etc., the repertoire of autoreferential processes would increase.

But lest it be wrongly concluded that the "higher centers" are now, with their increasing autonomy, decision capacity, etc., assuming "man-in-the-head" proportions, threatening merely to carry the mind/brain problem upstairs, let me hasten to reaffirm that pseudoconation is, in this scheme, an illusion, indeed a post-hoc surplus process with no direct "causal" efficacy in its own right.[13] And in fact, that may even be the reply to the "what for" question: that in order to keep an increasingly autonomous and powerful "man-in-the-head"-type hierarchical level integrated with the rest of the device, an illusion of agency must be generated, while in reality the reins are kept tightly under lower-order (or shall we say "middle-order"), unconscious control; and given that the latter is not conscious, with no phenomenology to account for, etc., we of course successfully avert a "middle-man-in-the-head" problem.[14]

7. Objections

For the following telling objections, and for her extensive, insightful criticism of this entire chapter, I wish to acknowledge my great indebtedness to Judith Economos (along with the usual disclaimer as to any conative culpability on her part for all blunders in which I have, despite wiser counsel, benightedly persisted).

7.1. Isn't this just epiphenomenalism?

Not quite. Or perhaps not "just." Something that could be simulated by a machine can hardly be called merely epiphenomenal. Autoreferential processes are fully integrated causally in the functioning of a device like PCONSC. They may also perform some critical functional role in the hierarchical information processing of such devices (although this putative function has not been adequately specified). It is only with respect to their causal status as derived subjectively that these processes are illusory. The picture they provide of their own causal relation to reality is not veridical. We (that is, our subjective selves, our wills, our sense of agency) do not cause anything; nor does our conscious experience really play the direct causal role it appears to play in representing and operating on reality. It doesn't even portray the temporal order of internal and external events, let alone their causal arrows, faithfully.[15] So, although the objective physical processes that we are here interpreting as the physical substrates of conscious experience do play a causal role, and hence are not epiphenomenal, the contents of the conscious experience are epiphenomenal: They are represented as causal, but they are not causal.[16]

7.2. Does "pseudoconative scanning" imply that I can choose whether or not to feel pain?

No; but this important objection requires a careful reply:

To begin with, there is a considerable voluntary (i.e., pseudoconative) element in normal perception in terms of the stimuli toward which we direct our sense organs and attention. Moreover, there are reports of phenomena such as self-hypnosis and yogic control, which, if reliable, are likewise relevant. In a sense we take the privileged status of ordinary "states of consciousness" a little too much for granted, relative to their various possible neurological, psychiatric, pharmacological, and fatigue-inducible alterations and dissociations on the one hand, and reputed heightened states of "control" of consciousness on the other. But aside from these not-so-rare variants of ordinary consciousness, among which the voluntary control of pain may indeed number, I have not claimed that all perception amounts to an exercise of choice: First, there are the raw, "cold" data themselves, which are given, and to which any selection that is exercised is necessarily restricted. And second, it must be emphasized that pseudoconative scanning is not just tantamount to pseudoconation; it is not simply another class of voluntary acts. It involves pseudoconation as a mechanism, but in a very different, non-overt-act context. Both involve a discrimination, but in the case of pseudoconative scanning, the discrimination is a second-order ("autoreferential") one, whereas in ordinary pseudoconation it is simply the first-order discrimination among classes of actions. Things are not conscious simply because they are "voluntary," otherwise "willed" acts would be equivalent to consciousness. The present claim is that the process of pseudoconative scanning and structuring of input is the conscious process, and that pseudoconation contributes that "hot" flavor. Of course we cannot normally ignore pain, but staying awake and enduring it involves an active interaction with it: one that we cannot avoid, granted, but one we engage in actively nonetheless.

Pain appears to be a particularly compelling kind of case because it is so overpowering and ineluctable; but the same is true of a more neutral stimulus such as purple (given that I keep my eyes open and fixed on it); and yet, so long as I remain conscious, I am still mentally scanning, structuring, selecting, and rescanning as my thought processes continue. If nothing else, temporal continuity and coherence are maintained, together with conscious, and moderately variable, sampling of the input. Can anyone really claim that his consciousness of any steady-state input is uniform across time? And the absence of uniformity does not just consist of the waxing and waning of wakefulness and alertness, but of the fact that it is not possible to dwell upon the self-same content steadily. Not only do other concurrent processes interfere, combine and distract, but even the "form" of one's awareness of any steady content is constantly fluctuating: one might speak of a "cognitive nystagmus" by way of analogy with the physiological nystagmus of eye-movements. And since both are concerned with the scanning of input, there may be deeper mechanistic correspondences between the two processes.

7.3. Is it being claimed that everything that goes on in our heads while we are conscious occurs pseudoconatively?

Decidedly not, for if this were indeed the case, we would know immeasurably more about how our minds work, by simply introspecting. In fact, just enough is pseudoconative to confer that "hot," conscious flavor; but of course the real heavy work going on at the time (which amounts to most of it) is not conscious at all.

Consider activities such as breathing, or the execution of routine overlearned tasks. These can be handled, all or in part, automatically and unconsciously; yet we know (and "feel") that we can potentially take these functions over "at will" (although not necessarily more efficiently). This potentiality is also an important component of the phenomenology of consciousness. But, according to the present scheme, this is all an illusion. Who really knows the extent to which I am at this moment time-sharing control with not one, but probably dozens of automatic and unconscious processes, in order to do all I am doing now? This illusion I have of running the show is a luxury that I have been vouchsafed for a variety of reasons, but if I tried to sit down and prove that I was actually doing all the things that are going on, I couldn't possibly even begin. Not only do I not know how I do all these things, but I don't even know "where" my consciousness is flitting from instant to instant. I only have the vague conviction that, if not for me, the show would not be running. And, like pinching myself to be sure I'm not dreaming, I occasionally "over-ride" the goings-on (pseudoconatively) to "prove" that I am not just a passive fellow-traveller.

Not only is much of the filtering, integration, and other constructive activity that contributes to making perception coherent and continuous in fact accomplished "downstairs" (nonconatively), as all else is, but it does not even trouble to humor us upstairs with a false sense of our having done it. Our pseudoconative processes are just the tip of an iceberg: one shudders at what our mental lives would be like if our psyches had a more exacting, engineering mentality, demanding that we have privileged (pseudoconative) access to all the significant heavy work, enough to give us a sense of all the factors, necessary and sufficient, underlying our conscious performance, i.e., the capacity to scan the underlying program, at least. If such were the case, psychology and neuroscience would be out of business, and, from the vantage point of our armchairs, we would have long since designed super-computers modeled on the real, introspectively accessible brain. I doubt, however, whether all these extra "privileges" would have bought us commensurate evolutionary gains. For one thing, our brains would probably have to have been orders of magnitude bigger (not to mention von Neumann [1958] considerations of limits on self-representational capacity).

So whereas we are denied these mechanistic privileges, we are compensated by a full-blown sense of captaincy anyway, like a figure-completion phenomenon. I benightedly feel I know "how" I add; that "I" retrieve names, memories, etc. Only cases of significant delay, such as tip-of-the-tongue phenomena and "creativity after incubation," suffice to daunt our self-assuredness, and they only serve to impel us to multiply our fictions by positing yet another mind, very much like our own clever and masterful one (only this time "unconscious") to do these tricks for us (see Grünbaum 1986)! And yet we do not seem troubled by the fact that even in those easy, predictable, short latency cases, we are not really doing the tricks at all!

7.4. How can such a temporally delayed, post-hoc phenomenon as pseudoconative scanning have any causal efficacy or positive function at all?

This is a well-justified question, but it should be kept in mind that the pseudoconative feature (or module) is just one portion of a hierarchical information processor: the part responsible for our subjective experience. I described it as scanning lower-order sense data, but being somewhat "after-the-fact" about this, in the sense that the heavy work is done elsewhere, coordinated by a middle-man, etc. (all unconscious), and then "retrorationalized" upstairs. The selection actually gets done for this higher processor before the pseudoconative illusion of awareness of outcome is generated. The fact that the outcome is itself the object of this processor -- i.e., that it constitutes its input -- is what is meant by autoreferentiality, and I have conjectured that this autoreferentiality acts as a stabilizer, contributing (causally, not epiphenomenally) to the coherent integrated function of this hierarchical mechanism.

To put it another way, one is in error to suppose that it is "I" who do the selecting and integrating. That is all done for me (unconsciously). I simply get an indirect and post facto glimpse of a negligible portion of the iceberg, together with an illusory sense of myself doing the whole job. And even this version is too dualistic, for what one ought to say is that "I" and "my experiences" are pseudoconatively constructed out of ongoing sensory information, stored information and delayed feedback from processes going on elsewhere. Yesterday's feedback is tomorrow's predictive memory in this continuous, ongoing constructive process.[17] Autoreferentiality is eminently bound to sequential information processing, with consciousness a buffer, not for the immediate present, not for the just-past to which it pertains, but for the immediate future: psychological instant T0 (whatever its length) is processed and integrated unconsciously; then it is kicked upstairs at time T1 (and retrorationalized to T0); then whatever is actually happening at T1 can draw on the conscious buffer information (which is actually from T0) for continuity, etc.
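The temporal bookkeeping just described can be caricatured as a one-step pipeline, sketched below. Again this is an assumption-laden toy, not a model: the single-step delay and the token standing in for "integration" are mine:

    from collections import deque

    class ConsciousBuffer:
        """Toy T0/T1 pipeline: each psychological instant is integrated
        unconsciously, then "kicked upstairs" one step later and
        retrorationalized to the instant it pertains to, while current
        processing draws on that (one-step-stale) buffer for continuity."""

        def __init__(self):
            self.pipeline = deque()  # unconsciously integrated instants
            self.buffer = None       # the "conscious" buffer (contents from T-1)

        def step(self, t, raw_input):
            # Unconscious heavy work on the current instant:
            self.pipeline.append((t, ("integrated", raw_input)))
            if len(self.pipeline) > 1:
                t0, content = self.pipeline.popleft()
                # Delivered upstairs at t, but referred back to t0:
                self.buffer = {"referred_to": t0, "content": content}
            # Whatever happens at t can draw on the buffer (actually from t-1)
            return self.buffer

    cb = ConsciousBuffer()
    cb.step(0, "sunset begins")  # -> None: nothing has reached "upstairs" yet
    cb.step(1, "darker still")   # -> instant 0's content, referred back to 0

At each step, what is "consciously" available pertains to the previous instant, although it is referred back to that instant as if it had been available all along.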

To persevere and ask why all the above could not be done completely unconsciously (with essentially the same hierarchical division of labor) is to ask a systems-theoretic question on which I have already declared my conceptual bankruptcy. If there is a positive reason, it will be a technical one, and probably a rather compelling one, otherwise our systems would not have optimized that way. On the other hand, there may be no positive reason, and we may have evolved as conscious creatures rather than equally well-adapted turing-identical unconscious ones merely because of chance or initial conditions.

7.5. Just what is the illusion anyway? Surely it is not an illusion that I am aware?

Of course not (when one is indeed aware): but many of its particulars are illusory, such as the sense of agency, the sense of doing the whole job, etc. Moreover, the effect is accomplished by means of a temporal illusion. The overall causal picture derived from our consciousness is basically wrong; and that we are in fact prey to the illusion is evidenced by the counterintuitivity of the present account!

7.6. Am "I" a pseudoconative illusion as well?

It depends how one sees oneself. If one is careful and Cartesian, one can perhaps recognize that one is a (pseudoconative) construction, just like all the other objects in consciousness. Perhaps it is better to think of oneself as a necessary fiction in the service of autoreferential processes.

7.7. Is the "mind" then not just another construction that can be eliminated, leaving no "mind/brain barrier" to speak of?

To arrive at the point where one can even contemplate eliminating mental entities as fictions one must first have accounted for how an information processor can have fallen prey to such fiction. For this, one needs a mechanistic model accounting for the inescapable features of our behavior, cognitive competence, and subjective phenomenology. In the present model, "I," my "mind," "my" actions, etc., turn out to be (possibly useful) constructions that such an information processor generates (pseudoconatively); indeed, if they are necessary (rather than accidental) fictions, then perhaps they must be generated by any mechanism that fully simulates our capacities.

7.8. Isn't language necessary for consciousness?

Language is a universal, all-purpose communicative code that we deploy in interaction with other individuals (see Steklis & Harnad, 1976). There seem to be two reasons why language or language capacity has been wrongly supposed to be prerequisite to (or even synonymous with) consciousness: (1) The linguistic code itself shares many of the constraints of the mind/brain barrier; it can by symbolic description approximate our experiential lives (see Chapter X), but it cannot directly communicate qualia; in a similar vein, like any other communicative code (although all others are inferior), language carries information, and it is informational content that consciousness conveys. (2) More compelling is the fact that conscious experiences are reportable experiences, and somehow the capacity to report seems to be a necessary condition for our belief (not to mention the experimentalist's) that a candidate is conscious (see Chapter X). According to the view described here, language has more to do with matters of intellect and intelligence than with consciousness, which must precede language, temporally and technically.

7.9. It's fine to speak of "afterthought," but what about "forethought"? If all our willful planning is just so much wishful thinking, why bother?

Because we have no choice (but to make choices): Irrespective of whether pseudoconation is a necessary systems constraint in any mechanism that has our full behavioral capacity or it is merely an accidental fellow-traveller, we can go no further in eliminating the attendant fictions it generates than has been done in the present account. That is, it can be pointed out that such mechanisms have such illusions, but being such mechanisms ourselves, we still remain prey to those illusions. Anything I decide to do, including to abandon all further decisions, I decide pseudoconatively.

8. Conclusions

Let me now abandon the battlefield rhetoric of "assaults" on "mind-brain barriers" to state what actually seems to have emerged from the approach explored in this chapter:

1. For the skeptic concerning the mental life of an automaton that simulates us completely (i.e. passes the Turing test for every performance of which a human subject with, say, an IQ of 100 is capable) I have provided one less prima facie reason for being skeptical, should the automaton turn out to possess the PCON and PCONSC modules: Since it is apparent that the function of these modules, if any, must be an "inner-directed" one, rather than being merely another generator of outward performance, there would perhaps be marginally less reason to be skeptical about the consciousness of a "compleat automaton" that had such modules than about one that did not.

As to the residual skepticism that remains, I would be content to have helped cut doubts about automata down to the size of doubts about other minds in general. That would certainly be a big step forward for the robot community.[18]

2. I have suggested that the "compleat automaton" may not even be implementable unless it has modules like PCON and PCONSC. Although this argument is nondemonstrative, the modal (i.e., possible-world) fantasy of philosophers that it calls into question (that of an unconscious compleat automaton) is even less so, providing not even the merest hint of the nature of the putative mechanism. We, after all, are compleat automata with consciousness. Being the only successfully realized entities of that sort, we provide 100% inductive weight to the inference that all compleat automata (i.e., automata that can do everything we can do) will likewise have to be, like us, conscious.

3. I have suggested how brains and machines could play sophisticated tricks with virtual time and virtual causality, and how these could contribute to the phenomenology of free will and consciousness.

4. Finally, I have tried to cut the phenomenology of free will and consciousness down to size somewhat, suggesting that our nervous systems are playing tricks on us, giving us a sense of mastery and control that is illusory. We have privileged access only to a tiny tip of the information-processing iceberg, but tricks with time and causality generate a figure-completion effect that convinces us that we do and know a great deal more.

What weight should be given in cognitive modeling to concerns about "capturing" consciousness such as those running through this chapter? This is hard to judge in advance, but since it seems to be a logical fact that an unconscious device that is behaviorally indistinguishable from ourselves will accordingly be indistinguishable from a conscious device with the identical behavioral capacity,[19] it would appear that, if we are not to be dualists about the causal role of consciousness, we must be methodological epiphenomenalists: We must settle for designing "as-if" models and stop worrying about the problem of whether or not they really capture consciousness. In any case, as chapter X will show, we have our hands abundantly full with neoconstructive performance modeling.

REFERENCES

Buser, P. A. & Rougeul-Buser, A. (eds.) (1978) Cerebral correlates of conscious experience. Amsterdam: North Holland.

Eccles, J. C. (1978) A critical appraisal of brain-mind theories. In: Buser & Rougeul-Buser (1978, 347 - 355).

Fodor, J. A. (1975) The language of thought. New York: Thomas Y. Crowell.

Fodor, J. A. (1980) Methodological solipsism considered as a research strategy in cognitive psychology. Behavioral and Brain Sciences 3: 63 - 109.

Fodor, J. A. (1981) RePresentations. Cambridge MA: MIT/Bradford.

Fodor, J. A. (1985) Precis of The Modularity of Mind. Behavioral and Brain Sciences 8: 1 - 42.

Grünbaum, A. (1986) Précis of The foundations of psychoanalysis: A philosophical critique. Behavioral and Brain Sciences 9: 217-284.

Harnad, S. (1976) Induction, evolution and accountability. Annals of the N.Y. Academy of Sciences 280: 58-60.

Harnad, S. (1982a) Neoconstructivism: A unifying theme for the cognitive sciences. In T. Simon & R. Scholes (Eds.) Language, mind and brain. Hillsdale, N.J.: Lawrence Erlbaum Associates.

Harnad, S. (1982b) Metaphor and mental duality. In T. Simon & R. Scholes (Eds.) Language, mind and brain. Hillsdale, N.J.: Lawrence Erlbaum Associates.

Harnad, S. (1982c) Consciousness: An afterthought. Cognition and Brain Theory 5: 29 - 47.

Harnad, S. (1984) What are the scope and limits of radical behaviorist theory? Behavioral and Brain Sciences 7: 720 - 721.

Harnad, S. R., Steklis, H.D. & Lancaster, J. (eds.) (1976) Origins and Evolution of Language and Speech. Annals of the New York Academy of Sciences 280.

Kornhuber, H. H. (1978) A reconsideration of the brain-mind problem. In: Buser & Rougeul-Buser (1978, 319 - 334).

Kornhuber, H. H. (1984) Attention, readiness for action, and the stages of voluntary decision: Some electrophysiological correlates in man. Experimental Brain Research supp. 9: 420-429.

Libet, B. (1978) Neuronal vs. subjective timing for a conscious sensory experience. In: Buser & Rougeul-Buser (1978, 69 - 82).

Libet, B. (1985) Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences 8: 529-566.

MacKay, D. M. (1978) What determines my choice? In: Buser & Rougeul- Buser (1978, 335 - 346).

McClelland, J. L., Rumelhart, D. E., and the PDP Research Group (1986) Parallel distributed processing: Explorations in the microstructure of cognition, Volume 2: Psychological and biological models. Cambridge MA: MIT/Bradford.

Popper, K. R., & Eccles, J. C. (1977) The self and its brain. Heidelberg: Springer.

Rumelhart, D. E., McClelland, J. L., and the PDP Research Group (1986) Parallel distributed processing: Explorations in the microstructure of cognition, Volume 1: Foundations. Cambridge MA: MIT/Bradford.

Skinner, B. F. (1984) Reply to Harnad. Behavioral and Brain Sciences 7: 721-724.

Steklis, H. D. & Harnad, S. R. (1976) From hand to mouth: Some critical stages in the evolution of language. Annals of the New York Academy of Sciences 280: 445-455.

von Neumann, J. (1958) The computer and the brain. New Haven: Yale University Press.

FOOTNOTES

1. If we were aware of how we did these things, we could presumably construct viable theories of cognitive function from our armchairs.

2. In chapter X, I will try to show that a functional explanation of a robot that can pass the "Total Turing Test" -- i.e., exhibit performance capacity that is completely indistinguishable from a human being's -- turns out to be the most that one can expect from empirical psychology and cognitive science.

3. Familiarity, affective value, and perhaps "meaning" itself, would all be instances of such internal, "value-added" cues furnished by the processor.

4. The "objective" problem of determinism concerns the question of whether all objective physical phenomena have causal antecedents that fully determine them, or whether there exist "uncaused" or "emergent" physical phenomena too. This problem has been confused and conflated with the "subjective" problem of determinism, which concerns the causal relation between objective and subjective phenomena. The two problems are orthogonal, and all the obvious permutations seem to be tenable: One can be an objective determinist and yet believe in freedom from determinism for subjective phenomena. This would be a form of dualism. One could even believe in causal interactions between objective and subjective phenomena in the objective-subjective direction. Only if one believed in subjective-objective interactions could one no longer consistently be an objective determinist. I happen to be an objective determinist concerning physical phenomena and an epiphenomenalist concerning subjective phenomena. I do not believe that the latter have any independent causal influence on physical effects. One can also be an objective indeterminist -- believing in the possibility of uncaused or emergent physical phenomena -- and still subscribe to any of the positions on subjective determinism, from epiphenomenalism to free will. This position is even consistent with subjective-objective causation. If the only grounds for one's objective indeterminism, however, are statistical or quantum-mechanical considerations, then there is no objective basis for equating those sources of physical indeterminacy with the subjective ones, which appear to be an independent phenomenon, calling for independent arguments.

5. My approach to this problem differs markedly from that of MacKay (1978), who appears to favor mysterious third-world forces. He argues that even a true, complete description of an individual's brain state has no "unconditional claim to that individual's assent"; hence the individual is at least in that sense free, despite the existence of the full determinate account. But surely the state description, if it is truly complete (and if such a notion is coherent) must contain all transition contingencies with respect to any possible future input. One of these contingencies would be whether or not the individual would in fact believe the account if given it. This is a free parameter in the (hypothetical) account, a parameter whose value in any actual state description would (like all other input/output relations) be completely determined by specific initial states and subsequent input contingencies, not by "free will." (Invariably it is the "truth" and "completeness" premises that are covertly jettisoned at some stage in arguments of this sort, thereby, of course, trivializing them; see Harnad, in preparation, b.)

6. This process of retrospective internal discrimination, which will be used again later in this chapter, is strikingly reminiscent of the subjective delay effects reported by Libet (1978) in his cortical stimulation studies. By comparing objective and subjective times for peripheral and central sensory stimuli Libet showed that the brain can transform the serial order of events, making effects appear to have preceded their causes (thus allowing them to masquerade as causes rather than effects). My model diverges from Libet's interpretation of the findings only in that it depicts the entire process as being perfectly mechanistic, with the subjective component a temporal illusion (a perceptual trick that the CNS is rather good at playing, especially with short term expectancies) rather than a potential supernumerary reality. (Libet, like Eccles, is a dualist).

7. Libet (1985) reports interesting evidence for the existence of a mechanism very much like this. He found that the "readiness potential" -- a slow premotor electrical potential that precedes a voluntary action by about 550 msec (Kornhuber 1978, 1984) -- begins as much as 350 msec before the instant people report being conscious of having willed the action. Libet of course again interprets the effect dualistically (instead of mechanistically, as I do). He suggests that although voluntary actions may be unconsciously predetermined, as his findings indicate, they can still be consciously vetoed during a buffer time. The more natural and parsimonious interpretation of his results, however, seems to be that such vetoes would likewise first have been unconsciously predetermined in exactly the same way actions were.

8. This characterization of attention seems acceptable from both the objective mechanistic viewpoint and the subjective experiential one.

9. Strict "autoreferentiality" would require an internal process that took itself as input. On the face of it, this sounds self-contradictory, but it can be resolved if there are actually two processes, separated in time, with the input to the second being an image of all or part of the first. Consider the old conundrum of what the object of my awareness is when I am "aware of being aware of red." When I am "aware" of red, red is the object. When I am "aware of being aware of red," the object is my "awareness of red." To make such second-order internal discriminations, autoreferential processes must occur. Note, however, that in the following text I do use "autoreferentiality" somewhat more loosely, as pertaining to any higher-order pseudoconative process taking another higher-order process or product as input; this is still "auto" in the sense of having as object events internal to the same processor, but not always to the self-same process.

10. It is evident that this account sets great store by the process of discrimination. Chapter X made a case for the fact that discrimination and categorization are the fundamental cognitive activities, determining, as they do, what we can and cannot tell apart. (In my view, what you cannot tell apart, you cannot tell.) In a forthcoming paper (Harnad, in preparation, c) I argue that the attempt to discriminate and sort certain experiential (and existential) categories (e.g., being alive, being awake, being aware, being) gives rise to many of the familiar philosophical paradoxes because these categories are intrinsically "uncomplemented": They have no negative instances. This missing complement must accordingly be supplied by a highly questionable process of analogy, instead of by the context of confusable alternatives that exists and is available for sampling and for converging on the reliable confusion-resolving intracontextual invariants that underlie all normal category formation (see Chapter X).

This uncomplementedness problem represents a serious threat to the coherence of my qualia/nonqualia discrimination. For whereas the notion of an unconscious discrimination between qualia and nonqualia is perfectly viable, the notion of a conscious discrimination between them is not. Note that the latter would require a self-contradictory process of "experiencing nonexperience," having a "quale of a nonquale," or "being aware of being unaware." I am also not entirely at ease with having called pseudoconation a "possible partial" penetration to the "hot" side of the mind/brain barrier. For an unconscious ("cold") discrimination between willed and unwilled actions is not entitled to lay claim to any "hot" phenomenology at all. I take these lurking incoherencies to bode ill for the kind of uncompromising constraint I imposed at the beginning of this chapter ("no capitulations of either chirality").

11. Arguments to be presented in chapter X imply that there is no way to resolve such questions. Not only does nondualism rule out the possibility of giving free will or consciousness any independent causal role (which accordingly seems to amount to some form of ontological epiphenomenalism), but the turing-indistinguishability of conscious and nonconscious devices seems to limit us to a methodological epiphenomenalism (section 8) in theory-building. In other words, we may be forced to "capitulate to the right" on purely methodological grounds. If this is so (and I suspect it is), then "assaults on the mind/brain barrier" like this one cannot amount to more than exercises in objective/subjective analogy and hermeneutics (Harnad, in preparation, d). Even if PCON and PCONSC modules really existed and functioned exactly as described, there would be no way of testing and confirming it (except to be such a module) and no way whatever to test and confirm any putative answers to "what for" questions.

12. Language, in this view, would not be necessary for consciousness, although consciousness would certainly be necessary for language; i.e., a considerable repertoire of autoreferentiality may have to exist before an organism can refer propositionally (cf. Steklis & Harnad, 1976).

13. Absence of "causal efficacy" refers only to the illusoriness of conative cause. It is not implied that pseudoconation is itself uncaused, or without causal consequences (see 7.1, below).

14. If this sounds like a "just-so" story to the reader, then he shares the impression of the author. I believe that all "explanations" of the functional or adaptive role of consciousness are doomed to be ad hoc, unparsimonious and undecidable.

15. The problem of the incommensurability of subjective qualia with objective physical properties -- e.g., what does the subjective (so-called "secondary") quality of redness (or even the so-called "primary" quality of squareness) have to do with red (or square) objects? -- suggests that it's not even clear what "faithfully" could mean here.

16. I take it that consciousness not only represents the paradigmatic case for what an "epiphenomenon" -- a phenomenon with no independent causal role, a dangling effect -- might be, but it is also the only case. The usual analogy with the noise or heat that a machine produces as a byproduct of its functioning is clearly unsatisfactory as an instance of an epiphenomenon, because such byproducts, though usually inconsequential, do have independent causal powers (they can burn a wire or deafen a technician), whereas consciousness (on the present view) does not. Nor is the evolutionary notion of a vestigial organ or a fellow-traveller trait (one that has not been directly selected for and that performs no positive adaptive function, having arisen by chance or from initial conditions and "piggy-backing" on positive traits) a suitable analogy, for there too, no real causal dangling is involved. (On the other hand, the latter may well be the right evolutionary scenario for the origin of consciousness, rather than the adaptationist just-so story I struggled to tell in section 6.)

17. I am speaking, of course, of successive instants, not days!

18. Perhaps the only other contribution the information-processing perspective makes vis-a-vis the mind/body problem is to suggest that all information-processors with the level of behavioral sophistication of the compleat automaton are bound to have it (the mind/body problem, that is); that information-processors never have "direct" contact with objects, but only with data (information) about objects, from which they derive representations, hypotheses, etc. The mind/body problem may well be the theory/data problem, in information processing terms (cf. Fodor 1980).

19. Nor can neural data help resolve such uncertainties. A neurally indistinguishable device would be subject to the very same uncertainty (i.e., you could not know whether it was conscious without being it), which is otherwise known as the "other minds" problem (3.6.6). Furthermore, neural function can only be identified and understood theoretically by modeling what neurons can do, which includes their behavioral capacity! So neural function must be validated against behavioral function; it is not an independent indicator (see Chapter X, section 2.4).