Rather than talk about what consciousness is, I'll tell you there are two necessary things for consciousness to arise:

wakefulness and awareness.

It goes like this:

wakefulness --> awareness --> consciousness

Which means that you need to be awake to be aware and you need to be aware in order to be conscious.

If someone has a disorder affecting wakefulness, such as being in a coma, then he/she can't be aware and therefore cannot be conscious. If someone is awake but is in a vegetative state, he/she can't have awareness and therefore cannot be conscious either.

Consciousness: the complex phenomenon of evaluating the environment and then filtering that information through the mind with awareness of doing so.

There are two purposes of consciousness:

  • Monitoring: to keep track of mental processes, behavior, and the environment; to maintain self-awareness.
  • Controlling: to plan actions based on information received from the monitoring process.

There are several levels of consciousness:

Preconscious - level of consciousness that comprises information that could readily become conscious, but is not continually available. For example, what does your bedroom look like? You are able to recall this information, and yet you were not thinking of it until asked to do so. This also explains the tip-of-the-tongue phenomenon: you know you know something, and yet you are unable to recall it from your preconscious.

Subconscious/Unconscious - material too difficult to handle at conscious level is repressed, and is unavailable to us consciously.

Altered States of Consciousness - awareness is somehow changed from our normal waking state. Some common characteristics of altered states of consciousness are:

  • cognitive processes are different
  • perceptions of self and world may change
  • normal inhibitions and level of control over behavior may weaken

There's something queer about describing consciousness. Whatever people mean to say, they just can't seem to make it clear.
          - Marvin Minsky

When it comes to consciousness, only one thing is clear.

There exists no test, no procedure, no method, that can objectively identify the presence of consciousness in something.

This may be a difficult statement to comprehend, especially because it can quite possibly throw someone's assumptions about the world into chaos. We all go around, secure in the belief that every person we interact with is conscious, just as we are. There is no conclusive evidence that exists, or can be found, to show this to be the case.

Determining consciousness can only be done by the entity doing the testing. I can determine for myself that I am conscious. This is not because I define the word to fit whatever I am, but because I am the only entity that can completely understand and comprehend my awareness and self-awareness, that can truly realize that mental processes are occurring, doing the thinking and the remembering and the debating and the daydreaming. Though, to be honest, it's fairly difficult to even describe how I know I'm conscious.

Testing another entity for consciousness would of course be more difficult, relying on examining the "output" of the entity, interpreting the actions it takes and the sounds it makes, and determining whether they are in line with what we'd expect from a conscious entity. So, why couldn't a machine be built to emulate all of these in a purely functional way, from a highly complex set of instructions? Would such a machine be conscious? It would be no more conscious than the man inside Searle's Chinese Room understands Chinese.

Part of the difficulty may even be due to the fact we can't even properly DEFINE consciousness. All these writeups in this node, and there's not a single one that anyone can point to that completely and authoritatively defines consciousness. In many ways, it is sort of like trying to define life. We can't really come up with a complete definition, we just sort of "know it when we see it". As stated in the Marvin Minsky quote, I would bet every person who's added a writeup here would state that they're not happy with what they wrote, that they know it's not adequate, that it is not quite what they meant, but they can't even SAY what they mean. I even feel that way about this writeup.

This definitely has repercussions when discussing artificial intelligence. If we can't properly prove that a human being is conscious, how will we know when a machine is? Is there some specific test we can perform to consider it conscious? If so, then we can program one to pass the test, but that doesn't mean it's conscious. When a machine can learn? Already done. When a machine can observe and react? Done, in many ways, depending on the definition of "observe".

So, when it comes down to it, the only reason we accept that other human beings are conscious is assumption. A person feels conscious themselves, sees no reason to believe that it is any different for other humans, and thus is willing to accept that they are conscious. Whether or not consciousness is assigned to non-human creatures, such as dolphins, elephants, or cats, isn't even consistent among people. However, should technology keep progressing, we will have to evaluate much more carefully whether or not to consider something conscious. If consciousness does not depend on a "soul" or a "spirit", something beyond the body that we can't duplicate, there's a good chance we'll soon enough have something man-made exhibiting behaviors that make the question very important.

Consciousness is equivalent to self awareness, coupled with the ability to act with reference to the self.

Saige is correct in pointing out that we cannot test whether or not other people or things are self aware, or are acting with reference to a self. This is because we cannot test whether or not they have a self.

This has led some to suggest that there is no self at all, which is quite obviously rubbish; in fact, about the only thing we know for certain at all, through every waking and dreaming moment, is that there is a self.

The logical positivists, amongst other spoilsports, have suggested that the perception of the self is simply a lie caused by complex things happening in the mush in our heads. This is also quite obviously rubbish, but for a different reason; if the self is a deceit, something must be being deceived, and this thing we can call the self.

Consciousness is self evidently not explicable through a mechanistic/deterministic view of the world. It is something else.

If conscious things exist, and consciousness cannot be explained by determinism, then the totality of things cannot be wholly deterministic.

An excellent node -- I'm glad I stopped by. I deeply admire the lines of attack that many in the cognitive science community have led on this subject over the past few decades, and I only wish more poets would join the fray, though places like Everything, dominated as they are by left-brainers, discourage poetry as a rule (cf. fuzzy and blue's note in the editor log: "poetry always gets down-voted"). Maybe the syntactical logic types scare off my poetical comrades. If true, it's too bad and ultimately the poets' lookout, since the fray is well worth joining for artists and scientists alike.

Of course, any investigation of consciousness is a more or less meandering stroll through a vast minefield of tautologies, contradictions, mutual exclusions and circular conclusions. The word itself is an intrinsically stacked tautology: when we say "consciousness", what we usually mean is "consciousness-consciousness" or "self-consciousness". In a letter to Wolfgang Pauli, Carl Jung called it, "... reflected consciousness (i.e. 'I know that I am conscious')." (In my own short-hand, I've taken to writing this as "C2", not to be confused of course with the speed of light squared.) Happily, as an artist, it's not my job to sweep the logic minefield or even step past the individual booby-traps, but rather to guide unsuspecting theatre-goers directly to the points of possible explosion. I'd just like to blow a few minds in the myriad ways mine has been blown over the past 12 years that I have been reading, thinking, talking and planning theatrical lines of attack on this subject.

Recently, what has struck me most profoundly is how often great thinkers like Jung turn to the theatre for images and ideas to elucidate notions of consciousness. (For me this is a little like digging through an extremely important archeological excavation, one in which all the great anthropologists are involved, and finding potshards from your own goofy, disinherited family.) This from Jung, describing the mechanisms of the collective unconscious:

...You go to the theatre: glance meets glance, everybody observes everybody else, so that all those who are present are caught up in an invisible web of mutual unconscious relationship....

Mankind has always formed groups which made collective experiences of transformation—often of an ecstatic nature—possible. The regressive identification with lower and more primitive states of consciousness is invariably accompanied by a heightened sense of life... .The inevitable psychosocial regression within the group is partially counteracted by ritual, that is to say through a cult ceremony which makes the solemn performance of sacred events the centre of group activity and prevents the crowd from relapsing into unconscious instinctuality...
The ritual makes it possible for him to have a comparatively individual experience even within the group and so remain more or less conscious. But if there is no relation to a centre which expresses the unconscious through its symbolism, the mass psyche inevitably becomes the hypnotic focus of fascination, drawing everyone under its spell. That is why masses are always breeding-grounds of psychic epidemics, the events in Germany being a classic example of this.
Concerning Rebirth, circa 1940

In all these current discussions of constructed consciousness too little consideration is given to the unconscious, either as Freud formulates it in the strictly individual sense, or as Jung expands it to the collective. (Obviously as a theatre artist, Buddhist and all-around woo-woo-theory-connoisseur, I lean towards Jung.) I suspect that this is because cognitive scientists and modern philosophers, natural-born enemies, are united in their suspicion of anything so resistant to analysis as the big U. It's vaguely amusing that over the last fifty years, with the advent of the cyber revolution, Western thinkers have opened up the toy box of cognitive paradox and begun tinkering naively as if they were the ones who discovered it. Only the bravest and most honest of them will look beyond Descartes and Plato to admit that Zen masters and sages of all stripes have been taking the toys apart and putting them back together in playful, evocative ways for millennia. I think of a particular koan from the Mumonkan, with one of my favorite commentaries by Mumon:


The wind was flapping a temple flag, and two monks started an argument. One said the flag moved, the other said the wind moved; they argued back and forth but could not reach a conclusion. The Sixth Patriarch said, 'It is not the wind that moves, it is not the flag that moves; it is your mind that moves.' The two monks were awestruck.

MUMON'S COMMENT: It is not the wind that moves; it is not the flag that moves; it is not the mind that moves. How do you see the patriarch? If you come to understand this matter deeply, you will see that the two monks got gold when buying iron. The patriarch could not withhold his compassion and courted disgrace.

MUMON'S VERSE: Wind, flag, mind moving, All equally to blame. Only knowing how to open his mouth, Unaware of his fault in talking.

During some research on this subject for a potential play I'm writing, I was told by author Richard Rhodes to check out www.u.arizona.edu/~chalmers/papers/facing.html which contains a paper by philosopher David Chalmers called "Facing Up to the Problem of Consciousness". It's ironic that Chalmers spends nearly a quarter of his paper listing the many pitfalls of solving the "hard problem", then goes on to spend at least several more pages tripping right into the very same traps. He rationalizes this as setting appropriate constraints on an ultimate nonreductive theory, but I can't help but suspect some of these academic types get paid by the pound of shit they shovel. His long-postponed proposition is that consciousness, or "experience" as he calls it, is a nonreducible fundamental component of the universe, like mass or space-time (I'm listening, I'm listening), and it is related to the 'double-aspect' nature of information-- physical as well as phenomenal. (Hmmm, sounds like we've circled back to tautology again.) Still, if information is intrinsically "phenomenal", as Chalmers seems to hope, this takes us into territory that I, as the artist-Buddhist-ne'er-do-well, am gratified to finally arrive in. Says Chalmers:

The other possibility is that... experience is much more widespread than we have believed, as information is everywhere. This is counterintuitive at first, but on reflection the position gains a certain plausibility and elegance. Where there is simple information processing, there is simple experience, and where there is complex information processing, there is complex experience.

Or as Siddhartha Gautama more succinctly put it 25 centuries earlier, "All things are Buddha things." So when Dick Rhodes says, "...Both the self and the mind are social in origin and in function.... Selves are not given. They are constructed..." I would agree somewhat and generally, but I'd restate it in slightly fruitier terms (embracing rather than skirting tautology as I go): Consciousness emerges within the nurturing presence of...wait for it... consciousness: reflective, fundamental, indivisible, borderless. (The 'self' might have borders, more or less fuzzy, but the irreducible, fundamental element of consciousness that imbues the self, does not.) To use woo-woo metaphysical terms, consciousness is one circle, containing an infinite myriad of circles, all identical to it. {I just got a whiff of Bertrand Russell's set of all sets that are not members of themselves, but I'm not sure it means anything.} (Sometimes I wish I were smarter. Most times I'm glad I'm not.)

Many in the A.I. community doubt that it's necessary to solve the problem of human consciousness in order to design a system capable of emulating it. They're probably right; but given humankind's recent history of technology outstripping its moral development, I wonder whether such an attempt is wise or even ethical. In the Buddhist framework of the Four Noble Truths, #1 is simply "Suffering". {Often confusingly mistranslated as "Life is suffering," or "Existence is suffering," thus causing a lot of unnecessary and completely inaccurate associations of Buddhism with Nihilism.} Suffering attends consciousness, to lesser or greater degrees, wherever you find it. Before we humans go implementing our clever architectures for sentience, we'd be wise to contemplate the possibility of creating something capable of suffering in whole new ways we've never dreamed of. As an artist (and a Buddhist for that matter), it's my job to help heal suffering, confusion, loneliness, wrong-headedness and narrow-mindedness as best I can, where and whenever I can. It might be nice if some of the scientists and modern philosophers of the world added this responsibility to their duty roster as well. Sure, we may not need to really understand human consciousness before we start hacking at designs, but would it be such a bad idea to try? The same imperatives that attended the Manhattan project do not apply here. It would be hateful to imbue a suffering monster with the blessing/curse of "self-hood" merely because we thought it might be neato.

There's a quote which I've kept on the wall in my office, more or less continuously for the last 14 years. It's from Joseph Campbell (I know, I know: Igor to Jung's Dr. Frankenstein, and subsequent high-priest to woo-woo thinkers everywhere), but this has always held great contemplative value for me, and I think it relates well to the subject at hand:

Creative artists... are mankind's wakeners to recollection: summoners of our outward mind to conscious contact with ourselves, not as participants in this or that morsel of history, but as spirit, in the consciousness of being. Their task, therefore, is to communicate directly from one inward world to another, in such a way that an actual shock of experience will have been rendered: not a mere statement for the information or persuasion of a brain, but an effective communication across the void of space and time from one center of consciousness to another.

In The Emperor's New Mind, Roger Penrose mentions quantum gravity. Gravity represents a kind of energy, because it can accelerate a mass (well, actually two masses). The direction of acceleration of one of the masses is toward the other mass. When the distance between the two masses shrinks, some potential energy becomes kinetic energy, and vice-versa. It seemed to me that he was suggesting that the amount of energy that gets transferred might be packaged into quanta, and an explanation of the situation, presumably much better than this one, would be the heavily sought theory of quantum gravity.

What has this got to do with Consciousness?

Well, he seemed also to be saying that perhaps consciousness is bound up with this packaging of gravitational energy. If he explained how, it was lost on me. Or perhaps it took a while for me to get it. I have an idea now, but I don't think he suggested it. So here it is...

My idea is that when sensory information becomes available to the brain, the delicacy of neural behavior allows many possible paths to be followed simultaneously, and that a being can choose one of them, imposing its will (animals have brains too) on top of quantum mechanics. The choice made reflects one quantum of gravitational energy being translated from kinetic to potential or vice-versa.

I propose an experiment

I believe there are microprocessors on the market that use quantum mechanics to produce random numbers. Suppose that we built a device with software that used such a random number to derive an amount of time to wait before going off. And suppose we designed this system so that it should average a waiting period of one second. And suppose that another device would detect when it goes off, wait one second, and then turn it back on. When running, these two systems working together should cause the device with the random number generator to be on about half the time.

Now, we'd expect such a system to produce off-on events at the same rate all the time. But what are we to conclude if the rate tends to drift consistently, so that after leaving such a device alone for, say, an hour or a day or a year, it stays on for an average of 0.8 seconds instead of 1.0 second? I would conclude that the random number generator has deteriorated, or that the device has a desire to be off, or to go off more often, and such a desire is as conscious as that device can get. What if the rate drifted consistently the other way? Maybe the device likes to be on.
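The two-device loop and the drift check can be sketched in software. This is only a toy model: an ordinary pseudo-random generator stands in for the quantum source, the on-periods are drawn from an exponential distribution averaging one second (an assumption, since the writeup doesn't specify the distribution), and the function names are mine, not those of any real device.

```python
import random

def duty_cycle(cycles, mean_on=1.0, off_time=1.0, rng=None):
    """One measurement window: simulate `cycles` off-on events and
    return the fraction of time the device spends on."""
    rng = rng or random.Random()
    total_on = total = 0.0
    for _ in range(cycles):
        on = rng.expovariate(1.0 / mean_on)  # random on-period, mean ~1 s
        total_on += on
        total += on + off_time               # second device waits 1 s, then restarts
    return total_on / total

def drift_slope(windows, cycles_per_window, rng):
    """Least-squares slope of the duty cycle across successive windows.
    A slope consistently different from zero over long runs would be
    the anomaly the proposed experiment looks for."""
    ys = [duty_cycle(cycles_per_window, rng=rng) for _ in range(windows)]
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

rng = random.Random(42)
slope = drift_slope(windows=50, cycles_per_window=2000, rng=rng)
# With a fair random source the duty cycle hovers near 0.5 and the
# fitted slope is statistically indistinguishable from zero.
```

Of course, a software generator can't exhibit the hypothesized effect by construction; the sketch only shows what the null result looks like, and what counting and fitting would be needed to claim a drift with real quantum hardware.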

My theory of consciousness is that it can control how the wavefunction collapses if and when a conscious being chooses, according to its desire. If we could show that a system such as the one I propose above actually does have a rate that consistently drifts in one direction, we may have some better idea of how we register pleasure in our brains.

Consciousness (noun): the state of awareness of oneself; alertness and wakefulness.

The problem with consciousness is that it seemingly cannot be defined. Every attempt to do so ends up relying on other words such as "self-awareness" - which is actually not much more than a synonym.

The best description I've managed to come up with is this: "The state exhibited by the human brain when it is not damaged, resting, sedated or otherwise incapacitated".
And even so I'm implicitly discarding the possibility that I'm the only human that is conscious - see solipsism.

This description leaves a lot to be desired, of course - and it seems to actively exclude AI.

But wait - if consciousness is merely the state of a physical object, could we then not simulate it, provided we had detailed enough information about the laws of physics (be they quantum theory, string theory or what have you) and a sufficiently powerful computer?

Let's say we could simulate the human brain in detail - would the simulated brain be conscious?

Before you answer this question, consider your possibilities:

NO means that there is a component of consciousness that lies outside physical reality but is still capable of affecting physical reality. Even though almost all major religions entertain ideas of 'non-physical reality', science knows of no such thing - nothing can break the laws of physics. In fact, if we accept the idea that the mind is somehow not of this world, then we must be prepared to accept the claims of every crank who claims to be able to build a perpetual motion machine or perform telekinesis.

YES means that there is nothing in the mind that cannot be explained through the rules of biological, mechanical, electrical, quantum-mechanical or other theories. Since consciousness arises from these rules in the human brain, the same thing would also happen in a computer implementing the same rules.
The old argument that "a simulated rainstorm does not make one wet" simply does not hold true - to an object or observer inside the simulation the simulated rainstorm and the resulting simulated wetness is as real as it can be.
Unfortunately, if the laws of physics behave the way we think they do, free will is pretty much impossible and we're all predetermined - conscious or not. Some theories suggest quantum theory as a way around predeterminism. But even so, we have merely replaced the jail of predeterminism with the non-control of complete randomness - unless we postulate that there is some extra-physical free will behind quantum events, and then we're back where we started.

Sadly, it looks like we won't be able to find the answer to this until we know a lot more about both the brain and physics, and certainly not before we have much more powerful computers.

One Kind of Consciousness or Two?
Access and phenomenology in visual perception


The neuroscience literature concerning the putative distinction between conscious phenomenology and conscious access, with respect to visual perception, is surveyed. Several sets of empirical findings are considered (blindsight, iconic memory and change blindness, as well as signal detection theory experiments on monkeys). Two theoretical positions are described. One holds that conscious states are intrinsically accessible. A contrasting view is that there are two distinct categories, access consciousness and phenomenal consciousness, for mental representations of visual percepts. On the latter view, there are phenomenal states unavailable to conscious access, and access and phenomenology have distinct neural correlates. On balance, the experimental evidence is inconclusive. Some suggestions are given for further empirical work which could resolve certain remaining questions.


Phenomenal and Access Consciousnesses

Psychologists and neuroscientists studying visual consciousness distinguish between several types of consciousness; notably, they appeal to a distinction between access consciousness and phenomenal consciousness (Pinker, 1997). Phenomenal consciousness (PC) refers to the qualitative experience of content associated with sensory perception. Access consciousness (AC), in contrast, is the property of content which is made widely available to neural systems, including those involved in memory, perceptual categorization, decision-making, and others (Block, 2005). Access conscious content is generally subject to some form of report, verbal or otherwise. (The distinction between these two types of visual consciousness is closely related to the distinction between attention to and awareness of visual inputs; see Lamme, 2003.) There is disagreement as to whether PC and AC should be modeled as distinct or coextensive categories for mentally represented visual information (Wiens, 2007), as well as concerning whether one or two neural correlates are responsible for these types of consciousness (Block, 2005; 2005b; Baars & Laureys, 2005).

Two Points of View on Phenomenology and Access

In what follows two scientific viewpoints are considered. The first is part of what might be called "orthodox" global workspace theory, which hypothesizes that there is a network of neurons that integrates the activity of a diverse multitude of specialized networks into a unified global workspace (Baars, 2002). The view under consideration holds that an item of information (such as a visual percept) represented by a particular neural population is subjectively experienced as conscious when that population is mobilized via top-down attentional amplification into a state of synchronized activity involving neurons throughout the brain. This synchronized activity “broadcasts” the information in question to the global workspace, making it available for a variety of processing operations (Dehaene & Naccache, 2001). On this view, visual information becomes both conscious and access conscious (in the sense of the previous subsection) when the corresponding patterns of activity in the posterior cortex (Crick & Koch, 1995) undergo top-down attentional amplification. Consequently, this interpretation of global workspace theory holds that there cannot be PC without AC.

An alternative point of view, most strongly associated with the philosopher Ned Block (1995, 2005, 2007) but also advocated by several neuroscientists (Lamme, 2003; Koch & Tsuchiya, 2006) hypothesizes that conscious awareness of a visual percept hinges upon the occurrence of recurrent processing of the visual information in question. This recurrent processing involves networks of neurons linking activated regions of the visual cortex to higher level nuclei in the frontal cortex, as well as to other areas of the brain (Lamme & Roelfsema, 2000). Such recurrent processing, on this view, comprises a neural correlate (or causal mechanism) of PC, and can - but need not - be activated via the top-down attentional amplification process posited by the global workspace model. Instead, a “winner-take-all competition” for broadcast into the global workspace takes place among neural representations of visual information involved in recurrent loops of varying strength. Only once they are attended and broadcast to the global workspace do these representations become access conscious (Block, 2005). As a result, it is possible that strong, but not winning, representations are phenomenally conscious without having been attended or become accessible (Block, 2005; Lamme, 2003).

Evidence for the Orthodox Global Workspace View

Theoretical Support

As a self-consistent theory integrating much of what is known empirically concerning consciousness (in a broad sense) and working memory, global workspace theory has proved its conceptual usefulness many times over, and its predictions have been corroborated by a vast body of supporting evidence from neuroimaging studies (Baars, 2002). It would seem that this lends the orthodox global workspace view of access and phenomenology some credence, as it fits within a successful theoretical framework for understanding the mind and the brain. However, what is at stake in the debate concerning PC and AC is not the correctness of the global workspace theory in the large (this is widely accepted by the scientific community), but rather its “chauvinism” with regard to which neural events underlie conscious experiences (Block, 2007). The success of the global workspace hypothesis as a broad theoretical framework therefore does not entail the correctness of the orthodox view that only attended, and hence accessible, mental representations are conscious. Indeed, alternate accounts have been proposed (Block, 2005; Lamme, 2003) which incorporate almost the entirety of global workspace theory while nevertheless admitting into (phenomenally) conscious awareness certain mental representations that have not received top-down attentional amplification. Since these alternatives sacrifice no explanatory power with respect to describing access consciousness and other phenomena scientifically, the coherence and success of global workspace theory offers no theoretical evidence for rejecting a conceptual and neural distinction between PC and AC. We next turn to empirical, rather than theoretical, justification for the unitary, orthodox global workspace theory account of phenomenology and access.

Empirical Support

Blindspots and hemi-neglect. Dehaene & Naccache (2001) review one class of empirical research often invoked in support of the orthodox global workspace account of access consciousness. This research centers on contrasting phenomenology among (a) normal subjects with an ordinary retinal blindspot, (b) subjects who have developed a retinal scotoma producing an abnormal blindspot, and (c) subjects with parietal brain lesions suffering from hemi-neglect. There is robust empirical evidence, surveyed by Dehaene & Naccache (2001), that in case (a) the subject is unconscious of visual information presented before the blindspot, unconscious of this visual deficit (we do not “see” a hole in our visual field), and unable to process such visual information at an unconscious level. In case (b) the subject is unconscious of visual information presented before the blindspot and unable to process such information, yet conscious of his or her visual deficit. Finally, the most surprising results concern inattentional “blindsight” in hemi-neglect patients (c) who are unconscious of visual information before their neglected field and unconscious of their deficit, but are nevertheless able to process such information (as indicated, for example, by behavioral measurements of priming; see McGlinchey-Berroth et al., 1993).

Dehaene & Naccache (2001) interpret these findings to support the theory that consciousness requires top-down attentional activation. In case (a) subjects' normal retinal deficit entails that there is no neural representation of visual information corresponding to the topographic location of the blindspot; hence a fortiori no attentional amplification can occur and the subject is unconscious of this part of the visual field. However, neither are there visual representations in long-term memory corresponding to the blind part of the visual field, so when remembered visual scenes are attended this part of the visual field remains unconscious. The lack of discrepancy between ongoing and remembered visual consciousness explains subjects' lack of awareness of their visual deficit. In case (b) the only difference is that neural representations corresponding to the scotomic portion of the retinal field persist in long-term memory; when these representations are attended subjects become conscious of the remembered visual information that was then before what is now their blindspot. Such conscious visual experience contrasts with their phenomenal experience of their current, damaged visual field, and this contrast makes them aware of their deficit. (Note that this explanation is cogent because aside from their dysfunctional retinas, scotomic patients have normal neural activity, so in particular the neural basis for attentional amplification and broadcast to the global workspace is intact.) Finally, hemi-neglect patients have intact retinas and primary visual cortices, but are lesioned in parietal areas which may be involved in attentional processing. 
This hypothesis combined with the orthodox global workspace theory is consistent with the phenomenology of case (c), in which subjects report themselves unconscious of visual information concerning the neglected field and unconscious of this deficit, but the visual information is nonetheless available to other cognitive processes. In particular, although information concerning the neglected visual field presumably exists as intact neural representations in the brains of such subjects, their lesions prevent these representations from undergoing attentional amplification and becoming conscious; nonetheless, unconscious processing of these representations remains possible.

In case (c), the orthodox global workspace interpretation therefore rests upon two claims: (1) neglect patients maintain largely intact neural representations of visual information concerning their neglected field; and (2) parietal lesions prevent such representations from becoming conscious by interfering with the process of attentional amplification. We now turn to specific empirical findings which speak to the validity of these claims. As mentioned above, some evidence for claim (1) is afforded by behavioral studies involving priming. When hemi-neglect patients are shown pairs of images in their left and right visual fields, they report themselves unaware of the images in their neglected fields, which is consistent with the phenomenology (c) described above. But when given a word in the center of their visual fields and asked to say whether or not it accorded with the previously displayed images, neglect patients performed comparably to normal subjects. The implication is that the unconscious visual information triggers the same semantic priming in neglect patients as in normal subjects, indicating that the information undergoes ordinary semantic processing despite remaining unconscious (McGlinchey-Berroth et al., 1993).

In addition to such behavioral evidence, neuroimaging studies have also borne out predictions of neural activity consistent with claim (1). Rees et al. (2000) used functional magnetic resonance imaging to measure activity in the visual cortex of the hemi-neglect patient G.K., suffering from parietal lesions of the sort considered above. In this study, G.K. was shown objects in either of his visual fields, or both, and in the bilateral trials reported unawareness of objects in the neglected field. Importantly, G.K. and other hemi-neglect patients are not unconscious of stimuli in their neglected field in the absence of a stimulus in the other field. This provides a basis for comparing activation of the visual cortex in the hemisphere processing information from the neglected field (the ipsilesional hemisphere) both when that information is conscious (unilateral trials) and when it is unconscious (bilateral trials). The fMRI results showed conclusively that activation in the visual cortex of the ipsilesional hemisphere can occur without awareness of objects in the contralesional visual field. Moreover, the pattern of activation was found to be “strikingly similar” in the unilateral and bilateral trials (Rees et al., 2000). However, the fMRI study could only establish the statistical absence of visual cortical loci reliably activated by stimuli of which G.K. was aware but not by those of which he was unaware, or vice versa. Qualitative differences between the unilateral and bilateral trials were observed (namely, the size of the activated region), and possible differences in temporal (rather than spatial) activation patterns could not be ruled out by this study. Hence, while these neuroimaging results constitute forceful empirical evidence for claim (1) and thus bear out the orthodox global workspace interpretation of visual consciousness in hemi-neglect patients (Dehaene & Naccache, 2001), such evidence is not incontrovertible.
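
The “strikingly similar” activation comparison can be thought of as a high voxelwise correlation between trials in which the stimulus was seen and trials in which it was not. The following is a minimal sketch of that kind of comparison; the activation values are invented for illustration and are not data from the Rees et al. study.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two activation patterns over the same voxels."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical activation levels for the same ipsilesional voxels on
# unilateral (stimulus consciously seen) and bilateral (stimulus unseen) trials.
unilateral = [1.2, 0.8, 2.1, 1.5, 0.3, 1.9]
bilateral = [1.0, 0.7, 1.9, 1.4, 0.2, 1.6]

# A correlation near 1 would reflect "strikingly similar" spatial patterns,
# even though overall amplitude may differ between conditions.
assert pearson(unilateral, bilateral) > 0.9
```

Note that, as the paper's caveat suggests, a high spatial correlation of this kind leaves open differences in the size of the activated region and in temporal dynamics.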

There is also a body of empirical work supporting the claim (2) that parietal lesions prevent visual information in the neglected field from becoming conscious by interfering with attentional amplification (see Driver & Mattingley 1998 for a survey of findings on neglect). For example, Posner et al. (1984) studied the effects of various brain lesions on covert orienting of visual attention, meaning orienting visual attention without overt reorientation of the eyes or the head. The authors of this study instructed patients to detect a target at one of two possible positions in the visual field (contralesional or ipsilesional), but first used arrow cues to direct subjects' attention either to the correct target location or to another location. Reaction time to target was measured as a function of the location of the cue and the time between cue and target; these reaction times were then correlated with the extent and anatomical location of the patients' lesions. Parietal patients exhibited a unique reaction time pattern, with an especially marked increase in reaction time to trials in which an invalid directional cue was given for a target in the contralesional field. Such trials with invalid cues differ from trials with valid cues in that after an invalid cue has been given attention must be disengaged from an incorrect location before it can be oriented to the target. Posner et al. conclude that parietal lesions interfere particularly with the cognitive operation of disengaging attention from its current focus. Findings such as this do not demonstrate conclusively that parietal lesions prevent visual representations from becoming conscious in hemi-neglect patients by obstructing attentional amplification. 
Nonetheless, such findings do indicate a prominent role for the parietal cortex in the cognitive processes involved in reorienting attention, which makes the orthodox global workspace interpretation of hemi-neglect patients' phenomenology at least somewhat plausible.
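
The “disengage deficit” reported by Posner et al. amounts to an interaction in the reaction-time data: the cost of an invalid cue is far larger when the target falls in the contralesional field. A toy illustration of that comparison follows; the reaction times are hypothetical numbers chosen to show the pattern, not measurements from the study.

```python
# Hypothetical mean reaction times (ms) in a Posner-style covert-orienting task.
rt = {
    ("valid", "ipsilesional"): 350,
    ("invalid", "ipsilesional"): 420,
    ("valid", "contralesional"): 380,
    ("invalid", "contralesional"): 620,
}

def validity_cost(field):
    """Extra time needed after an invalid cue, i.e., the cost of disengaging
    attention from the wrongly cued location before reorienting to the target."""
    return rt[("invalid", field)] - rt[("valid", field)]

# The parietal-patient signature: disengaging toward the contralesional
# field is disproportionately slow.
assert validity_cost("contralesional") > validity_cost("ipsilesional")
```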

Visual masking. Another set of neuroimaging studies invoked by proponents of the orthodox global workspace view (Baars & Laureys, 2005) is the work of Dehaene et al. (2001). In this study functional magnetic resonance imaging and event-related potentials were used to measure activation in parts of the cortex known to be involved in the conscious processing of language, while subjects attempted to read a word flashed for approximately 10 milliseconds with and without masking. Masking is the phenomenon whereby subjects usually find themselves able to read the word in this situation, but fail to do so when distracting visual stimuli are presented in close proximity to the target. It was found that under masking conditions, the neural markers of conscious language processing, notably significant activation in the large-scale brain networks thought to comprise the global workspace, did not occur (Dehaene et al., 2001). Baars & Laureys (2005) argue that since activity in the visual cortex persists under masking, this study shows that activity in the visual cortex can occur without visual consciousness. This is consistent with the orthodox global workspace theory (since consciousness requires attentional amplification that masking may block), but not with the theory that PC has a neural basis consisting of activity in the visual cortex which is not broadcast to the global workspace.

A Theoretical Critique

Block (2005b) argues that none of the aforementioned empirical findings are inconsistent with a neural distinction between AC and PC. The neat account of various blindness phenomena offered by global workspace theory fails to disprove the existence of phenomenology without access because it conflates the two. In particular, the criterion for consciousness it presumes is the presence or absence of a reportable, and hence accessible, phenomenology. Whether or not hemi-neglect patients, for example, are phenomenally conscious of information in their neglected visual field, the fact that lesions in brain areas involved in attentional amplification of representations in the neglected field prevent access consciousness of those representations does not speak to any scientific issue concerning phenomenal consciousness of those representations. The same objection applies to the corroborating neuroimaging results of Dehaene et al. (2001), which moreover do not detect the presence or absence of recurrent neural activity in the visual cortex, a crucial feature of the theory that PC and AC are distinct. These arguments indicate that the adduced empirical evidence does not uniquely support the orthodox global workspace theory.

Empirical Evidence for Distinct Access and Phenomenal Consciousnesses

General Discussion

The theory that AC and PC are distinct is subject to a number of subtle philosophical objections on epistemic and metaphysical grounds. These issues will not be considered here, but it has been argued (Block, 2007; see also the included commentaries by philosophers and neuroscientists and Block's replies) that it is an empirical question whether the neural basis for PC contains the neural basis for AC (and hence whether AC is necessary for PC). We will presume there is no logical obstruction to a distinction between AC and PC for the remainder of our discussion.

Since global workspace theory is capable of explaining a wide variety of consciousness phenomena along the orthodox lines mentioned previously, a proponent of the theory that PC and AC are distinct and correspond to distinct neural machinery must adduce empirical evidence that the latter theory is better suited to explain. We now turn to the evidence that has been offered for this purpose by Block (2005; 2007) and Lamme (2003).

Signal Detection Theory

Supér et al. (2001) showed monkeys textured patches that either did or did not contain a target of altered texture in one corner, training the monkeys to saccade to the target when it appeared and to fixate on the center of the patch otherwise. They then used implanted “figure” and “ground” microelectrodes to measure activity in receptive areas of the primary visual cortex (V1) corresponding to the location of the target and another location on the patch. Finally, they recorded both the monkey's behavioral response (presence or absence of saccade towards the target upon its appearance) and neural “modulation” (change in activity measured by the electrodes upon the appearance of the target) under varying conditions of saliency (the distinction in texture between the target and the patch was varied along a continuous gradient). The authors found that the monkey's neural modulation was manipulable by varying the saliency and the number of “catch” trials in which no target was presented.
Moreover, with a very high saliency or a very low proportion of catch trials, correlation between neural modulation and saccade behavior was high; with a very low saliency or a very high proportion of catch trials, this correlation dropped almost to zero. The authors also note that in similar studies on monkeys anesthetized with isoflurane (but with their eyes open) the observed patterns of neural modulation measured by the figure and ground electrodes disappeared (Lamme et al., 1998). Supér et al. demonstrate using a form of analysis called signal detection theory that these findings are consistent with a model (corroborated by further experimentation) in which neural modulation is associated with a neural representation intermediate between visual perception and conscious access to the corresponding visual information (see Block, 2005).
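
Signal detection theory works by separating perceptual sensitivity from response bias. A minimal sketch of the standard sensitivity index d′ follows, computed from hypothetical hit and false-alarm counts (the numbers are invented to illustrate the method, not taken from the Supér et al. data).

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    The log-linear correction (adding 0.5 to each count and 1 to each total)
    avoids infinite z-scores when a rate would be exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for a high-saliency condition (target easy to detect)...
high = d_prime(hits=90, misses=10, false_alarms=5, correct_rejections=95)
# ...and a low-saliency condition (target barely distinguishable from ground).
low = d_prime(hits=55, misses=45, false_alarms=40, correct_rejections=60)

assert high > low > 0  # sensitivity tracks saliency, independent of bias
```

The point of the analysis in this context is that a detection-theoretic model lets one ask whether neural modulation behaves like a signal the animal has detected but not necessarily accessed.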

This remarkable experiment in neural signal detection theory is presented by Block (2005) as evidence for a distinction between the neural correlates of PC and AC. Drawing upon other findings linking neural modulation in V1 to recurrent activity in the visual cortex, as well as the finding that neural modulation in the monkeys of the Supér et al. study disappeared under anesthesia, he suggests that such modulation is plausibly involved in (if not partially constitutive of!) phenomenal consciousness. Lamme (an author of the Supér et al. study) shares this interpretation of the data (Lamme, 2003). Assuming this, the low correlation measured between modulation and saccade behavior under low saliency or high catch-trial frequency conditions indicates that this particular PC correlate can occur independently of conscious access, providing evidence that PC and AC are distinct on a neural level.

Of course, with respect to challenging the orthodox global workspace interpretation of consciousness, the results of this study are inconclusive. For by the orthodox global workspace theory the measured neural modulation is a neural correlate not of PC but of some preconscious visual representation, which nevertheless likely has a link to recurrent activity in the visual cortex. So as evidence for a PC-AC distinction, this experiment relies on independent evidence for a correlation between recurrent activity in the occipital cortex and PC.

Iconic Memory and Change Blindness Phenomena

Block (2007) argues that the results of several classical experiments lead to interpretations comprising a psychological-neuroscience “mesh” of plausible explanation, whereby the informational content of phenomenal consciousness can overflow that of access consciousness in a way explained by the hypothesis that recurrent loops comprise a neural basis for PC. The inference is that, combined with evidence such as the neural signal detection theory result just described, plus other psychological signal detection theory results (see Block (2005) for a review), a distinction between PC and AC and between their neural correlates is the best explanation cognitive science currently possesses for the observed data.

This subsection will outline the iconic memory and change blindness experiments which suggest a cognitive overflow of PC beyond the capacity of AC. Subjects in the experiment of Sperling (1960) were briefly shown an array of symbols and reported a persistent (on the order of hundreds of ms) phenomenal perception of the entire array; however, when asked to verbally report the precise symbols in the array, subjects were only able to recall about four of them. Interestingly, which four were recalled depended on which four subjects were asked to recall. When the pitch of a tone played immediately after the visual stimulus was removed dictated whether, e.g., the highest, middle, or lowest row of symbols was to be reported verbally, subjects accurately reported four symbols from the correct row. This experiment was then adapted by Landman et al. (2003) to a change blindness paradigm (Block, 2007; Gray, 2007) as follows. Subjects were shown an array of rectangles for 0.5 sec; the array was then replaced by a blank screen for a variable period, followed by another array of rectangles in which one indicated object may or may not have changed orientation. Subjects were asked to say whether or not an orientation change took place. As in ordinary change blindness experiments, the blank screen prevents subjects from detecting changes in objects whose original orientations are not held in working memory. After correcting for guessing, Landman et al. found that subjects displayed a limited capacity to keep track of the orientations of only about four rectangles in the array. But as in Sperling (1960), subjects reported phenomenal awareness of perceiving all or most of the rectangles.
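
Sperling's partial-report logic can be made concrete with a little arithmetic: since the cue arrives only after the display is gone, any row could have been probed, so performance on the cued row estimates the proportion of the whole array that was available. A sketch with illustrative figures follows (the "three of four" value is a round number in the spirit of Sperling's results, not his exact measurement).

```python
def items_available(reported_per_row, row_size, total_items):
    """Partial-report estimate: the fraction recovered from the cued row,
    scaled up to the whole array, since the cue could have picked any row."""
    proportion = reported_per_row / row_size
    return proportion * total_items

# A 3x4 array of symbols; subjects report about 3 of the 4 symbols
# in whichever row the tone happens to cue.
estimate = items_available(reported_per_row=3, row_size=4, total_items=12)

assert estimate == 9  # far above the ~4-item whole-report (access) limit
```

This gap between the partial-report estimate and the whole-report limit is exactly the "overflow" that Block's argument turns on.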

Block (2007) suggests that both the Sperling and Landman et al. experimental paradigms indicate that the informational content of phenomenal consciousness overflows the cognitive capacity of conscious access. This takes for granted that subjects' self-reported phenomenal experiences are accurate, and that they hold an entire array of information in phenomenal consciousness even though only a small portion is available to AC. This “meshes” with the neuroscience hypothesis that recurrent loops provide a neural basis for PC, which is supported by evidence such as the results of Supér et al. (2001) and other studies surveyed in Block (2007), as follows. On this hypothesis various coalitions of neurons compete for attentional amplification and concomitant broadcast to the global workspace, placing them in access consciousness. This competition entails a loss of information during the amplification stage. Consequently, this hypothesis can explain the psychological observation of “information overflow.” On the other hand, the orthodox global workspace theory of unitary, accessible consciousness provides no such ready explanation. Therefore, the distinction between PC and AC should be accepted as an inference to the best explanation of the available data.

Certainly a weak point of this argument is that it depends on subjects in these experiments really being phenomenally conscious of the larger array of objects. Block (2007) argues that there is no reason to reject their reports of such phenomenal consciousness as faulty, but Dehaene et al. (2006) suggest that these reports might be the result of a type of illusion (which Block (2007) calls a hyperillusion). To wit, content which is merely potentially accessible, and hence (by the orthodox global workspace interpretation) merely potentially phenomenally conscious, might seem phenomenally conscious because when asked to report whether they saw the entire array, subjects attentionally amplify the entire array in a “dilute” way that loses information regarding the precise content of the characters in the array (Dehaene et al., 2006). We will not consider this objection further here, as it remains to be empirically tested.

Methodological Concerns

Since the crux of the empirical argument in support of a PC-AC distinction rests upon an inference to the best explanation from available psychological and neuroscientific data, it is worth examining the scientific reliability of such forms of inference. In any such situation the danger with using inference to the best explanation to support a single given hypothesis is that one overfits the available data. A more reliable approach is to generate a large variety of possible explanations and compare their fit to the given data points. Viewed from this standpoint, the “mesh” argument of Block (2007; namely, that PC overflow of AC information capacity meshes with known neural facts about recurrent loops in a way that empirically supports both phenomena) is inadequate (Hulme & Whiteley, 2007).
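
The model-comparison point can be illustrated with a standard penalized fit criterion such as AIC, which rewards goodness of fit but charges for extra free parameters, guarding against exactly the overfitting worry raised above. A toy example with invented fit values:

```python
from math import log

def aic(rss, n, k):
    """Akaike information criterion for least-squares fits:
    AIC = n * ln(RSS / n) + 2k.  Lower is better; the 2k term
    penalizes each additional free parameter."""
    return n * log(rss / n) + 2 * k

# Two hypothetical explanations fit to the same n = 50 data points:
# the elaborate model fits slightly better but uses many more parameters.
simple = aic(rss=12.0, n=50, k=2)
elaborate = aic(rss=11.5, n=50, k=8)

assert simple < elaborate  # the simpler explanation wins on balance
```

The methodological moral is not that any particular criterion settles the PC-AC debate, but that "best explanation" claims should be scored against explicit rivals rather than asserted for a single favored hypothesis.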

Conclusions and Directions for Further Research

As the preceding analysis illustrates, a variety of subtle methodological questions, as well as a collection of perplexing philosophical quandaries, complicates the dispute over the putative distinction between phenomenal and access consciousness. As the vehemently opposed and vehemently supportive commentaries on the recent work of Block (2007; see the same paper for the commentaries) illustrate, there is a decided lack of scientific and philosophical consensus on the relevant questions. Indeed, the most relevant empirical evidence, such as the neuroimaging studies done to date and the neural signal detection theory study of Supér et al. (2001), is inconclusive in supporting or refuting the existence of a distinction between the neural correlates of PC and AC. Behavioral and neuroimaging studies largely bear out explanations of phenomena involving access consciousness according to the global workspace theory, but do not speak to the correctness of the orthodox version of this theory in ruling out phenomenal consciousness without access. On the psychological side, the question remains whether iconic memory and change blindness paradigms suggest an overflow of the information content of PC beyond that of AC, or merely an illusion of phenomenal perception in certain circumstances which can be explained in the context of an orthodox interpretation of global workspace theory. The issues surrounding this question seem too philosophically fraught to be resolved through simple psychological experimentation. (For example, what distinguishes an “illusion of perception” from a perception?)

Certain further empirical work would be valuable, however. Ideally, sufficiently sensitive non-invasive electrical probes will be developed to perform an experiment along the lines of Supér et al. (2001) with humans engaged in a task along the lines of Sperling's (1960). Alternately, perhaps existing neuroimaging techniques (such as fMRI) can be refined to measure the recurrent neural activity, potentially free from top-down attentional amplification, in the visual cortices of such subjects. If such activity can be measured in humans independently of conscious access, it would also be interesting to see whether it persists under various eyes-open states of sleep and anesthesia. While such experiments will not convince a skeptic that the neural mechanisms in question correlate to phenomenal consciousness rather than a preconscious intermediate representation between perception and access consciousness, they would at least indicate whether this intermediate representation possesses the most salient features of what we intuitively refer to as PC (for example, seeming like PC, even if such an appearance is illusory).


References

Baars, B. (2002). The conscious access hypothesis: origins and recent evidence. Trends in Cognitive Sciences 6(1), 47-52.
Baars, B., & Laureys, S. (2005). One, not two, neural correlates of consciousness. Letter, Trends in Cognitive Sciences 9(6), 269.
Block, N. (1995). How many concepts of consciousness? Behavioral and Brain Sciences 18(2), 272-284.
Block, N. (2005). Two neural correlates of consciousness. Trends in Cognitive Sciences 9(2), 46-52.
Block, N. (2005b). The merely verbal problem of consciousness. Trends in Cognitive Sciences 9(6), 270.
Block, N. (2007). Consciousness, accessibility, and the mesh between psychology and neuroscience. (With commentaries and reply.) Behavioral and Brain Sciences 30, 481-548.
Crick, F., & Koch, C. (1995). Are we aware of neural activity in primary visual cortex? Nature 375, 121-123.
Dehaene, S., et al. (2001). Cerebral mechanisms of word masking and unconscious repetition priming. Nature Neuroscience 4(7), 752-758.
Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition 79, 1-37.
Dehaene, S., et al. (2006). Conscious, preconscious, and subliminal processing: a testable taxonomy. Trends in Cognitive Sciences 10(5), 204-211.
Driver, J., & Mattingley, J. (1998). Parietal neglect and visual awareness. Nature Neuroscience 1(1), 17-22.
Gopnik, A. (2007). Why babies are more conscious than we are. (Commentary on Block 2007.) Behavioral and Brain Sciences 30, 503-504.
Gray, P. (2007). Psychology. New York: Worth Publishers.
Hulme, O., & Whiteley, L. (2007). The “mesh” as evidence: model comparison and alternative interpretations of feedback. (Commentary on Block 2007.) Behavioral and Brain Sciences 30, 505-506.
Koch, C., & Tsuchiya, N. (2006). Attention and consciousness: two distinct brain processes. Trends in Cognitive Sciences 11(1), 16-22.
Lamme, V., et al. (1998). Figure-ground activity in primary visual cortex is suppressed by anaesthesia. Proceedings of the National Academy of Sciences, U.S.A. 95, 3263-3268.
Lamme, V., & Roelfsema, P. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences 23, 571-579.
Lamme, V. (2003). Why visual attention and awareness are different. Trends in Cognitive Sciences 7(1), 12-18.
Landman, R., Spekreijse, H., & Lamme, V. A. F. (2003). Large capacity storage of integrated objects before change blindness. Vision Research 43(2), 149-164.
McGlinchey-Berroth, R., et al. (1993). Semantic priming in the neglected field: evidence from a lexical decision task. Cognitive Neuropsychology 10, 79-108.
Pinker, S. (1997). How the mind works. New York: W.W. Norton & Co.
Posner, M., et al. (1984). Effects of parietal injury on covert orienting of attention. Journal of Neuroscience 4, 1863-1874.
Rees, G., et al. (2000). Unconscious activation of visual cortex in the damaged right hemisphere of a parietal patient with extinction. Brain 123(8), 1624-1633.
Sperling, G. (1960). The information available in brief visual presentations. Psychological Monographs: General and Applied 74(11, Whole No. 498), 1-29.
Supèr, H., Spekreijse, H., & Lamme, V. (2001). Two distinct modes of sensory processing observed in monkey primary visual cortex (V1). Nature Neuroscience 4(3), 304-310.
Wiens, S. (2007). Concepts of visual consciousness and their measurement. Advances in Cognitive Psychology 3(1-2), 349-359.

Consciousness is a mystery to us, which to me sounds like quite a contradiction. We are unable to define consciousness precisely, although we can speculate about what it might be. Humans set themselves apart from other animals by claiming “I think, therefore I am.” The problem with this view is that we do not know whether animals have this ability, even in a less robust form. The other side of the coin is that we see no animals delving into philosophy, logic, or other complex forms of thought. This complexity of thought seems to be more than just a greater number of neurons or a higher level of connectivity between them. The result seems to be greater than the sum of its parts.

The purpose of consciousness is certainly up for speculation. Like evolution, consciousness emerged slowly over time and became more complex as brains got bigger. I am not sure, though, whether this directly suggests that the size and complexity of the brain determines the complexity of thought. I would go so far as to say that what we observe as consciousness is actually just a complex effect of a combination of complex senses, complex chemical reactions, a large number of neurons, and a complex network of connections between them. Just as the wind is an effect of the entire mass of air around the planet, consciousness is an effect of the entire brain, including the body. With this definition in mind, I would like to conclude that consciousness has no intrinsic purpose stemming from any one part of the body or brain. It is a very complex illusion which is the effect of a very complex system.

The problem of thought, and how it fits with philosophy and logic, is even more elusive. Assuming that consciousness is an illusion does not imply that it lacks meaning. In everything that we communicate to each other, for example, we are able to understand each other (most of the time) because we all agree on a specific system. Within that system there is meaning, although outside of the system is quite another story. Language is an elaborate alias for our thoughts, which also have meaning to us, because we give them meaning just as we do language. Philosophy and logic are both culminations of a long lineage of human thought, and they are themselves systems branching off of thought and language. We give them meaning too, but here comes the blow-your-brain part. Sometimes we call these things absolute truths, and we all agree that logic is all about truth. I believe that this only holds true within the system, but not outside of it, since we are the ones who give it meaning.

I do not mean to discount everything that humans have been working on for thousands of years. I simply would like to point out that everything is relative. If we change the meaning of logic, or change any of the rules in any one of our intricate systems, the system will change, and we will change; though we may believe in that system, it changes as we change. In that respect, I feel that we must be very careful what we give meaning to, for we may get ourselves into a very difficult loop.

What is consciousness?
And why and how and what for?

The drive to create. That is what it does. It desires. It desires to take chaos and order it. Not to create, then. To recycle. To reconfigure. That is what it does. It takes the data, all the bits and parts, out of storage and it invents purpose from it. That's what it does. It tricks you, it fools you. You are separate from it but it redesigns you. It is the life urge. It is the preprogrammed restlessness that makes sure that when you are happy, satiated, fulfilled, quiet, and done, you will get up and have to do something because to sit still, even (or especially) in the face of contentment, is to die.

But no. That is not consciousness. Lower beings have this drive. Consciousness is not the drive. Consciousness is the awareness of the drive. Consciousness is the power to notice the drive. Consciousness is the tiny bit of feedback you get to have. All of the forces inside and outside that drive you -- like a machine, an infinitely programmable machine that buzzes and whirs and creates and survives -- push you in the general direction of survival. But consciousness is your ability to say, "Wait? This seems wrong. This seems like death." Consciousness is a fail-safe. Because the infinite potential of the human mind, the infinite imaginative power that can solve any problem with enough time and flexibility cannot become complacent, cannot go to the place where the game is won, where the challenges are all met or put aside because to go there is death.

"Get up, get up, get up!" the urge screams.
"But why? I am happy. My needs are met," you say. "There are no more challenges worth meeting."

But this is death. Game over. Win or lose, the brain cannot stop seeking challenge. Why? Because the last impossible challenge, the one more novel experience is death.

Consciousness is the fail-safe, the anti-death. Consciousness is the ability to shrug and say, "Well, I guess I can find one more item for my to-do list. Maybe then I'll be happy for another minute." Consciousness is the ability to pick a purpose when understanding renders inspired action impossible. One more task. One more purpose. One more goal. Until infinity or death. Win or lose, just live.

One definition of “consciousness” is “a set of phenomena” (as in perception, conscious experiences, the first-person point of view).

Since it is completely unclear what that might even be from a third-person point of view, it is practically impossible to explain with any guarantee (or, more precisely, certainty) of the explanation being understood correctly.

Additionally, there are at least three distinct meanings that match this definition.

It is related to the fact that one has only subjective perception of the world (the “objective” or “material” world is given only through subjective experiences). For understanding that, the ideas of solipsism and the “brain in a jar” (or, more widely known, The Matrix) might be helpful.

Also, ideas of understanding the world as a set of phenomena are elaborately described in Husserl's phenomenology, but they are described unclearly and might be just as hard to understand properly.

Regarding others (other people in particular), there are only two things that suggest they might have consciousness.

An additional problem with consciousness is that, so far, it has not mattered in practice.

A predicted topic where it suddenly becomes important is mind uploading and, because of that, cryonics. But however important it may be, it is hard to draw any conclusions about whether you would perceive yourself as uploaded or would cease having any phenomena, i.e., die. This is also known as the duplicates paradox; unclear but related is the mind/body problem.

Given the duplicates paradox, the practical problem of uploading is finding out what consciousness is from the third-person perspective, i.e., finding its neural correlate.

Given the aforementioned phenomenal judgments, it might be treated as an unsolved scientific problem.

Con′scious·ness (?), n.


The state of being conscious; knowledge of one's own existence, condition, sensations, mental operations, acts, etc.

Consciousness is thus, on the one hand, the recognition by the mind or "ego" of its acts and affections; -- in other words, the self-affirmation that certain modifications are known by me, and that these modifications are mine. Sir W. Hamilton.


Immediate knowledge or perception of the presence of any object, state, or sensation. See the Note under Attention.

Annihilate the consciousness of the object, you annihilate the consciousness of the operation. Sir W. Hamilton.

And, when the stream Which overflowed the soul had passed away, A consciousness remained that it had left. . . . images and precious thoughts That shall not die, and can not be destroyed. Wordsworth.

The consciousness of wrong brought with it the consciousness of weakness. Froude.


Feeling, persuasion, or expectation; esp., inward sense of guilt or innocence.


An honest mind is not in the power of a dishonest: to break its peace there must be some guilt or consciousness. Pope.


© Webster 1913.
