Philosopher of mind John Searle begins his book The Mystery of Consciousness with the following reflection:
The enormous variety of stimuli that affect us – for example, when we taste wine, look at the sky, smell a rose, listen to a concert – trigger sequences of neurobiological processes that eventually cause unified, well-ordered, coherent, inner, subjective states of awareness or sentience. Now what exactly happens between the assault of stimuli on our receptors and the experience of consciousness, and how exactly do the intermediate processes cause the conscious states? (3)
Without resorting to dualism between body and mind, and without eliminating human subjectivity in favor of reductive materialism, Searle attempts not to explain consciousness, but to move us toward exploring consciousness as an emergent property of a physical, embodied system: “The liquidity of water is a good example: the behavior of the H2O molecules explains liquidity, but the individual molecules are not liquid” (16). According to Searle, consciousness does indeed emerge from the material operations of the brain. However, he maintains, “Consciousness has a first-person or subjective ontology and so cannot be reduced to anything that has third-person or objective ontology” (212). He argues that our dualist/materialist vocabulary of mind stymies our ability to formulate a coherent response to the hard problem of consciousness.
I am interested in how we might understand the differences between human consciousness and machine “consciousness.” I believe a character in William Gibson’s Neuromancer offers a literary example of these concepts. The Dixie Flatline, who exists only as a “ROM personality construct,” utilizes language, responds to verbal commands, carries out autonomous tasks, and analyzes vast amounts of data. “Dix” would certainly pass the Turing test as developed in Alan Turing’s 1950 paper “Computing Machinery and Intelligence”: a simple game in which a human interrogator attempts to distinguish between a computer and another human player through text-based queries. If the interrogator incorrectly declares the computer a human, according to Turing, we might then think of the computer as an “intelligent” machine. If our criterion for machine intelligence is merely the ability to process and manipulate a wide array of linguistic information, the Flatline would indeed possess the capacity for thought as defined by the Turing test.
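The structure of Turing’s game is simple enough to sketch in code. The following is a minimal, purely illustrative simulation; the `machine_reply` and `human_reply` stubs are my own placeholders, not anything from Turing’s paper, and any text-generating program could stand behind the machine’s door:

```python
# A minimal, illustrative simulation of Turing's imitation game:
# the interrogator exchanges text with two hidden parties and must
# guess which one is the machine.

import random

def machine_reply(question: str) -> str:
    # Placeholder for any text-manipulating program; a real entrant
    # would generate its answer here.
    return "I would rather not say."

def human_reply(question: str) -> str:
    # Placeholder for the hidden human player's typed answer.
    return input(f"[hidden human] {question}\n> ")

def imitation_game(questions):
    # Randomly assign the machine to door A or B so the interrogator
    # cannot rely on position.
    doors = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        doors["A"], doors["B"] = doors["B"], doors["A"]
    for q in questions:
        print(f"Q: {q}")
        for door in ("A", "B"):
            print(f"  {door}: {doors[door](q)}")
    guess = input("Which door hides the machine, A or B? ").strip().upper()
    actual = "A" if doors["A"] is machine_reply else "B"
    if guess == actual:
        print(f"Correct: the machine was behind door {actual}.")
    else:
        print(f"Fooled: the machine was behind door {actual}.")

imitation_game(["Do you ever dream?", "What does wine taste like?"])
```

Notice that nothing in this protocol inspects what the answers mean; it measures only whether the machine’s text output is indistinguishable from a human’s. However, it is precisely this narrow definition of conscious thought that Gibson’s character poignantly undermines. First, the Dixie Flatline forces us to confront the gulf between syntax and semantics. Searle explains: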
Computation, so defined, is purely a syntactical set of operations, in the sense that the only features of the symbols that matter for the implementation of the program are the formal or syntactical features. But we know from our own experience that the mind has something more going on in it than the manipulation of formal symbols; minds have contents. For example, when we are thinking in English, the English words going through our minds are not just uninterpreted formal symbols; rather, we know what they mean. For us the words have a meaning, or semantics. The mind could not be just a computer program, because the formal symbols of the computer program by themselves are not sufficient to guarantee the presence of the semantic content that occurs in actual minds. (10)
The reverse holds true as well: a computer program cannot be a mind if it merely manipulates symbols without possessing semantic content. Searle’s famous rebuttal to Turing’s account of machinic thinking makes this explicit: “The Chinese Room Argument – as it has come to be called – has a simple three-step structure: 1. Programs are entirely syntactical. 2. Minds have a semantics. 3. Syntax is not the same as, nor by itself sufficient for, semantics. Therefore programs are not minds. Q.E.D.” (11-12). Like the occupant of the Chinese room who successfully manipulates symbols without understanding their semantic content, the Dixie Flatline appears to be conscious but does not possess a self, a locus of subjective experience in which words come to “mean” something. Circuits and silicon cannot recreate the experience of embodied consciousness or the feeling of qualia. For Dix, the biological character of the human sensorium has been completely liquidated, and with it human feeling and the meaning sedimented in the formal symbols of our language.
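To make Searle’s point concrete, consider a toy program of my own devising (an illustration, not Searle’s): it answers questions by matching uninterpreted strings against a rule book, and within its tiny domain it behaves as if it understood Chinese, yet nothing in it ever touches meaning:

```python
# A toy "Chinese room": input symbols are matched against a rule
# book and the paired output symbols are returned. The program
# manipulates the strings purely formally; at no point are the
# symbols interpreted.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "天空是什么颜色？": "天空是蓝色的。",  # "What color is the sky?" -> "The sky is blue."
}

def chinese_room(symbols: str) -> str:
    # Pure syntax: look up the incoming string, return its paired string.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # a fluent reply, with no understanding anywhere
```

Scaled up far enough, such a rule book might pass Turing’s test within its domain, and Searle’s syllogism would still apply: the syntax is all there is.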
In short, I am interested in whether there exists some kind of “ontological” distinction between human consciousness and machinic intelligence.