Levels of Representation in Mind/Software and Brain/Computer
Professor Nicholas V. Findler
Minds, Brains and Programs CSC 591, ASU
19 November 1987 (reformatted and edited December 2002)
Recently, in cognitive science circles, there has been a resurgence of interest in the architecture of the brain as inspiration for computer models of human “intelligent” behavior. Along with this interest, there are assumptions, beliefs and hopes that, with a new level of focus, there will come breakthroughs that will shed light on the mysteries of brain and mind. Have there been successes that could transform beliefs into testable hypotheses? Is mind solely a function of brain? If so, where does brain end and mind begin? What’s wrong with focusing solely on the conceptual level? These are some of the questions addressed, if not answered, in this paper, the catalyst being the article, “Minds, Brains and Programs” by John Searle.
1.1. Searle, Symbols and Syllogisms
We begin with Searle’s Chinese Room thought experiment: A person is locked in a room and given three batches of Chinese characters. The person is a fluent English speaker but understands no Chinese, neither written nor spoken. The subject is also given a large set of rules, written in English, which enable him to produce some Chinese characters in response to the characters given him. The people giving him the characters call the first batch “a script,” the second a “story,” the third “questions,” the rules in English “the program” and his responses “answers to the questions.” These designations are unknown to the subject but this doesn’t affect his ability to produce intelligent responses. In fact, the “answers to the questions are absolutely indistinguishable from those of native Chinese speakers” (Searle, 1980, p. 418).
Stories in English are also presented and questions asked. For this, the subject is only to use his native language ability, common sense etc. to answer the questions about the English stories. After a time, he gets so good at obeying “the programs” and the “programmers” get so good at writing them that “the answers to the Chinese questions and English questions are equally good” (Searle, 1980, p. 418). Searle suggests that, in the case of the English stories, there is real understanding going on (both with the subject and the people outside the room) while in the Chinese case the subject is simply following formal rules in order to manipulate meaningless (from the subject’s point of view) symbols. Thus, outside the room is the only place where there is true understanding occurring.
Searle draws an analogy between the person manipulating the Chinese symbols and programs created by artificial intelligence (AI) researchers and concludes: “…in the literal sense, the programmed computer understands what [a] car and [an] adding machine understands, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.” And what’s more, “[t]he example shows that there could be two ‘systems,’ both of which pass the Turing test, but only one of which understands” (Searle, 1980, p. 419).
The Turing test is a thought experiment created by British mathematician, cryptologist and AI pioneer, Alan Turing. In Turing’s scenario, a person sits at a computer terminal and communicates with two other participants in the test exclusively through typed responses using the terminal. One of the other two participants is a human and the other is a computer program designed to possess some commonsense, real-world “knowledge” and to understand and produce English language conversation via text. The question is: Can the first person tell which of the other two is human and which is the program? If the program can produce responses that are sufficiently human-like that it can pass for a human, the program is said to have “passed” the Turing test.
1.3. SAM & PAM: A sham?
Searle posits two flavors of AI, namely: STRONG & weak. According to weak AI, computers and their programs are tools to aid cognitive science researchers in testing hypotheses about mind/brain. They make no claims that these programs think or understand. STRONG AI, on the other hand, claims, “the appropriately programmed computer really is a mind…and can be literally said to understand and have other cognitive states” (Searle, 1980, p. 417). Now let’s look at a specific example of two AI programs and some of the claims of their creators.
SAM & PAM are story “understanding” programs that were written at Yale University, under the direction of Roger Schank. These programs display their ability to comprehend simple stories by first reading them and then answering questions that sometimes require the use of logical inference and other human thought processes. A sampling of the claims and caveats from their creators:
• “…cognitive simulation”
• “…they are all rather artificial”
• “PAM…really doesn’t know what a story is about”
• “SAM has achieved a reasonable depth of comprehension”
• “…capable of making inferences”
• “…a reasonable level of understanding”
• “…fills in causal relations”
• “…PAM’s understanding of…motivation”
• “…PAM’s knowledge of goal relationships”
• “…PAM has made a prediction”
The primary “knowledge” structure is the Script, which consists of:
• Stereotypical or default sequences of events
• Exceptions and possible error situations
• Both procedural and declarative descriptions (Schank & Riesbeck, 1981).
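The flavor of a Script can be conveyed with a toy sketch. The code below is purely illustrative (it is not Schank & Riesbeck’s implementation; the restaurant events and the function name are invented), but it shows how a stereotyped event sequence lets a program answer questions about facts a story never states:

```python
# Toy illustration of a Script's "stereotypical or default sequence of events".
# NOT Schank & Riesbeck's code; events and names invented for illustration.

RESTAURANT_SCRIPT = [
    "customer enters restaurant",
    "customer sits down",
    "customer orders food",
    "customer eats food",
    "customer pays bill",
    "customer leaves restaurant",
]

def expand_story(mentioned_events):
    """Fill in the default events between the first and last events a story
    mentions; unstated steps are simply assumed, SAM-style."""
    indices = [RESTAURANT_SCRIPT.index(e) for e in mentioned_events]
    return RESTAURANT_SCRIPT[min(indices): max(indices) + 1]

# A two-sentence story: the customer entered, then left.
inferred = expand_story(["customer enters restaurant",
                         "customer leaves restaurant"])

# "Did the customer eat?" is answered from the script's defaults,
# even though the story never said so.
print("customer eats food" in inferred)  # True
```

The “inference” here is nothing more than indexing into a canned sequence, which is exactly the sort of formal manipulation Searle has in his sights.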
According to Searle, the Chinese Room thought experiment shows that the claims of STRONG AI are invalid because “whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything. No reason whatever has been offered to suppose that such principles are necessary or even contributory, since no reason has been given to suppose that when I understand English I am operating with any formal program at all” (Searle, 1980, p. 418). The Chinese room, according to Searle, lacks an essential quality of human consciousness, namely, intentionality, which consists of “all of those mental states (whether conscious or unconscious) that refer to, or are about, the world apart from the mind.”
1.4. Searle’s Reasoning Process in Syllogistic Form
Premise 1: Brains cause minds.
Premise 2: Syntax is not sufficient for semantics.
Premise 3: Computer programs are entirely defined by their formal, or syntactical, structure.
Premise 4: Minds have mental contents; specifically, they have semantic contents.
Conclusion 1: No computer program by itself is sufficient to give a system a mind.
(This follows from premises 2, 3 and 4)
Conclusion 2: The way that brain functions cause minds cannot be solely in virtue of
running a computer program. (Follows from conjoining the first premise with the first conclusion)
Conclusion 3: Anything else that caused minds would have to have causal powers at least
equivalent to those of the brain. (A trivial consequence of premise 1)
Conclusion 4: For any artifact that we might build which had mental states equivalent to
human mental states, the implementation of a computer program would not by
itself be sufficient. Rather the artifact would have to have powers equivalent to
the powers of the human brain. (Follows from conclusions 1 and 3)
(Searle, 1984, pp. 39-41).
So we can see that Searle does believe that a manmade artifact could, in principle, be built that could justifiably be said to think, understand, have intentions, etc. “[A]ssuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours…if you can exactly duplicate the causes, you could duplicate the effects” (Searle, 1980, p. 422).
Basically, Searle’s argument boils down to this: ”[Y]ou can’t understand Chinese if you don’t know the meanings of any of the words in Chinese” (Searle, 1980, p. 454). Sounds true enough. You won’t catch me arguing with that. I don’t think the guy in the room understands Chinese. He doesn’t even have to know what Chinese is as long as he can follow the rules.
Let’s think about Searle’s rules:
1. Searle has basically conceded the whole Newell & Simon classical AI paradigm: that intelligent behavior, in this case in the form of perfect communication skills in a human language, can be produced by a formal system using rules to manipulate symbols. No actual AI program has ever been able to achieve this level of ability. So, by posing his thought experiment in Newell & Simon terms, Searle is accepting, in principle, the validity of their model.
Any researcher working on computational linguistics would gladly sell his soul to Noam Chomsky for a set of rules that could pass the Turing test. Indeed, the reason Searle finds the AI researchers totally unmoved by his argument is that the scenario he has described would be the fulfillment of many of their aspirations. It would certainly cause the grant money to flow. Let’s face it, a lot of researchers just need results, and to that end, an engineering approach rather than a psychologically plausible one would be just fine. It is only because Searle has endowed the Chinese Room with such great ability, i.e., passing the Turing test by performing as well as human speakers of Chinese, that there is any philosophical dilemma. Thus, his argument degenerates to that of some compulsive nit-picker who complains that, although airplanes are useful inventions, they can’t really be said to fly because they don’t flap their wings.
Yes, I realize that the Turing test uses nothing other than “naked behaviorism” as the sole criterion for determining an artifact’s intelligence but, really, what else can we use for such an evaluation?
2. How does Searle determine that a person has intentionality and understanding but that a computer program doesn’t? Here’s where he’s cheating. He uses two different methods. For people, he observes their behavior, and through projection, he ascribes intentionality to them the same way McCarthy ascribes it to thermostats (Searle, 1980, p. 420). Of course, they will only be granted this status if their responses fall within a certain range of expectations. All Searle needs to know about software is that it is software: “My objection would hold against any program at all, qua formal computer program” (Searle, 1980, p. 452).
It is only the fact that Searle knows the man in the room understands no Chinese and is only following formal rules that any philosophical dilemma has arisen. Without this a priori knowledge, he would have used the same behavioral criteria that he uses to determine whether a human exhibits intentionality and understanding, and concluded that there must be a fluent Chinese speaker in the room producing the responses.
I call this the “Argument from Detailed Knowledge.” It goes like this: Once I understand the inner workings of something, it loses its charm, mystery and humanity. It becomes just another mechanistic artifact. If science describes the functioning of the human body in purely physical terms, as though we’re just soulless automatons, something is missing. Something has been lost. Surely we are more than some Newtonian “meat machine” programmed by Darwinian selection. Aren’t we? There’s got to be some non-material essence to validate my ennobled self-image. Doesn’t there?
3. Searle has characterized the man in the room as a mere processor of symbols, blindly following rules with no understanding going on at his level, but yet giving the appearance of intelligence on another level. While I do agree with Searle that this is a fair description of the current state of AI programs, I also can’t help noticing that he might just as well have been describing a brain. Aren’t neurons, like the man in the room, mere processors of signals, blindly following their own biologically imposed rules with no understanding going on at that level but yet producing intelligence on another level?
It boils down to this: Brains don’t understand but minds do. Similarly, the man in the Chinese room doesn’t understand the symbols he is pushing around, but the room, (analogous to the brain) exhibits understanding (a mind) through its flawless Chinese communication skills.
Compare these two quotes:
1. “Because the formal symbol manipulations by themselves don’t have any intentionality; they are quite meaningless; they aren’t even symbol manipulations, since the symbols don’t symbolize anything” (Searle, 1980, p. 422).
2. “The brain uses stereotyped electrical signals to process all the information it receives and analyzes. The signals are virtually identical in all nerve cells. They are symbols that do not resemble in any way the external world they represent…any more than the letters d o g, taken together, resemble a spotted Dalmatian” (Kuffler, Nicholls, & Martin, 1984).
The prosecution rests.
I agree that no AI program “understands” the symbols it pushes around. No program is self-conscious and none can even pass the Turing test. But, no current AI program has the ability that Searle has given to his mythical Chinese Room, either. Using the same criteria we use to ascribe cognitive states to people, the Chinese room would be considered intelligent by 99,999 out of 100,000 people. (The one dissenter being, statistically, a philosopher.) Whenever you are dealing with a hierarchical system wherein there is a level that can be described “semantically” you can always look below that level and find a “syntactic” level. In this case, the man in the room represents the syntactic level and to complain that he has no understanding of what he is doing is analogous to Searle lamenting the fact that minds have intentionality but neurons are just dumb processors of signals.
Searle has suggested that more progress could be made in cognitive science if its models were more biologically realistic. This is a very intuitively appealing idea: After all, if you simulate the brain at a sufficient level of detail, the mind is sure to follow. In the next section, I will discuss the brain with a focus on the function of neurons and their value as a computational model.
2. Brain: The Machinery of the Mind
“[The brain] is an evolutionary development over hundreds of millions of years…this human brain that does not look too distinguished on the outside, weighing about 1.5 kilograms, is without any qualification the most highly organized and the most complexly organized matter in the universe” (Eccles, 1977).
Given this proclamation, let’s look at a few statistics about the brain. It is estimated that the number of neurons in the brain is on the order of 10¹⁰ to 10¹¹. Each neuron receives input from 1,000 to 100,000 neurons and sends signals to 1,000 to 100,000 other neurons. “…[T]here are perhaps hundreds of kinds of neurons, …many different kinds of neurotransmitters and…there are different systems in which different of these neurotransmitters are dominant” (Rumelhart & McClelland, 1986, p. 137).
The firing rates of neurons are measured on the order of milliseconds: “perhaps 10s of milliseconds. Thus, the basic hardware of the brain is some 10⁶ times slower than that of serial computers” (Rumelhart & McClelland, 1986, p. 130).
For example, when an object enters our visual field we can recognize it in approximately half a second. It takes about 25-50 milliseconds for information to be sent from the photoreceptors to the cortex “and several hundred milliseconds are required for the motor system to produce a response, which leaves about 200-300 milliseconds for visual processing” (Sejnowski, 1986, p.378).
At these speeds, about 100 serial time steps are all there is time for. This is what is known as the 100-step program constraint, which is a bottleneck for computer simulation “because the best AI programs are not nearly as general and require millions of computational time steps” (Feldman, 1985, p. 277). This constraint also limits the amount of information passed between processing elements to a few bits rather than the large symbolic structures that are ubiquitous in traditional AI programs. It is the massive parallelism of neural processing that reconciles the apparent paradox of the slowness of neuron firing rates and the speeds at which things get done.
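The arithmetic behind the 100-step constraint can be spelled out. In the sketch below, the recognition, transmission and motor figures come from the passage above; the per-step time is an assumed value on the order of neuron firing times, chosen only for illustration:

```python
# Back-of-the-envelope arithmetic for the "100-step program constraint".
# The step duration is an assumption; the other figures are from the text
# (Sejnowski, 1986).

recognition_ms = 500   # total time to recognize an object in the visual field
to_cortex_ms = 50      # photoreceptors -> cortex (25-50 ms, high end)
motor_ms = 200         # motor system produces a response (200-300 ms, low end)

processing_ms = recognition_ms - to_cortex_ms - motor_ms  # time left: ~250 ms
step_ms = 2.5          # assumed duration of one serial neural "step"

print(int(processing_ms / step_ms))  # about 100 serial steps, at most
```

With neuron firing times on the scale of milliseconds, any budget in this neighborhood allows only on the order of a hundred sequential steps, which is the constraint's force.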
2.1. Neurons: The DPUs (Distributed Processing Units) of the Brain
Although there are many different kinds of neurons, they can be grouped into two general categories:
1. “Interneuron” cells. These are for local processing and “have all of their inputs and outputs limited to a very small region (usually a few hundred microns)” (Stevens, 1985, p. 287).
2. Output cells. These carry signals across large regions of the brain, to muscles and from sensory organs back to the brain.
Each neuron has a large (about 3 microns) central body called the soma. This is where the metabolism and genetic information, necessary for the cell’s survival, reside. Lipids and proteins are synthesized by the soma and these are used in the maintenance of the synapses. Attached to the soma are the dendrites. It is the job of the dendrites to receive signals from other neurons across the synapse.
The dendrites have a range of different shapes, and their branches can extend over considerable distances. These shapes have a dramatic effect on the behavior of the neuron, which partially accounts for the many different “personalities” among neurons.
While the dendrite can be viewed as a passive collector of inputs, it is the axon that is the “final output channel of the cell” (Stevens, 1985, p. 289). When the “resting potential” in the soma goes beyond a certain threshold, a spike, an “action potential,” is sent along the axon to the synapse. Thus, an axon can be described as a “nonlinear threshold device” (Stevens, 1985, p. 289).
A synapse can have an excitatory or inhibitory effect on the cell to which it is connected. When a signal arrives, neurotransmitters are released into the synaptic gap; depending on the transmitter and the receiving cell’s receptors, the effect is either to excite or to inhibit that cell.
Neurons may have both digital and analog processes that interact in different ways depending on the type of neuron: “Output cells have graded analog inputs with digital, all-or-none final outputs; interneurons have graded analog inputs but also have graded analog output” (Stevens, 1985, p. 289).
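The description of the axon as a “nonlinear threshold device” corresponds to the classic McCulloch-Pitts abstraction. The following sketch is illustrative only (the weights and threshold values are invented): graded analog inputs are summed, and the output is digital, all-or-none:

```python
# A minimal sketch of a neuron as a "nonlinear threshold device".
# McCulloch-Pitts style; weights and threshold are made-up numbers.

def neuron(inputs, weights, threshold):
    """Sum graded analog inputs; emit a digital, all-or-none output.
    Positive weights model excitatory synapses, negative ones inhibitory."""
    potential = sum(x * w for x, w in zip(inputs, weights))
    return 1 if potential > threshold else 0

# Two excitatory inputs plus one inhibitory input.
print(neuron([1.0, 1.0, 1.0], [0.6, 0.6, -0.5], 0.5))  # 1: 0.7 exceeds 0.5
print(neuron([1.0, 0.0, 1.0], [0.6, 0.6, -0.5], 0.5))  # 0: 0.1 is below 0.5
```

The nonlinearity is the whole point: the cell does not pass its summed potential along, it either fires or it doesn’t.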
3. Computer Models of Mind and Brain
“For a few behaviors there is general agreement for all species about the hippocampal EEG correlates” (O’Keefe & Nadel, 1978, p. 163). “Conditioning and discrimination experiments can be used to generate a series of behaviors with which the hippocampal EEG can be correlated” (O’Keefe & Nadel, 1978, p. 186). Interesting, but what do we know about EEG correlates with “higher” brain functions?
3.1. The Mind as Electromagnetic Wave
Recently, using EEGs, a few brain scientists have been trying to correlate wave patterns with words. Experiments were conducted in which subjects listened to identical-sounding words used as both nouns and verbs, e.g., “(The) rose” and “(He) rows.”
“When the EEG responses were averaged over many subjects, the brain-wave pattern produced by the noun was consistently different from that produced by the verb” (Lerner, 1984, p. 36). For nouns: “the potential fields peaked positively near the front of the brain and then negatively near the back. But when words were interpreted as verbs, the field pattern was reversed” (Lerner, 1984, p. 37). Even more interesting was an experiment where words were grouped into six categories by connotation e.g., “good” words like beauty and “bad” words like crime. “Each word-type gave rise to a distinctive EEG pattern that could be recognized by a computer at a rate far better than chance” (Lerner, 1984, p. 36).
According to one hypothesis: “Just as a sound wave at a given frequency causes a tuning fork that is tuned to the same frequency to vibrate, the electromagnetic wave pattern associated with a familiar sensory stimulus causes millions of brain cells to generate a similar pattern that has somehow been stored in the chemical structure of these neurons. In the brain, resonance is nonlinear: Activation of a stored pattern occurs not only when the incoming wave pattern is identical but even when it is similar” (Lerner, 1984, p. 37). As more and better stimulus/response correlations are found our knowledge of the brain will certainly become much more predictive – a must for any enterprise wishing to be considered a science.
3.2. Self-organizing Systems
One of the greatest benefits of neurally inspired models is their ability to learn by adjusting their patterns of connectivity. The learning is a consequence of the nature of the model and does not require that explicit rules or heuristics be given to it a priori. An example of this is the “Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position” (Fukushima, 1980, p. 193).
The structure of the system has been inspired by some properties of the visual nervous system in vertebrates. “During the process of self-organization, the network is repeatedly presented with a set of stimulus patterns to the input layer, but it does not receive any other information about the stimulus patterns” (Fukushima, 1980, p. 197). The network “acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without being affected by their position nor by small distortion of their shapes” (Fukushima, 1980, p. 193). Of course, this kind of thing has been done using traditional symbolic models, but the beauty of the Neocognitron is that programmers don’t have to concoct a bunch of “feature detection” mechanisms. The system’s pattern recognition ability emerges “for free” as a consequence of its organization.
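The general principle of such self-organization can be sketched with a simple competitive-learning loop. This is emphatically not Fukushima’s architecture; the starting weights, learning rate and patterns below are invented (and fixed rather than random, so the run is deterministic). What the sketch shows is the key property claimed above: units specialize on pattern classes without any feature detectors being concocted for them in advance.

```python
# Toy competitive learning: units specialize without built-in feature rules.
# NOT the Neocognitron itself; all numbers here are invented for illustration.

def train(patterns, units, rate=0.3, epochs=20):
    for _ in range(epochs):
        for p in patterns:
            # The unit whose weights best match the input "wins"...
            winner = min(units,
                         key=lambda u: sum((a - b) ** 2 for a, b in zip(u, p)))
            # ...and only the winner's weights move toward the pattern,
            # so each unit comes to respond to one class of inputs.
            for i, x in enumerate(p):
                winner[i] += rate * (x - winner[i])
    return units

# Two crude pattern classes: "left-heavy" and "right-heavy" inputs.
patterns = [[1, 1, 0, 0], [1, 0.9, 0.1, 0], [0, 0, 1, 1], [0, 0.1, 0.9, 1]]
units = train(patterns, units=[[0.9, 0.6, 0.1, 0.2], [0.2, 0.1, 0.6, 0.9]])

# After training, one unit has drifted toward each cluster of patterns.
print(units[0][0] > 0.8 and units[1][3] > 0.8)  # True
```

No rule ever told either unit what “left-heavy” means; the division of labor emerges from the winner-take-all update alone, which is the sense in which recognition comes “for free.”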
3.3. Emergent Behavior
“Often a network’s emergent properties are much more complex than the network components from which they arise. In a good network model, the whole is far greater than the sum of its parts” (Grossberg, 1987, p. 24).
“I feel it is critical to focus on collective phenomena, particularly on the idea that some information or knowledge or ideas can exist at the level of collective activities, while being totally absent at the lowest level. In fact, one can even go so far as to say that no information exists at that lowest level” (Hofstadter, 1985, p.646).
This kind of language pervades the literature on neural modeling and ranges from claims that the whole is “far greater than,” to claims that it is just “different from” the sum of the parts (Rumelhart & McClelland, 1986, p. 128). The idea is somewhat troublesome and leads to suspicions that they’re proposing some thinly veiled bit of magic, mysticism, perpetual motion, etc. Is this a scientifically valid point of view, or is it just passing the buck?
There seems to be some support for the idea of “epiphenomena” in physics: If we put two molecules in a box we’ll get a collision every now and then. We can add 10 more or even 1,000 more and continue to get collisions but the behavior is still “inherited” i.e., it can be explained by noting the properties of individual molecules. When we put a billion billion molecules in the box, we can get sound waves. Thus, we have “emergent” behavior, i.e., it is not explained by looking solely at the properties of individual molecules. The phenomenon has arisen from the collective behavior of the molecules (Allman, 1986, p.24). Similarly, in fluid dynamics, “turbulence is not predicted by the knowledge of the elements of the system; it is inherent in the interactions among these elements” (Rumelhart, Smolensky, McClelland, & Hinton, 1986, p.56). I think, given our current understanding of the workings of the brain, the perspective of mind as a “synergistic” consequence of the brain must be viewed as a “speculative” hypothesis, or a useful metaphor.
3.4. A Proposal for a Hybrid Symbolic/Subsymbolic Model
“The subsymbolic approach [is] dependent upon other research paradigms in the cognitive sciences, and…certainly in the short term, the subsymbolic paradigm cannot replace these other research paradigms” (Smolensky, 1987, p.10).
Smolensky suggests a model wherein conscious rule application would be formalized at the conceptual level while intuition (unconscious rule application) would be formalized at the subconceptual level. He postulates that, for humans, language ability is the key to the ability to perform in a domain where one is lacking in expertise but can function by following rules expressed in, say, English. This “trick” must be exploited by the subsymbolic paradigm if a hybrid system is to be created. Note: This is considered to be a necessary but not a sufficient condition for relating the two levels.
The progression from the conscious to the intuitive through experience would proceed as follows: “The rule interpretation process requires that the retrieved linguistically coded rule be maintained in memory while it is being interpreted. By contrast, after connections have been developed to perform the task directly, there is no correspondingly stable [linguistically coded] pattern formed during the performance of the task. Thus the loss of conscious phenomenology with expertise can be understood naturally” (Smolensky, 1987, p. 17). The problem with implementing such a system is that the connectionist paradigm cannot yet encode linguistic expressions as patterns of activity in a form that would allow language fluency to the degree necessary for this model.
Given our lack of understanding of the brain’s complex processes (to say nothing of its “offspring,” the mind) in relation to the conceptual level of consciousness, an attempt to model this level seems to be ad hoc-ism run amok. For if we accept the idea that the conceptual level of consciousness is the result of sub-symbolic processes (patterns of neurons firing), then any attempt at conceptual modeling involves the merely analogical description of the output of a complex, non-linear, hierarchical and ill-understood system. Thus, the high-level symbolic approach has only a tip-of-the-iceberg focus.
Oh, I know, we didn’t achieve flight by building machines that flapped their wings. But wing-flapping is only one means to an end. By flapping their wings, birds achieve a “resolution of forces.” Once we were able to understand the interactions of thrust, drag, gravity and lift, we jolly well flew (Simons, 1985, p. 4). At this stage, AI is still fixated on wing-flapping models while the underlying principles still elude us. Maybe connectionism will provide the “shot in the arm” that is desperately needed in AI circles.
In light of all this, does it make sense to try to model minds at all? If it’s wrongheaded to try to model concepts, which can, at least partially, be elucidated by the subjects of “introspective” psychological experiments, why tackle brains? Only because of a strong intuition/belief/assumption that, if you can get the form right, the content will follow; the content is there because, and only because, of the form. (Not ruling out the possibility of “non-wing-flapping” forms, however, as long as the essentials are there.)
Given the properties of neurons, it would seem that they would be very interesting subjects to model some kind of computational device after. Until recently, this kind of model has been generally viewed as too complex, ill-defined or just plain irrelevant to the task of explaining cognitive phenomena. There are elements of both reductionist and holistic paradigms that inspire the neural modelers. Holism: Mind is an emergent, epiphenomenon that arises from the brain’s non-linear complexity caused by the massively parallel, collective behavior of networks of simple neurons. Reductionism: Mind is a function of brain. The human mind will be explained when the brain is understood and to the extent that it is understood. Understanding will be achieved by discovering more about the properties of neurons and their interconnections. “The gain in brain is mainly in the stain.”
It must be clear by now that I think connectionism is the way to go if we’re striving for “replication” of mental processes. If all we need is “simulation” or “imitation,” then I think progress along symbolic lines will be just fine. With the renewed interest in neural nets, reminiscent of the hype of expert systems that began a couple of years ago, we should know relatively soon whether neural modeling will open any doors to the mind. Until then, I believe the question of the ultimate validity of connectionism is still open.
5. Highlights from the History of the Brain
c. 40,000 B.C. The human brain evolves to its present form.
c. 430 B.C. Hippocrates, the patron of physicians, calls the brain the organ of thought.
c. 335 B.C. Aristotle, watching headless chickens running around, decides the heart is
the seat of consciousness.
1664 Thomas Willis, an English physician, wrote the first accurate treatise on the brain
and showed that nerves emanated from that organ.
1860 Pierre Paul Broca unveils the speech center before the Paris Anthropological
Society.
1874 German neurologist Carl Wernicke identifies an area specialized for speech
comprehension in the left hemisphere.
1901 Santiago Ramon y Cajal notices that neurons are separated by tiny gaps, or
synapses.
1906 Sir Charles Sherrington describes how reflexes are “wired” in the brain.
1921 Otto Loewi identifies acetylcholine, the first known neurotransmitter.
1929 Hans Berger records brain waves from a person’s scalp.
1940s Some of Wilder Penfield’s patients have interesting “flashbacks” during brain
surgery.
1949 Donald O. Hebb describes the “neural net.”
1950 Lashley gives up on the engram, concludes memories are not localized.
1952 Robert Heath implants deep brain electrodes in a human being.
Alan Hodgkin and Andrew Huxley describe how neurons fire.
James Olds and Peter Milner activate a rat’s “pleasure center.”
1957 Vernon Mountcastle shows that neurons are arranged in columns.
1961 The first “split brain” operation is performed by Roger Sperry and Joseph Bogen.
1963 Jose Delgado becomes the first neuro-physiologist/matador, stopping an electrode
equipped bull dead in his tracks via radio remote control.
1973 First PET scan shows the metabolic activity inside a dog’s brain.
The opiate receptor is discovered by Candice Pert and Soloman Snyder.
1982 First human “brain transplant” (actually, a graft of dopamine-rich tissue from the
patient’s adrenal gland) is performed in Stockholm; fails to alleviate the patient’s
Parkinson’s symptoms.
1987 Incisive critique of John Searle’s “Minds, Brains, and Programs” is delivered by
Chris Armstrong (Hooper & Teresi, 1986, pp. xvii-xix).
6. Thoughts to Ponder
• “Minds are what brains do” (Marvin Minsky).
• “We understand human mental processes only slightly better than a fish
understands swimming” (John McCarthy).
• “We have an inherited cultural resistance to treating the conscious mind as a
biological phenomenon like any other” (John Searle).
• “Thoughts are not weightless and ethereal. When you have a thought, brain
activity is actually going on” (John Searle).
• “Until words like ‘concept’ have become terms as scientifically legitimate as,
say, ‘neuron’ or ‘cerebellum,’ we will not have come anywhere close to
understanding the brain” (Douglas Hofstadter).
• “If the brain was so simple [that] we could understand it, we would be so simple
that we couldn’t” (Lyall Watson).
• “Ten billion neurons, ten-to-the-fourteenth different connections…That’s more
than enough to contain a ‘soul’“ (some anonymous neurologist).
Allman, W. F. (1986, May). Mindworks. Science 86, 23-31.
Eccles, J. C. (1977). Understanding the Brain. (2nd ed.). New York:
McGraw-Hill Book Company.
Feldman, J. A. (1985, April). Connections: Massive parallelism in natural and artificial
intelligence. BYTE, 277-284
Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a
mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36, 193-202.
Grossberg, S. (1987). Competitive Learning: From Interactive Activation to Adaptive
Resonance. Cognitive Science, 11, 23-63.
Hofstadter, D.R. (1985). Variations on a theme as the crux of creativity. In
Metamagical Themas: Questing for the essence of mind and pattern (pp. 232-259).
New York: Basic Books.
Hofstadter, D.R. (1985). Waking up from the boolean dream. In Metamagical Themas:
Questing for the essence of mind and pattern (pp. 631-665). New York: Basic Books.
Hooper, J., & Teresi, D. (1986). The 3-Pound Universe. New York: Macmillan.
Kuffler, S., Nicholls, J., & Martin, A. (1984). From Neuron to Brain: A Cellular Approach
to the Function of the Nervous System. (2nd ed.). Sunderland, MA:
Sinauer Associates, Inc.
Lerner, E. J. (1984, August). Why can’t a computer be more like a brain?
High Technology, 34-37.
O’Keefe, J., & Nadel, L. (1978). The Hippocampus as a Cognitive Map.
Oxford: Clarendon Press.
Rumelhart, D.E., & McClelland, J.L. (1986). PDP Models and General Issues in
Cognitive Science. In J.L. McClelland, D.E. Rumelhart (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations (pp. 110-146). Cambridge, MA: MIT Press/Bradford Books.
Rumelhart, D.E., Smolensky, P., McClelland, J.L., & Hinton, G.E. (1986).
Schemata and Sequential Thought Processes in PDP Models. In J.L. McClelland,
D.E. Rumelhart (Eds.), Parallel Distributed Processing: Explorations in the
Microstructure of Cognition. Volume 2: Psychological and Biological Models
(pp 7-57). Cambridge, MA: MIT Press/Bradford Books.
Schank, R., & Riesbeck, C. (1981). Inside Computer Understanding: Five Programs
Plus Miniatures. Hillsdale, N.J.: Lawrence Erlbaum Associates.
Searle, J. R. (1980). Minds, Brains and Programs. The Behavioral and
Brain Sciences, 3, 417-457.
Searle, J. R. (1984). Minds, Brains and Science. Cambridge, MA:
Harvard University Press.
Sejnowski, T.J. (1986). Open Questions About Computation in Cerebral Cortex.
In J.L. McClelland, D.E. Rumelhart (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models (pp.372-389). Cambridge, MA: MIT Press/Bradford Books.
Simons, M. (1985). Model Aircraft Aerodynamics. Sacramento, CA: Argus Books.
Smolensky, P. (1987). On the proper treatment of connectionism.
Draft copy: Soon to be published in Behavioral and Brain Sciences.
Stevens, J. K. (1985, April). Reverse engineering the brain: Chipping away at the brain.
BYTE, 287-299.