Continuity of Consciousness

…all things Transhuman and Singularitarian…

Below is the text of my speech given at the Transhuman Visions Conference in San Francisco, produced by Hank Pellissier, Feb. 1st, 2014.


Thank you to Hank Pellissier for inviting me to speak and for promoting me from bouncer to speaker…true story. 

And also for publishing my first Transhumanist article on Transhumanism and Powerlifting, of all things.

This is my first time speaking at a Transhumanist conference. So this is a historic day for me, and all of you will get to tell your grandchildren that “you were here” in San Francisco, appropriately, when Chris Armstrong “came out” and declared himself to be “openly Transhumanist.”

And now, on to the book, The Transhumanist Wager, by Zoltan Istvan

I’ll be excerpting from a larger article I’m writing, called:

Jethro Knights: DIY Omnipotender. Tale of a Self-Made Superman

When The Transhumanist Wager was published, I lurked for several months…watching reviews…some of which were scathing, some very positive…probably the most balanced was from Giulio Prisco. 

My own review was practically a love letter to the book and to Zoltan. 

I had very positive emotions about it even while recognizing the extreme ideas and actions of the protagonist, Jethro Knights…

Several of Jethro’s attitudes and personality traits reminded me of my own when I was young. 

I had that same youthful zeal and mono-focus on one goal. I didn’t have a quest anywhere near as grand and epic as Jethro’s. Mine was just to be the most amazing and skilled musician possible. 

Like Jethro, I put people in second place, after my sacred mission; told myself that romantic relationships would be unnecessary distractions and should take a backseat to the mission. 

And like Jethro, I didn’t always adhere to such a harsh and Spartan ideal in practice.

As I said in my review, Jethro’s words were far more “scary” than his actions. 

But why such scary words?

One of Carl Sagan’s most often mentioned lines from his TV series, Cosmos, “Extraordinary claims require extraordinary evidence” could be paraphrased by Jethro Knights as:

Extraordinary aims require extraordinary expedience. 

And what is this extraordinary aim?

Oh, nothing more or less than to defeat death!

Quoting Jethro:

“Death must be conquered. From now on, that is my first and foremost aim in life. That is the quintessential first goal of the transhumanist.”

He has chosen a goal that is literally impossible to achieve at the time, but believes it could be achieved, with a strong concerted effort by the best scientists, within a couple of decades. 

In order to increase his long term chances for survival he decides to fashion himself into an omnipotender: “one who contends for omnipotence.”

He needs power to push research forward and to defend against those who would try to interfere with the achievement of much needed scientific breakthroughs and even threaten his life directly. 


Extraordinary aims require extraordinary expedience. 

Any other goal, one that is not a matter of life or death, is something a human being can strive for with all their energy. If they fall short, they will suffer disappointment, but they may have the option to get back on track and try again, depending on the nature of the goal and the amount of life- and health-span they have ahead of them. For nearly all goals, a failure to reach them can be seen as a temporary setback: an opportunity to regroup, begin again, or pick up at the point of failure and attempt to finally accomplish the purpose of the quest.

But what happens when the chosen goal is to avoid death at any cost, when a failure to reach that goal means the end, period? No regrouping for another attempt. No second chances. Failure to achieve immortality, or even a radically extended lifespan, is fatal and final. Game over. For anyone seriously committed to such a goal, there can be no such thing as “moderation”; no resigned acceptance of defeat; no room to allow ANYTHING to take precedence over the accomplishment of this goal. In Jethro’s mind, he is engaged in a classic zero-sum, success-or-failure enterprise. No middle ground is desired, tolerated, or even POSSIBLE. At any given time, you are either dead or alive. Period.

The extreme nature of a goal that is still, at the time that it is committed to, quite literally IMPOSSIBLE, requires a level of commitment and downright zealotry that is far beyond the purview of lesser goals. Prior to the achievement of indefinite lifespans, all possible goals a human being could aspire to exist within the context of a finite and relatively short timeframe within which to accomplish them. A finite lifespan has always been our fundamental physical AND psychological constraint and anyone attempting to do battle with this inviolate limitation could quite reasonably be said to be “out of touch with reality.” 

According to Jethro, Transhumanist morality is “defined and decided by the amount of time we have left to live.”

At this point, we need to pause to understand the most important fact about Jethro Knights: HE IS NOT LIKE US.

For a moment, I’ll presume to speak broadly about Transhumanists. We self-identify as humans. Humans who want to extend their capabilities and transcend their limitations but still undeniably human. 

In Jethro’s case, I propose that he is doing all he can to engage in a kind of psychological self-programming designed to put himself into a mental state wherein he has already transcended his humanity. It amounts to a self-guided reorientation of his entire human psyche toward his idealized vision of a much more powerful, durable, and far superior being relative to the most advanced humans currently alive.

He already views himself as beyond human and explores this new moral landscape stripped of any human biological imperatives and “mammalian niceties.” In his most extreme moments, his values are no longer commensurate with a humanistic, bio-centric sensibility.

He is constantly exploring the limits of how far beyond his human roots he is willing to go in his quest to become an omnipotender. 

Some critics have pointed to Jethro’s most aggressive statements, borderline personality traits and harshest philosophical musings, which comport quite well with the kinds of mental states displayed by people with narcissistic, sociopathic and even psychopathic disorders.

The problem with this simplistic and shortsighted conclusion is that it ignores many examples of Jethro’s traits and actions that serve to disconfirm these kinds of diagnoses. 

The discrepancy between Jethro’s most aggressive statements and his much more humane actions and close emotional bonds with other people — even attaining full-on soulmate status with Zoe Bach — belies any attempts to paint him as a mere psycho and shows his constant struggle between his remaining human “frailties” and his cyborg superman ideal.

Now, I would like to leave you with something very special. It is a deeper look into what is behind this book and what may be coming in the further evolution of Jethro Knights. 

This is the scoop. The exclusive. Turn on your recorders. This is right from the horse’s mouth, Zoltan Istvan, via email:

“I tended to write the story from the perspective of a simple question: How far would one man go to achieve his immortality? 

But I wrote this for you the other day in one of my moments when I tend to jot down my ideas, unpolished as they may be. It’s something I haven’t spoken about much, but the ideas of it are also very important to me:

You must understand that The Transhumanist Wager is a bridge. AI is coming. Merge with the future powers or be destroyed. It is evolution. And a machine’s moral system is like nothing we know. 

Can I kill my wife a thousand times? Can I kill every person on the planet? Do I want to? Am I supportive of humanicide? The obvious answers are: No.

But ask that same question to an AI. And its answer will be a very resounding: Yes.

To survive, we must be stronger than AI.

These are some of the reasons why I wrote The Transhumanist Wager the way I did. 

The Transhumanist Wager is a message from the future. If you don’t lose the weakness of your species, your species will not survive. You must embrace a new you — a fiercer, bolder you. Otherwise you will be no match for your own inventions.”

I’m really looking forward to the continuation of this story. As the very last sentence of The Transhumanist Wager says: 

“This is just the beginning of Jethro Knights.”


My essay was re-published on the Institute for Ethics & Emerging Technologies site!


A paper I wrote for an AI seminar at ASU in 1987 on neural modeling/connectionism/sub-symbolic computation.









Levels of Representation in Mind/Software and Brain/Computer


Chris Armstrong

Professor Nicholas V. Findler 

Minds, Brains and Programs CSC 591, ASU

19 November 1987 (reformatted and edited December 2002)











1. Introduction

Recently, in cognitive science circles, there has been a resurgence of interest in the architecture of the brain as inspiration for computer models of human “intelligent” behavior. Along with this interest, there are assumptions, beliefs and hopes that, with a new level of focus, there will come breakthroughs that will shed light on the mysteries of brain and mind. Have there been successes that could transform beliefs into testable hypotheses? Is mind solely a function of brain? If so, where does brain end and mind begin? What’s wrong with focusing solely on the conceptual level? These are some of the questions addressed, if not answered, in this paper, the catalyst being the article, “Minds, Brains and Programs” by John Searle.


1.1. Searle, Symbols and Syllogisms

We begin with Searle’s Chinese Room thought experiment: A person is locked in a room and given three batches of Chinese characters. The person is a fluent English speaker but understands no Chinese, neither written nor spoken. The subject is also given a large set of rules, written in English, which enable him to produce some Chinese characters in response to the characters given him. The people giving him the characters call the first batch “a script,” the second a “story,” the third “questions,” the rules in English “the program” and his responses “answers to the questions.” These designations are unknown to the subject but this doesn’t affect his ability to produce intelligent responses. In fact, the “answers to the questions are absolutely indistinguishable from those of native Chinese speakers” (Searle, 1980, p. 418). 


Stories in English are also presented and questions asked. For this, the subject is only to use his native language ability, common sense, etc., to answer the questions about the English stories. After a time, he gets so good at obeying “the programs” and the “programmers” get so good at writing them that “the answers to the Chinese questions and English questions are equally good” (Searle, 1980, p. 418). Searle suggests that, in the case of the English stories, there is real understanding going on (both with the subject and the people outside the room) while in the Chinese case the subject is simply following formal rules in order to manipulate meaningless (from the subject’s point of view) symbols. Thus, outside the room is the only place where there is true understanding occurring. 
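The subject's predicament can be sketched as pure shape-matching. Here is a minimal caricature in Python; the token names echo Searle's own "squiggles" and "squoggles," and the rule table is entirely invented for illustration:

```python
# A caricature of the man in the room: rules map input shapes to output
# shapes, and lookup proceeds on form alone.  The tokens are deliberately
# opaque -- nothing in this program "knows" what any symbol means.
RULEBOOK = {
    ("squiggle", "squoggle"): "splotch",
    ("squoggle", "splotch"): "squiggle",
}

def chinese_room(symbols, rulebook):
    """Produce the response the rule book dictates for this input shape."""
    return rulebook.get(tuple(symbols), "blotch")  # default response shape

answer = chinese_room(["squiggle", "squoggle"], RULEBOOK)
# answer == "splotch"; neither the function nor its operator need know
# what (if anything) these shapes denote.
```

The whole system maps inputs to outputs correctly, yet there is no step at which meaning enters, which is exactly Searle's point about formal symbol manipulation.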

Searle draws an analogy between the person manipulating the Chinese symbols and programs created by artificial intelligence (AI) researchers and concludes: “…in the literal sense, the programmed computer understands what [a] car and [an] adding machine understands, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.” And what’s more, “[t]he example shows that there could be two ‘systems,’ both of which pass the Turing test, but only one of which understands” (Searle, 1980, p. 419). 

1.2. The Turing Test

The Turing test is a thought experiment created by British mathematician, cryptologist and AI pioneer, Alan Turing. In Turing’s scenario, a person sits at a computer terminal and communicates with two other participants in the test exclusively through typed responses using the terminal. One of the other two participants is a human and the other is a computer program designed to possess some commonsense, real-world “knowledge” and to understand and produce English language conversation via text. The question is: Can the first person tell which of the other two is human and which is the program? If the program can produce responses that are sufficiently human-like that it can pass for a human, the program is said to have “passed” the Turing test.
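The structure of the protocol can be sketched in a few lines of Python. The responders and the judge below are invented stand-ins; the point is only the shape of the test, not a real implementation:

```python
import random

def imitation_game(judge, respondent_a, respondent_b, questions):
    """Run one round of the test: the judge sees only typed replies and
    must guess which respondent is the program.  Returns 'a' or 'b'."""
    transcript = [(q, respondent_a(q), respondent_b(q)) for q in questions]
    return judge(transcript)

# Invented stand-ins: a "human" and a "program" that happen to answer alike.
human = lambda q: "Hard to say; it depends on the story."
program = lambda q: "Hard to say; it depends on the story."

def judge(transcript):
    # If no reply distinguishes the two, the judge can only guess at
    # chance -- which is exactly the condition for the program to "pass."
    if any(reply_a != reply_b for _, reply_a, reply_b in transcript):
        return "a"
    return random.choice(["a", "b"])

guess = imitation_game(judge, human, program,
                       ["Did the man eat the hamburger?"])
```

Note that the test inspects nothing but the transcript; that is the "naked behaviorism" discussed in the critique below.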


1.3. SAM & PAM: A sham?

Searle posits two flavors of AI, namely: STRONG & weak. According to weak AI, computers and their programs are tools to aid cognitive science researchers in testing hypotheses about mind/brain. They make no claims that these programs think or understand. STRONG AI, on the other hand, claims, “the appropriately programmed computer really is a mind…and can be literally said to understand and have other cognitive states” (Searle, 1980, p. 417). Now let’s look at a specific example of two AI programs and some of the claims of their creators.

SAM & PAM are story “understanding” programs that were written at Yale University, under the direction of Roger Schank. These programs display their ability to comprehend simple stories by first reading them and then answering questions that sometimes require the use of logical inference and other human thought processes.

weak claims: 

• “…cognitive simulation” 

• “…they are all rather artificial” 

• “PAM…really doesn’t know what a story is about”

STRONG claims:

• “SAM has achieved a reasonable depth of comprehension”

• “…capable of making inferences”

• “…a reasonable level of understanding”

• “…fills in causal relations”

• “…PAM’s understanding of…motivation”

• “…PAM’s knowledge of goal relationships”

• “…PAM has made a prediction”

The primary “knowledge” structure is the Script, which consists of:

• Stereotypical or default sequences of events

• Exceptions and possible error situations 

• Both procedural and declarative descriptions (Schank & Riesbeck, 1981).
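A toy version of such a Script can be sketched in Python. The restaurant domain and all of its slots are my invented illustration, far cruder than SAM and PAM's actual structures:

```python
# A toy Schank-style "script": a stereotyped event sequence with default
# slots, used to fill in what a story leaves implicit.  (Illustrative
# only -- the real structures also carried exceptions and error cases.)
restaurant_script = {
    "roles": ["customer", "waiter", "cook"],
    "props": ["menu", "food", "check"],
    "events": ["enter", "sit", "order", "eat", "pay", "leave"],
}

def infer_gaps(script, mentioned):
    """Infer the default events a story never states explicitly."""
    return [e for e in script["events"] if e not in mentioned]

# A story like "John went to a restaurant and ordered a steak" mentions
# only two scripted events; the rest are inferred by default.
story_events = ["enter", "order"]
inferred = infer_gaps(restaurant_script, story_events)
# inferred == ["sit", "eat", "pay", "leave"]
```

This is the sense in which such a program "fills in causal relations": the inferences come from the stereotyped sequence, not from any understanding of restaurants.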

According to Searle, the Chinese Room thought experiment shows that the claims of STRONG AI are invalid because “whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything. No reason whatever has been offered to suppose that such principles are necessary or even contributory, since no reason has been given to suppose that when I understand English I am operating with any formal program at all” (Searle, 1980, p. 418). The Chinese room, according to Searle, lacks an essential quality of human consciousness, namely, intentionality, which consists of “all of those mental states (whether conscious or unconscious) that refer to, or are about, the world apart from the mind.” 








1.4. Searle’s Reasoning Process in Syllogistic Form

Premise 1: Brains cause minds.

Premise 2: Syntax is not sufficient for semantics.

Premise 3: Computer programs are entirely defined by their formal, or syntactical, structure.

Premise 4: Minds have mental contents; specifically, they have semantic contents.

Conclusion 1: No computer program by itself is sufficient to give a system a mind. (This follows from premises 2, 3 and 4)

Conclusion 2: The way that brain functions cause minds cannot be solely in virtue of running a computer program. (Follows from conjoining the first premise with our first conclusion)

Conclusion 3: Anything else that caused minds would have to have causal powers at least equivalent to those of the brain. (A trivial consequence of premise 1)

Conclusion 4: For any artifact that we might build which had mental states equivalent to human mental states, the implementation of a computer program would not by itself be sufficient. Rather the artifact would have to have powers equivalent to the powers of the human brain. (Follows from conclusions 1 and 3)

(Searle, 1984, pp. 39-41).

  So we can see that Searle does believe that a manmade artifact could, in principle, be built that could justifiably be said to think, understand, have intentions, etc. “[A]ssuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours…if you can exactly duplicate the causes, you could duplicate the effects” (Searle, 1980, p. 422).


1.5. Critique 

Basically, Searle’s argument boils down to this: “[Y]ou can’t understand Chinese if you don’t know the meanings of any of the words in Chinese” (Searle, 1980, p. 454). Sounds true enough. You won’t catch me arguing with that. I don’t think the guy in the room understands Chinese. He doesn’t even have to know what Chinese is as long as he can follow the rules.

Let’s think about Searle’s rules:

1. Searle has basically conceded the central claim of the Newell & Simon classical AI paradigm: that intelligent behavior, in this case in the form of perfect communication skills in a human language, can be produced by a formal system using rules to manipulate symbols. No actual AI program has ever achieved this level of ability. So, by posing his thought experiment in Newell & Simon terms, Searle is accepting, in principle, the validity of their model. 

Any researcher working on computational linguistics would gladly sell his soul to Noam Chomsky for a set of rules that could pass the Turing test. Indeed, the reason Searle finds the AI researchers totally unfazed by his argument is that the scenario he has described would be the fulfillment of many of their aspirations. It would certainly cause the grant money to flow. Let’s face it, a lot of researchers just need results, and to that end, an engineering approach rather than a psychologically plausible one would be just fine. It is only because Searle has endowed the Chinese Room with such great ability, i.e., passing the Turing test by performing as well as human speakers of Chinese, that there is any philosophical dilemma. Thus, his argument degenerates to that of some compulsive nit-picker who complains that, although airplanes are useful inventions, they can’t really be said to fly because they don’t flap their wings. 

Yes, I realize that the Turing test uses nothing other than “naked behaviorism” as the sole criterion for determining an artifact’s intelligence but, really, what else can we use for such an evaluation? 

2. How does Searle determine that a person has intentionality and understanding but that a computer program doesn’t? Here’s where he’s cheating. He uses two different methods. For people, he observes their behavior, and through projection, he ascribes intentionality to them the same way McCarthy ascribes it to thermostats (Searle, 1980, p. 420). Of course, they will only be granted this status if their responses fall within a certain range of expectations. All Searle needs to know about software is that it is software: “My objection would hold against any program at all, qua formal computer program” (Searle, 1980, p. 452).

It is only because Searle knows that the man in the room understands no Chinese and is merely following formal rules that any philosophical dilemma arises. Without this a priori knowledge, he would have used the same behavioral criteria that he uses to determine whether a human exhibits intentionality and understanding, and concluded that there must be a fluent Chinese speaker in the room producing the responses.

I call this the “Argument from Detailed Knowledge.” It goes like this: Once I understand the inner workings of something, it loses its charm, mystery and humanity. It becomes just another mechanistic artifact. If science describes the functioning of the human body in purely physical terms, as though we’re just soulless automatons, something is missing. Something has been lost. Surely we are more than some Newtonian “meat machine” programmed by Darwinian selection. Aren’t we? There’s got to be some non-material essence to validate my ennobled self-image. Doesn’t there?

3. Searle has characterized the man in the room as a mere processor of symbols, blindly following rules with no understanding going on at his level, yet giving the appearance of intelligence on another level. While I do agree with Searle that this is a fair description of the current state of AI programs, I also can’t help noticing that he might just as well have been describing a brain. Aren’t neurons, like the man in the room, mere processors of signals, blindly following their own biologically imposed rules with no understanding going on at that level, yet producing intelligence on another level? 

It boils down to this: Brains don’t understand but minds do. Similarly, the man in the Chinese room doesn’t understand the symbols he is pushing around, but the room, (analogous to the brain) exhibits understanding (a mind) through its flawless Chinese communication skills. 

Compare these two quotes:

1. “Because the formal symbol manipulations by themselves don’t have any intentionality; they are quite meaningless; they aren’t even symbol manipulations, since the symbols don’t symbolize anything” (Searle, 1980, p. 422). 

2. “The brain uses stereotyped electrical signals to process all the information it receives and analyzes. The signals are virtually identical in all nerve cells. They are symbols that do not resemble in any way the external world they represent…any more than the letters d o g, taken together resemble a spotted Dalmatian” (Kuffler, Nicholls, & Martin, 1984).

The prosecution rests. 


1.6. Discussion 

I agree that no AI program “understands” the symbols it pushes around. No program is self-conscious and none can even pass the Turing test. But, no current AI program has the ability that Searle has given to his mythical Chinese Room, either. Using the same criteria we use to ascribe cognitive states to people, the Chinese room would be considered intelligent by 99,999 out of 100,000 people. (The one dissenter being, statistically, a philosopher.) Whenever you are dealing with a hierarchical system wherein there is a level that can be described “semantically” you can always look below that level and find a “syntactic” level. In this case, the man in the room represents the syntactic level and to complain that he has no understanding of what he is doing is analogous to Searle lamenting the fact that minds have intentionality but neurons are just dumb processors of signals. 

Searle has suggested that more progress could be made in cognitive science if its models were more biologically realistic. This is a very intuitively appealing idea: After all, if you simulate the brain at a sufficient level of detail, the mind is sure to follow. In the next section, I will discuss the brain with a focus on the function of neurons and their value as a computational model.


2. Brain: The Machinery of the Mind

“[The brain] is an evolutionary development over hundreds of millions of years…this human brain that does not look too distinguished on the outside, weighing about 1.5 kilograms, is without any qualification the most highly organized and the most complexly organized matter in the universe” (Eccles 1977).

Given this proclamation, let’s look at a few statistics about the brain. It is estimated that the number of neurons in the brain is on the order of 10¹⁰ to 10¹¹. Each neuron receives input from 1,000 to 100,000 neurons and sends signals to 1,000 to 100,000 other neurons. “…[T]here are perhaps hundreds of kinds of neurons, …many different kinds of neurotransmitters and…there are different systems in which different of these neurotransmitters are dominant” (Rumelhart & McClelland, 1986, p. 137). 

The firing rates of neurons are measured on the order of milliseconds – “perhaps 10s of milliseconds. Thus, the basic hardware of the brain is some 10⁶ times slower than that of serial computers” (Rumelhart & McClelland, 1986, p. 130).

For example, when an object enters our visual field we can recognize it in approximately half a second. It takes about 25-50 milliseconds for information to be sent from the photoreceptors to the cortex “and several hundred milliseconds are required for the motor system to produce a response, which leaves about 200-300 milliseconds for visual processing” (Sejnowski, 1986, p.378).

At these speeds, about 100 serial time steps are all there is time for. This is what is known as the 100-step program constraint, which is a bottleneck for computer simulation “because the best AI programs are not nearly as general and require millions of computational time steps” (Feldman, 1986, p. 277). This constraint also limits the amount of information passed between processing elements to a few bits rather than the large symbolic structures that are ubiquitous in traditional AI programs. It is the massive parallelism of neural processing that reconciles the apparent paradox of the slowness of neuron firing rates and the speeds at which things get done.
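The arithmetic behind the 100-step constraint fits in a few lines. The numbers below are the rough figures quoted above, taken as order-of-magnitude assumptions rather than exact measurements:

```python
# Rough figures from the text (order-of-magnitude assumptions, not exact):
neural_step_ms = 5.0    # effective cycle time of a neuron, a few ms
task_budget_ms = 500.0  # a fast perceptual task: about half a second

serial_steps = task_budget_ms / neural_step_ms
# With ~500 ms available at ~5 ms per step, only on the order of 100
# sequential neural "instructions" fit.  Whatever the brain computes,
# it cannot be a long serial program; the work must be spread across
# billions of units operating in parallel.
```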


2.1. Neurons: The DPUs (Distributed Processing Units) of the Brain

  Although there are many different kinds of neurons, they can be grouped into two general categories: 

1. “Interneuron” cells. These are for local processing and “have all of their inputs and outputs limited to a very small region (usually a few hundred microns)” (Stevens, 1985, p. 287). 

2. Output cells. These carry signals across large regions of the brain, to muscles and from sensory organs back to the brain. 

Each neuron has a large (about 3 microns) central body called the soma. This is where the metabolism and genetic information, necessary for the cell’s survival, reside. Lipids and proteins are synthesized by the soma and these are used in the maintenance of the synapses. Attached to the soma are the dendrites. It is the job of the dendrites to receive signals from other neurons across the synapse. 

 The dendrites can be several feet in length and have a range of different shapes. These shapes have a dramatic effect on the behavior of the neuron, which partially accounts for the many different “personalities” among neurons. 

While the dendrite can be viewed as a passive collector of inputs, it is the axon that is the “final output channel of the cell” (Stevens, 1985, p. 289). When the “resting potential” in the soma goes beyond a certain threshold, a spike, an “action potential,” is sent along the axon to the synapse. Thus, an axon can be described as a “nonlinear threshold device” (Stevens, 1985, p. 289).

A synapse can have an excitatory or inhibitory effect on the cell to which it is connected. If the effect is excitatory, neurotransmitters are released thus connecting the cells and allowing the signal to be transmitted. 

  Neurons may have both digital and analog processes that interact in different ways depending on the type of neuron: “Output cells have graded analog inputs with digital, all-or-none final outputs; interneurons have graded analog inputs but also have graded analog output” (Stevens, 1985, p. 289). 
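Read computationally, the description above is essentially the classic threshold unit. A minimal sketch, with all weights and numbers invented purely for illustration:

```python
def output_cell(inputs, weights, threshold):
    """A cartoon 'output cell': graded analog inputs are summed at the
    soma, and the axon gives an all-or-none (digital) spike once the
    summed potential crosses threshold.  Positive weights stand in for
    excitatory synapses, negative weights for inhibitory ones."""
    potential = sum(x * w for x, w in zip(inputs, weights))
    return 1 if potential > threshold else 0

# Two excitatory synapses and one inhibitory synapse:
spike = output_cell([1.0, 0.8, 1.0], [0.6, 0.5, -0.4], threshold=0.5)
# 0.6 + 0.4 - 0.4 = 0.6 > 0.5, so the cell fires: spike == 1
```

Real neurons are far richer (graded outputs in interneurons, dendritic geometry, many transmitter systems), but this caricature is the computational abstraction the neural-modeling literature builds on.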


3. Computer Models of Mind and Brain

“For a few behaviors there is general agreement for all species about the hippocampal EEG correlates” (O’Keefe & Nadel, 1978, p. 163). “Conditioning and discrimination experiments can be used to generate a series of behaviors with which the hippocampal EEG can be correlated” (O’Keefe & Nadel, 1978, p. 186). Interesting, but what do we know about EEG correlates with “higher” brain functions?


3.1. The Mind as Electromagnetic Wave

Recently, using EEGs, a few brain scientists have been trying to correlate wave patterns with words. Experiments were conducted in which subjects listened to identical-sounding words used as both nouns and verbs, e.g., “(The) rose” and “(He) rows.” 

“When the EEG responses were averaged over many subjects, the brain-wave pattern produced by the noun was consistently different from that produced by the verb” (Lerner, 1984, p. 36). For nouns: “the potential fields peaked positively near the front of the brain and then negatively near the back. But when words were interpreted as verbs, the field pattern was reversed” (Lerner, 1984, p. 37). Even more interesting was an experiment where words were grouped into six categories by connotation, e.g., “good” words like beauty and “bad” words like crime. “Each word-type gave rise to a distinctive EEG pattern that could be recognized by a computer at a rate far better than chance” (Lerner, 1984, p. 36). 

According to one hypothesis: “Just as a sound wave at a given frequency causes a tuning fork that is tuned to the same frequency to vibrate, the electromagnetic wave pattern associated with a familiar sensory stimulus causes millions of brain cells to generate a similar pattern that has somehow been stored in the chemical structure of these neurons. In the brain, resonance is nonlinear: Activation of a stored pattern occurs not only when the incoming wave pattern is identical but even when it is similar” (Lerner, 1984, p. 37). As more and better stimulus/response correlations are found, our knowledge of the brain will certainly become much more predictive – a must for any enterprise wishing to be considered a science. 
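The "tuning fork" idea, activation of a stored pattern by a merely similar input, is the signature of content-addressable memory. It can be sketched with a one-pattern Hopfield-style network; this is my illustration of the general idea, not of the EEG hypothesis itself:

```python
def recall(stored, probe):
    """One synchronous update of a Hopfield-style net that has stored a
    single +1/-1 pattern with Hebbian weights w_ij = s_i * s_j (i != j).
    A probe merely similar to the stored pattern settles onto it."""
    n = len(stored)
    out = []
    for i in range(n):
        field = sum(stored[i] * stored[j] * probe[j]
                    for j in range(n) if j != i)
        out.append(1 if field > 0 else -1)
    return out

stored = [1, -1, 1, 1, -1, -1, 1, -1]
probe = list(stored)
probe[0], probe[3] = -probe[0], -probe[3]  # similar, but not identical
recalled = recall(stored, probe)           # recovers `stored` exactly
```

The nonlinearity (the sign threshold) is what lets a degraded or merely similar stimulus "resonate" with, and re-activate, the full stored pattern.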


3.2. Self-organizing Systems

  One of the greatest benefits of neurally inspired models is their ability to learn by adjusting their patterns of connectivity. The learning is a consequence of the nature of the model and does not require that explicit rules or heuristics be given to it a priori. An example of this is the “Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position” (Fukushima, 1980, p. 193).

The structure of the system has been inspired by some properties of the visual nervous system in vertebrates. “During the process of self-organization, the network is repeatedly presented with a set of stimulus patterns to the input layer, but it does not receive any other information about the stimulus patterns” (Fukushima, 1980, p. 197). The network “acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without being affected by their position nor by small distortion of their shapes” (Fukushima, 1980, p. 193). Of course, this kind of thing has been done using traditional symbolic models, but the beauty of the Neocognitron is that programmers don’t have to concoct a bunch of “feature detection” mechanisms. The system’s pattern recognition ability emerges “for free” as a consequence of its organization.
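To make the "learning without explicit rules" point concrete, here is a toy self-organizing sketch: a far simpler relative of the Neocognitron, with every detail invented for illustration. Two units compete for each input, and their "receptive fields" emerge from exposure alone:

```python
def squared_distance(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def competitive_learn(patterns, rate=0.3, epochs=20):
    """Unsupervised competitive learning with two units.  Each input is
    claimed by the nearest unit, which then moves toward it; no labels
    and no hand-written feature detectors are involved."""
    # Seed each unit on a sample input (a common trick to avoid "dead" units).
    weights = [list(patterns[0]), list(patterns[-1])]
    for _ in range(epochs):
        for x in patterns:
            winner = min(range(len(weights)),
                         key=lambda u: squared_distance(weights[u], x))
            weights[winner] = [w + rate * (xi - w)
                               for w, xi in zip(weights[winner], x)]
    return weights

# Two clusters of tiny "images"; each unit settles on one cluster's prototype.
data = [[1, 1, 0, 0], [1, 0.9, 0, 0.1], [0, 0, 1, 1], [0.1, 0, 0.9, 1]]
prototypes = competitive_learn(data)
```

Nothing told the network what the two categories were; the prototypes emerged "for free" from repeated presentation, which is the property claimed for the Neocognitron above.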


3.3. Emergent Behavior

  “Often a network’s emergent properties are much more complex than the network components from which they arise. In a good network model, the whole is far greater than the sum of its parts” (Grossberg, 1987, p. 24).

  “I feel it is critical to focus on collective phenomena, particularly on the idea that some information or knowledge or ideas can exist at the level of collective activities, while being totally absent at the lowest level. In fact, one can even go so far as to say that no information exists at that lowest level” (Hofstadter, 1985, p.646).

  This kind of language pervades the literature on neural modeling and ranges from claims that the whole is “far greater than,” to claims that it is just “different from,” the sum of the parts (Rumelhart & McClelland, 1986, p. 128). The idea is somewhat troublesome and leads to suspicions that its proponents are offering some thinly veiled bit of magic, mysticism, perpetual motion, etc. Is this a scientifically valid point of view, or is it just passing the buck?

 There seems to be some support for the idea of “epiphenomena” in physics: If we put two molecules in a box we’ll get a collision every now and then. We can add 10 more or even 1,000 more and continue to get collisions, but the behavior is still “inherited,” i.e., it can be explained by noting the properties of individual molecules. When we put a billion billion molecules in the box, we can get sound waves. Thus, we have “emergent” behavior, i.e., behavior that is not explained by looking solely at the properties of individual molecules. The phenomenon has arisen from the collective behavior of the molecules (Allman, 1986, p. 24). Similarly, in fluid dynamics, “turbulence is not predicted by the knowledge of the elements of the system; it is inherent in the interactions among these elements” (Rumelhart, Smolensky, McClelland, & Hinton, 1986, p. 56). I think, given our current understanding of the workings of the brain, the perspective of mind as a “synergistic” consequence of the brain must be viewed as a “speculative” hypothesis, or a useful metaphor.
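A computational toy makes the same point. In Conway's Game of Life (my example here, not one drawn from the connectionist literature), every cell obeys one trivial local rule, yet the grid supports coherent moving structures, like the “glider” below, whose existence is stated nowhere in the rule itself.

```python
# Conway's Game of Life: each cell lives or dies purely by counting its
# eight neighbors, yet self-propagating "gliders" emerge at the
# collective level.
from collections import Counter

def step(cells):
    """Advance one generation; `cells` is a set of live (row, col) pairs."""
    # Count, for every location, how many live neighbors it has.
    counts = Counter((r + dr, c + dc)
                     for (r, c) in cells
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# The classic glider: after four generations it reappears intact,
# shifted one cell down and one cell right.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
```

Nothing in `step` mentions motion or gliders; the traveling pattern exists only at the collective level, which is the sense in which the whole exceeds, or at least differs from, the sum of its parts.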


3.4. A Proposal for a Hybrid Symbolic/Subsymbolic Model

  “The subsymbolic approach [is] dependent upon other research paradigms in the cognitive sciences, and…certainly in the short term, the subsymbolic paradigm cannot replace these other research paradigms” (Smolensky, 1987, p. 10).

  Smolensky suggests a model wherein conscious rule application would be formalized at the conceptual level while intuition (unconscious rule application) would be formalized at the subconceptual level. He postulates that, for humans, language is what allows us to perform in a domain where we lack expertise: we can still function by following rules expressed in, say, English. This “trick” must be exploited by the subsymbolic paradigm if a hybrid system is to be created. Note: This is considered a necessary but not a sufficient condition for relating the two levels.

  The progression from the conscious to the intuitive through experience would proceed as follows: “The rule interpretation process requires that the retrieved linguistically coded rule be maintained in memory while it is being interpreted. By contrast, after connections have been developed to perform the task directly, there is no correspondingly stable [linguistically coded] pattern formed during the performance of the task. Thus the loss of conscious phenomenology with expertise can be understood naturally” (Smolensky, 1987, p. 17). The problem with implementing such a system is that the connectionist paradigm cannot yet encode linguistic expressions as patterns of activity in a form that would allow language fluency to the degree necessary for this model.
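This progression can be caricatured in code. The sketch below is my own toy, not Smolensky's formalism; the class name, the rule, and the threshold unit are all inventions for illustration. The system first answers every query by interpreting an explicitly stored rule (the “conscious,” linguistically coded phase) while a perceptron-like unit trains on those rule-generated answers; once the learned connections reproduce the rule, the rule is retired and answers come directly, with no rule-representation maintained during performance.

```python
# Toy hybrid symbolic/subsymbolic system (illustrative only).

class HybridClassifier:
    def __init__(self, lr=0.1):
        self.w, self.b = 0.0, 0.0   # the "connections" (subsymbolic level)
        self.lr = lr
        self.using_rule = True      # conscious, rule-interpreting phase

    def rule(self, x):
        # Explicit, linguistically coded rule: "call x 'big' if x > 5."
        return 1 if x > 5 else 0

    def net(self, x):
        # Direct, "intuitive" answer from the learned connections.
        return 1 if self.w * x + self.b > 0 else 0

    def answer(self, x):
        if self.using_rule:
            y = self.rule(x)              # interpret the stored rule...
            err = y - self.net(x)         # ...and train the connections
            self.w += self.lr * err * x   # with a perceptron update.
            self.b += self.lr * err
            return y
        return self.net(x)                # expertise: no rule consulted

    def check_expertise(self, probes):
        # Retire the rule once the network matches it everywhere tested.
        if all(self.net(x) == self.rule(x) for x in probes):
            self.using_rule = False
```

After enough rule-guided experience, `using_rule` flips off and the system behaves identically while no longer consulting (or even needing) the stored rule, which is the sense in which "the loss of conscious phenomenology with expertise can be understood naturally."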


4. Conclusions

Given our lack of understanding of the brain’s complex processes (to say nothing of its “offspring,” the mind) in relation to the conceptual level of consciousness, an attempt to model this level seems to be ad hoc-ism run amok. If we accept the idea that the conceptual level of consciousness is the result of sub-symbolic processes (patterns of neurons firing), then any attempt at conceptual modeling involves the merely analogical description of the output of a complex, non-linear, hierarchical and ill-understood system. Thus, the high-level symbolic approach has only a tip-of-the-iceberg focus.

Oh, I know, we didn’t achieve flight by building machines that flapped their wings. But wing-flapping is only one means to an end. By flapping their wings, birds achieve a “resolution of forces.” Once we were able to understand the interactions of thrust, drag, gravity and lift, we jolly well flew (Simons, 1985, p. 4). At this stage, AI is still fixated on wing-flapping models while the underlying principles still elude us. Maybe connectionism will provide the “shot in the arm” that is desperately needed in AI circles.

In light of all this, does it make sense to try to model minds at all? If it’s wrongheaded to try to model concepts, which can, at least partially, be elucidated by the subjects of “introspective” psychological experiments, why tackle brains? Only because of a strong intuition/belief/assumption that, if you can get the form right, the content will follow; the content is there because, and only because, of the form. (Not ruling out the possibility of “non-wing-flapping” forms, however, as long as the essentials are there.)

Given the properties of neurons, it would seem that they would be very interesting subjects to model some kind of computational device after. Until recently, this kind of model has been generally viewed as too complex, ill-defined or just plain irrelevant to the task of explaining cognitive phenomena. There are elements of both reductionist and holistic paradigms that inspire the neural modelers. Holism: Mind is an emergent, epiphenomenon that arises from the brain’s non-linear complexity caused by the massively parallel, collective behavior of networks of simple neurons. Reductionism: Mind is a function of brain. The human mind will be explained when the brain is understood and to the extent that it is understood. Understanding will be achieved by discovering more about the properties of neurons and their interconnections. “The gain in brain is mainly in the stain.”

It must be clear by now that I think connectionism is the way to go if we’re striving for “replication” of mental processes. If all we need is “simulation” or “imitation,” then I think progress along symbolic lines will be just fine. With the renewed interest in neural nets, reminiscent of the expert-systems hype that began a couple of years ago, we should know relatively soon whether neural modeling will open any doors to the mind. Until then, I believe the question of the ultimate validity of connectionism is still open.


5. Highlights from the History of the Brain 

c. 40,000 B.C. The human brain evolves to its present form.

c. 430 B.C. Hippocrates, the patron of physicians, calls the brain the organ of thought.

c. 335 B.C. Aristotle, watching headless chickens running around, decides the heart is the seat of consciousness.

1664 Thomas Willis, an English physician, writes the first accurate treatise on the brain and shows that nerves emanate from that organ.

1860 Pierre Paul Broca unveils the speech center before the Paris Anthropological Society.

1874 German neurologist Carl Wernicke identifies an area specialized for speech comprehension in the left hemisphere.

1901 Santiago Ramón y Cajal notices that neurons are separated by tiny gaps, or synaptic clefts.

1906 Sir Charles Sherrington describes how reflexes are “wired” in the brain.

1921 Otto Loewi identifies acetylcholine, the first known neurotransmitter.

1929 Hans Berger records brain waves from a person’s scalp.

1940s Some of Wilder Penfield’s patients have interesting “flashbacks” during brain surgery.

1949 Donald O. Hebb describes the “neural net.”

1950 Lashley gives up on the engram, concludes memories are not localized.

1952 Robert Heath implants deep brain electrodes in a human being. Alan Hodgkin and Andrew Huxley describe how neurons fire.

1954 James Olds and Peter Milner activate a rat’s “pleasure center.”

1957 Vernon Mountcastle shows that neurons are arranged in columns.

1961 The first “split brain” operation is performed by Roger Sperry and Joseph Bogen.

1963 Jose Delgado becomes the first neurophysiologist/matador, stopping an electrode-equipped bull dead in his tracks via radio remote control.

1973 First PET scan shows the metabolic activity inside a dog’s brain. The opiate receptor is discovered by Candace Pert and Solomon Snyder.

1982 First human “brain transplant” (actually, a graft of dopamine-rich tissue from the patient’s adrenal gland) is performed in Stockholm; fails to alleviate the patient’s Parkinson’s disease.

1987 Incisive critique of John Searle’s “Minds, Brains, and Programs” is delivered by Chris Armstrong. ;-) (Hooper & Teresi, 1986, pp. xvii-xix).


6. Thoughts to Ponder

• “Minds are what brains do” (Marvin Minsky).

• “We understand human mental processes only slightly better than a fish understands swimming” (John McCarthy).

• “We have an inherited cultural resistance to treating the conscious mind as a biological phenomenon like any other” (John Searle).

• “Thoughts are not weightless and ethereal. When you have a thought, brain activity is actually going on” (John Searle).

• “Until words like ‘concept’ have become terms as scientifically legitimate as, say, ‘neuron’ or ‘cerebellum,’ we will not have come anywhere close to understanding the brain” (Douglas Hofstadter).

• “If the brain was so simple [that] we could understand it, we would be so simple that we couldn’t” (Lyall Watson).

• “Ten billion neurons, ten-to-the-fourteenth different connections…That’s more than enough to contain a ‘soul’” (some anonymous neurologist).



7. References

Allman, W. F. (1986, May). Mindworks. Science, 23-31.

Eccles, J. C. (1977). Understanding the Brain (2nd ed.). New York: McGraw-Hill Book Company.

Feldman, J. A. (1985, April). Connections: Massive parallelism in natural and artificial intelligence. BYTE, 277-284.

Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36, 193-202.

Grossberg, S. (1987). Competitive learning: From interactive activation to adaptive resonance. Cognitive Science, 11, 23-63.

Hofstadter, D. R. (1985). Variations on a theme as the crux of creativity. In Metamagical Themas: Questing for the Essence of Mind and Pattern (pp. 232-259). New York: Basic Books.

Hofstadter, D. R. (1985). Waking up from the boolean dream. In Metamagical Themas: Questing for the Essence of Mind and Pattern (pp. 631-665). New York: Basic Books.

Hooper, J., & Teresi, D. (1986). The 3-Pound Universe. New York: Macmillan.

Kuffler, S., Nicholls, J., & Martin, A. (1984). From Neuron to Brain: A Cellular Approach to the Function of the Nervous System (2nd ed.). Sunderland, MA: Sinauer Associates, Inc.

Learner, E. J. (1984, August). Why can’t a computer be more like a brain? High Technology, 34-37.

O’Keefe, J., & Nadel, L. (1978). The Hippocampus as a Cognitive Map. Oxford: Clarendon Press.

Rumelhart, D. E., & McClelland, J. L. (1986). PDP models and general issues in cognitive science. In J. L. McClelland & D. E. Rumelhart (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations (pp. 110-146). Cambridge, MA: MIT Press/Bradford Books.

Rumelhart, D. E., Smolensky, P., McClelland, J. L., & Hinton, G. E. (1986). Schemata and sequential thought processes in PDP models. In J. L. McClelland & D. E. Rumelhart (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models (pp. 7-57). Cambridge, MA: MIT Press/Bradford Books.

Schank, R., & Riesbeck, C. (1981). Inside Computer Understanding: Five Programs Plus Miniatures. Hillsdale, NJ: Lawrence Erlbaum Associates.

Searle, J. R. (1980). Minds, brains and programs. The Behavioral and Brain Sciences, 3, 417-457.

Searle, J. R. (1984). Minds, Brains and Science. Cambridge, MA: Harvard University Press.

Sejnowski, T. J. (1986). Open questions about computation in cerebral cortex. In J. L. McClelland & D. E. Rumelhart (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models (pp. 372-389). Cambridge, MA: MIT Press/Bradford Books.

Simons, M. (1985). Model Aircraft Aerodynamics. Sacramento, CA: Argus Books.

Smolensky, P. (1987). On the proper treatment of connectionism. Draft copy: soon to be published in Behavioral and Brain Sciences.

Stevens, J. K. (1985, April). Reverse engineering the brain: Chipping away at the brain. BYTE, 287-299.

Risk Aversion in Transhumanism


A new essay is up on my website for the book I’m writing about Zoltan Istvan’s book, The Transhumanist Wager.

Here’s my latest essay about the novel, The Transhumanist Wager by Zoltan Istvan.

For those who have noted some intersections between some religious/spiritual ideas/ideals and the Transhumanist world-view-thingy, here’s a VERY GROOVY article by the VERY GROOVY Giulio Prisco. Word!


But which one is ME? Letting go of our mono-being identity orientation: Poly-beings, here we come!

 This essay was published on October 14, 2013 at



by Chris T. Armstrong


Audience for this essay: You can save yourself from wasting some of your finite and far too short lifespan by totally ignoring this essay if you fall into any of the following three categories:


1. You don’t accept the possibility, even in principle, of a future wherein it will be possible via a “whole brain emulation” to upload the ENTIRETY of a human being’s consciousness to a non-biological substrate/processing-medium, whether it is because you think that this process will not be able to capture some special x-factor or soul-like essence or for ANY OTHER reason.


2. You totally accept, in principle, that we will one day be able to achieve a “whole brain emulation” and that by uploading that emulation into a non-biological substrate we can achieve a full “continuity of consciousness.” Further, you have no problem accepting that multiple copies of the same consciousness would be fully and completely valid instantiations of the “original” consciousness and not “mere copies” that are somehow less-than the original.


3. You choose to direct your energies toward near-term solutions to ACTUAL problems and regard this kind of conjecture as just so much “chin music” and premature, speculative, philosophical jibber-jabber.


Now, for those of you still with me, here we go… 


Thought Experiment:


You are living in a future time, around 2045, when mind-uploading has just recently become a routine, safe, and successful way to copy one’s consciousness to a “platform” other than the biological one in which it was born. You are accompanying a friend to a mind-uploading center so she can undergo this procedure. This is a very new thing and people are still trying to comprehend all the implications involved. You have some doubts and questions about it yourself and don’t fully understand the whole procedure, but you are there to support your friend in her decision and not to debate the issues.


Your friend goes in and you, along with other friends and family, watch as she lies motionless while her brain is scanned. After this is finished, there is a long period of processing that needs to be done. A MONSTROUS amount of scanned brain data about your friend’s “connectome,” all the levels of neurotransmitter concentrations at each synapse, and all relevant neuronal micro-structures, the combination of which comprises your friend’s unique “consciousness-pattern,” is instantiated in the new non-biological substrate. A technician comes in and tells everyone that they may want to go have lunch and do some shopping or something and come back in the afternoon…


When you come back, the first-phase is completed. Your friend’s consciousness has been uploaded into its new platform and has been connected to visual, auditory and speech interfaces. All of you are there, crowded around a prosthetic head, waiting to greet the newly uploaded consciousness…your friend. The switch is thrown and the head, that will later be put on a “whole body prosthesis,” comes to “life,” looks around, gets a huge smile and says, “It WORKED!” and everyone’s tensions and anxieties are melted away as they all begin laughing, crying, clapping and cheering.


After they finally quiet down, they ask a thousand questions about what it feels like and if it hurt…all the typical stuff…and then finally, your friend’s skeptical uncle moves forward and begins something akin to a deposition or cross-examination: firing fact-oriented questions in rapid succession as a sort of test to see if this disembodied head really seems to be your friend, his niece. After several successful answers to what seem to her to be silly and obvious questions, your friend says, “Come on uncle, it’s ME!”


After a while, a technician comes in and begins talking with your friend about some details of the last thing they talked about just before the brain scanning began. It was a story about a mythical and magical origami crane that bestows eternal life and good luck on all those who bring it into existence via the intricate origami folding ritual. All the friends and family were there when the technician told this story before the brain scan as well. Soon, the technician says, “Yep, that’s about it. See you after the next phase,” and he leaves. “Why does that guy keep talking about some freaky paper crane?” someone asks.


Your friend explains, “That’s kind of a tradition that has evolved, mostly for the benefit of friends and family, so they can see something immediate that reassures them that I’m the same person who he told that story to earlier. They tell each person who will be uploading a short story that was uniquely created just for them so the person would have no way of knowing such a story except through this conversation just before uploading.”


“It’s also just a quick check for the people running the uploading process, because if I didn’t recognize the person who told me the story or couldn’t recall the prior conversation at all, they would know immediately that something went seriously wrong with the process. But, for the most part, they depend on these interactions with friends and family, like we’re having right now, for feedback about the authenticity of the uploaded consciousness, relative to the ‘original’ one.”


“They told me that philosophers are still very busy debating all kinds of arcane issues surrounding identity, dualism, soul, and such, but as far as these technicians and doctors are concerned, if everyone who knows me says I’m exactly the same person as before I uploaded, that’s good enough for them. They don’t worry too much about the philosophers and their ‘busy-work.’ They figure, at least it keeps them off the streets.”


After a few more hours of visiting with your friend, a technician comes in and tells everyone that they have a lot to do to get your friend’s new brain connected up to the “whole body prosthetic” and all of its enhanced sensory apparatus, and that you can come back in about 24 hours to see the finished “product.” As you walk out, you can see through a glass window your friend’s biological body in a medically induced coma, and you get a very odd and jarring feeling of cognitive dissonance since you were, only seconds before, talking with your friend…or were you? This question haunts you through the night and into the next day.


Before you leave the uploading center, you are all asked to meet with several specialists who interview each of you to get your impressions of your interactions with your friend and whether anything seemed amiss or if there was no difference between this uploaded consciousness and your biological friend.


The next afternoon you all reconvene at the uploading center. You are brought into a room and there stands your friend in her new prosthetic body. Everyone rushes over to her and hugs her and the tears and laughs begin to flow again. Everyone, that is, except for that same curmudgeonly uncle who remains standoffish for a while. Eventually, your friend notices his hesitance and yells to him, “Come over here and give your favorite niece a hug, you crusty old Luddite!” Somewhat embarrassed, he shuffles over and hugs this “thing” that he’s now supposed to accept as his niece. She hugs him tightly and whispers something in his ear that makes him laugh and cry at the same time. He’s beginning to see that this “thing” is seeming more and more niece-like and he is starting to feel that he might eventually accept “her” as such.


Your friend begins showing off her new body’s capabilities, a bit carefully at first since she’s still getting used to the different feel of it and how to coordinate it all. She tells you that she can already feel greatly enhanced potential in this body that she hasn’t even fully explored yet. Not to mention, how incredibly sturdy it is and practically impervious to physical destruction. She talks about her plans to eventually try out other body configurations. The recommended first step is to upload into a humanoid biped body, since one’s brain already knows how to operate that kind of configuration. Eventually people learn how to thrive in many kinds of body-styles or may even, at times, choose no body at all, preferring to exist for a time in an idealized virtual world of their own design.


After she wins the obligatory arm-wrestling matches “hands-down” against several large males and even a couple of females, a technician comes in to go over a few final points and mentions that, according to protocol and your friend’s wishes, her bio-body will be kept alive and in a coma for 30 days, after which time, it will be terminated. When you hear this, you are STUNNED. You hadn’t thought about what would happen to her old body. Maybe you assumed that it would be kept alive or maybe even woken up or who knows what you thought, but you blurt out, “What? They’re gonna KILL you? All that will be left is this COPY of you? What will happen to YOU?” The room goes silent. People look a bit shocked. The uncle has a faint look of yeah-what-about-THAT? on his face.


Your friend breaks the silence, “What do you mean what will happen to ME? I’m RIGHT HERE! That’s just my old body in that room over there. I have 30 days to change my mind, but as of right now, I would NEVER choose to go back and ‘live’ in that failing old ‘flesh coffin.’” The technician adds, “She has chosen this option. Some people DO choose to live alongside their bio-bodies, or copy their consciousness-pattern into multiple synth-bodies.”


You interrupt, “But which one would be HER?” “They all would be her/me,” the technician and your friend answer in near-perfect unison. Undeterred, you say, “But, if her body dies, which one is the REAL her and not just a COPY? Where will her true self live?” The uncle chimes in, “What they’re trying to get us to swallow is that they’re ALL her true self. None of them are more “true” than any other.” Your friend interjects, “By George, I think he’s getting it” and gives her dear uncle a quick wink. 


“Ok, you two, I admit that I had some doubts as well, but after I was put into my new synth-body and before you all came back to see the unveiling, I availed myself of an option that all uploaded people have open to them. I asked that my old body be awakened so that we could have a bit of a chat. And let me tell you, it’s a SERIOUSLY surreal experience to have a chat with YOURSELF…talk about finishing each other’s sentences…wow! Anyway, I had decided beforehand that I might want to do this, so it wasn’t a total shock to, well, EITHER of me.


The reason I did this was to get confirmation from the world’s number one expert on me…ME…that my uploaded, synth-self, was truly me in every discernible way. After we both got a bit more used to our initial “identity disorientation,” the bio-me grilled me for a couple of hours about my future plans, my deeply held principles and most private thoughts, fears, and secrets…everything she could think of to determine whether she was me and I was her. We were both a little surprised at how quickly we were able to accept each other as…each other, or us, or WE…I guess we need to expand the range of pronouns available for these kinds of situations. 


So, bottom line: We both agreed that for us, in spite of some remaining instinctual qualms to the contrary, there was no point in the bio-me remaining active in the world, now that a synth-me was here to take “our” life to the next level. We both accepted that there was no important difference between each other and the bio-me accepted that “she” would REALLY be living on as a synth-her/me. And I accepted that all that was “her” was REALLY me as well.” 


You and the uncle are silent for a while and then slowly begin to speak…the discussion goes on, but with a much less contentious tone…minds are blown…lives are changed…




Humans have evolved on this planet as mono-beings: We have only one version of our consciousness, and it is located in our biological body in which “we” were born and live, until that body wears out or is destroyed. That’s how it has always been for all(?) living things on this planet and of course it’s hard for us to imagine it being any other way. But once we can truly have multiple selves, the ONE-TRUE-SELF paradigm becomes inoperative and obsolete, or at least, merely one “historical” paradigm among newer ones.


At present, we have a fundamental difficulty, an aversion really, to accepting the idea that there could be multiple, fully “valid” versions of ourselves existing SIMULTANEOUSLY: poly-beings. I’ve had many discussions with people about this and almost no one can break free from the quite natural and instinctual feeling that, regarding their sense of self, as in the movie Highlander, “there can be only ONE.”


It’s totally understandable. It feels natural to me too, since I have only known mono-being-ness. It feels like something as natural as breathing — an existential axiom — that we have a singular identity. We’ve never had any experience other than that, at least not in our “normal” (“neuro-typical”) existence, apart from the realm of chemically induced psychedelic or otherwise “altered-state” experiences.


We say things like: “I am the REAL me and all uploaded versions are obviously only copies.” Some people hold on to the “only one” intuition so strongly that they assume that all of the copies would be “in sync” somehow and be experiencing exactly the same things in unison…because OBVIOUSLY there can be only one version of their consciousness existing and experiencing the world at one time. 


Some think that the “mere copies” will be somewhat zombie-esque: conscious to some degree but lacking some special human quality that only THEY, the original, possess. And these are not only religious people who reject the idea of mind-uploading outright because they believe it cannot capture the “soul.” Often, these are people who THINK they’ve accepted the idea, in principle, that a brain can be copied, uploaded, and could be a COMPLETE consciousness. 


Even people who are not arguing that the copies will not be accurate enough in a physical/functional way are often unable to think of the copies as TRUE versions of themselves. They still feel that SOMETHING must be missing and/or their original bio-version can be the only real “them”…for some reason. They are still holding on to their inherent mono-being bias when thinking about the idea of multiple-selves.


A Pragmatic Proposal


No, I am not contending that I have discovered some groundbreaking property of consciousness or a new paradigm of identity. I am proposing that we adopt this point of view as a useful tool, until we have reason to believe that the reality is otherwise. 


The proposition is this:


A sufficiently detailed copy of a person’s brain — not a rough approximation, simulation, or software engineer’s conception — is, in EVERY sense, YOU. If you make 1,000 ACCURATE copies, every one of them is REALLY YOU, just as much as the original bio-version of you is you.


They will all feel and believe, just as strongly as the bio-original does, that THEY are the true you. They may even have doubts about the validity of the other 999 copies AND the bio-original. They will ALL have the same ego-centered sense of self as the bio-original has. They will ALL naturally balk at the idea that it’s no big deal to be “killed” because “they” will live on through the copies. They will resist this idea unless they can come to some philosophical acceptance of the validity and PARALLEL SAMENESS of ALL the copies: The same philosophical struggle you may be going through right now.


The uploaded selves are not “clones” or the same as identical twins. Neither of those kinds of entities has an exact copy of someone else’s life experiences, memories, personality, etc. They are DISTINCT individuals. An uploaded consciousness is not “merely a copy” that is somewhat less-than the original; it is an EXACT duplicate of the original individual.


An uploaded consciousness will be EXACTLY as happy as the original was to be alive and to be interacting with his/her loved ones (gender will become an irrelevant/fluid/optional property of an uploaded being) and every bit of your consciousness that you consider to be YOU will be there, planning your future, making your same dumb jokes, suffering the same insecurities and displaying your same strengths. This upload that you may be tempted to think of as a mere clone or copy, but not YOU, will express his/her/its happiness over the success of the upload process and the opportunity it will have to live on, in a non-biological and more durable form.


Embracing the Poly-Being Opportunities


In order to embrace the full range of implications of mind-uploading, we need to abandon our either/or intuition in favor of a both/and view. Asking which one is the REAL you is akin to asking a gay couple: “Which one of you is the man/woman?” You’re trying to shoehorn something completely new into an old model. What is needed is a recognition of the new model and a conceptual adjustment that is necessary in order to integrate this newness into one’s worldview. 


In order to determine if you are taking this idea 100% seriously…TOTALLY buying into it…ask yourself if you would be fine with letting your original bio-body die and believe that you, the REAL and ACTUAL you, in EVERY sense, would live on as the uploaded duplicate. The duplicate would feel that it was you and all the behavioral evidence it exhibits would lead people to accept that it truly IS you. It, YOU, will carry on with your life without missing a BEAT, because there is no BEAT to miss. It IS you. 


A Multiplicity of Possibilities


We can imagine all kinds of sci-fi variations on this theme. We could, perhaps, network all versions together and each could take in all the experiences of the others and choose which memories collected through all the versions’ experiences to integrate into their own version. That is, until they each start becoming different enough that they may want to break away from this network and go off on their own. And if they are not linked, we have to accept the idea that, yes, they BEGAN as many duplicates of one-self, but if they travel off somewhere and have different experiences, learn different things, and make new relationships with other consciousnesses, they will begin to diverge and become more and more different from the original and become truly autonomous variations on the original theme. 


Several consciousnesses may be able to merge into a collective entity for a while and then go back to being individuals again. Or imagine merging with your lover on such a fundamental level. You won’t have to imagine what they’re feeling or guess at their perspective. You will be THEM and they will be YOU, while in that merged-state. 


And, of course, one of the most valuable aspects of mind-uploading will be the ability to back-up your mind-file and keep it in one or more safe and “off site” locations.


These kinds of possibilities are what we MUST consider if we are to fully accept the idea of mind-uploading as conceived of by Transhumanists. 


Hypothesize and Test


Of course, this all rests upon the assumption that the copy is of sufficient resolution/detail to faithfully capture the true functionality of the brain. And that rests further on the reductionist/materialist hypothesis that the totality of our consciousness is an emergent property of the complexity of the brain; that “Minds are what brains do.” — AI pioneer, Marvin Minsky


On the other hand, for dualists, who believe that consciousness exists outside/independently of the physical brain (and to some, beyond the fathomability or scope of science), no amount of cleverly engineered brain-like artifacts will be sufficient to allow consciousness to emerge therefrom or reside therein (unless maybe the consciousness “wants” to reside there for some reason or is PUT there by a god). In this view, as speculative physicist Nassim Haramein puts it, “Looking for consciousness in the brain is like looking inside a radio for the announcer.”


Yes, both sides are functioning from positions of hypothesis rather than solidly demonstrated and completely settled FACT. But the big difference between these two positions is this: at least the Transhumanists are planning to RUN THE EXPERIMENT…to put their hypothesis to the test by working toward mind-uploading to see if we can achieve a “continuity of consciousness” — across bio and non-bio “platforms” — through science, rather than just believing and hoping that our consciousness AUTOMATICALLY lives on without the brain, in the form of an immortal soul, as most religions contend.


To sum up: 


We are currently mono-beings by default; no choice in the matter; dictated by evolution. 


Mind-uploading technologies will enable us to become poly-beings by choice; multiple instantiations of THE SAME consciousness functioning simultaneously, but independently. 


At the time they are created, they are all equivalently YOU, but immediately begin having their own experiences and developing their own new histories unless a choice is made to keep them all IN SYNCH and “the same” by updating all of them (including non-operational backups) periodically with the experiences of only ONE of them.   
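The sync-or-diverge choice described above can be sketched as a toy model. To be clear, this is purely illustrative code of my own invention, not any real uploading technology: several instances start as identical copies of one mind-state, then either accumulate their own experiences and diverge, or are periodically overwritten from one designated primary to stay “the same.” All class and function names here are hypothetical.

```python
# Toy model of "poly-beings": identical copies that diverge through
# independent experience unless re-synced from a single primary.
import copy

class MindInstance:
    def __init__(self, name, memories):
        self.name = name
        self.memories = list(memories)

    def experience(self, event):
        """Each instance accumulates its own new memories."""
        self.memories.append(event)

def spawn_copies(original, n):
    """At creation time, every copy is equivalently 'you': same memories."""
    return [MindInstance(f"{original.name}-copy{i}",
                         copy.deepcopy(original.memories))
            for i in range(1, n + 1)]

def sync_from_primary(primary, others):
    """Keep all instances 'the same' by overwriting them from the primary."""
    for inst in others:
        inst.memories = copy.deepcopy(primary.memories)

original = MindInstance("you", ["childhood", "conference-2014"])
copies = spawn_copies(original, 2)

# All start identical...
assert all(c.memories == original.memories for c in copies)

# ...then diverge through independent experience...
copies[0].experience("mars-trip")
copies[1].experience("deep-sea-lab")
assert copies[0].memories != copies[1].memories

# ...unless a choice is made to re-sync everyone from one primary.
sync_from_primary(original, copies)
assert all(c.memories == original.memories for c in copies)
```

The sketch makes the philosophical point concrete: nothing in the data distinguishes the “original” from the copies at spawn time; only subsequent, unshared experience does.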


If asked, all selves will demonstrate the same will to live and the same stubborn, ego-driven insistence that they are YOU, every bit as much as YOU do.


Regardless of how many paradox-laden feelings and intuitions we may have to the contrary, at the time of SUCCESSFUL uploading, the original and ALL copies possess equivalent YOU-NESS. There is no logical basis for considering the bio-self to have more PERSONHOOD-STATUS than the uploads. Only a preexisting bias against poly-being-ness would cause you to hold that view.


New conceptions of “individual” and property rights and many other legal and social “entanglements” will need to be resolved in order to deal with the novel implications of this “expanded identity-matrix.” 


Instead of the motto on the Seal of the United States: E Pluribus Unum – “Out of many, one.”


The motto of the mind-uploading era could be: E Unum Pluribus – “Out of one, many.”


My new website is up!

This is the website for the book (in progress), At Any Cost: A Guide to The Transhumanist Wager and the Ideas of Zoltan Istvan, by Chris T. Armstrong. I will be posting essays on various topics related to my book, Zoltan’s book, and his ideas that come up during the writing process.


Hangout on Air: Futurists discuss The Transhumanist Wager, with Zoltan Istvan


I’m on the panel of this online meetup with the London Futurists this Sunday, October 20. Check it out and submit questions…

Is It Time for a Transhumanist Olympics?


Great News from Zoltan Istvan:

“Friends, earlier this week Arianna Huffington welcomed me to bring my voice to The Huffington Post. I will be regularly writing about transhumanism, life extension, philosophy, art, and other exciting topics. My first story is below. Please share it with your friends and peers. Thanks!”
