Continuity of Consciousness

…all things Transhuman and Singularitarian…

Jethro Knights: DIY Omnipotender — Tale of a Self-Made Superman by Chris T. Armstrong

(Note: Below is the full article from which I excerpted for my speech given at the Transhuman Visions Conference in San Francisco, produced by Hank Pellissier, Feb. 1st, 2014.)

 

“What is great in man is that he is a bridge and not an end.

Man is something that shall be overcome.

The time has come for man to set himself a goal.

The time has come for man to plant the seed of his highest hope.”

— Friedrich Nietzsche 

“We are at the cusp of incredible things. It’s time to wake up and embrace it. 

We didn’t evolve through billions of years to remain animals. 

I believe that the biology has to go. Essentially, we’re living in a flesh coffin. 

I think we need to get down to the point when we’re basically pure data in machines.”

— Zoltan Istvan

Toward the Omnipotender

Jethro Knights is the protagonist of the philosophical novel The Transhumanist Wager, by Zoltan Istvan. Knights is a transhumanist (“transhuman” means “beyond human”): someone who wants to transcend his biological limitations through the use of advanced science and technology, including radical life extension leading to an eventual indefinite lifespan. His approach to transhumanism is to fashion himself into an omnipotender: “one who contends for omnipotence” (80)…an “elite transhuman champion…the ideal and zenith of the life extension and human enhancement populace…This omnipotender is an unyielding individual whose central aim is to contend for as much power and advancement as he [can] achieve, and whose immediate goal is to transcend his human biological limitations in order to reach a permanent sentience.” (33)

Jethro’s quest for power is the result of his most fundamental guiding principle: 

“Death must be conquered. From now on, that is my first and foremost aim in life. That is the quintessential first goal of the transhumanist.”(19) He views power as an essential tool to be used, as Istvan has described it: “…to preserve his life, security, and goals indefinitely…In order to be guaranteed to be able to protect himself, he really needs to be ‘all’ powerful.” A true omnipotender wants “a universal dictatorship—or at least a draw—over everything and everyone.” (80)

In service of this goal, Jethro creates the Three Laws of Transhumanism.

1) A transhumanist must safeguard one’s own existence above all else. 

2) A transhumanist must strive to achieve omnipotence as expediently as possible—so long as one’s actions do not conflict with the First Law. 

3) A transhumanist must safeguard value in the universe—so long as one’s actions do not conflict with the First and Second Laws. 

The extreme nature of these laws will lead to extreme actions by anyone who follows them to the letter when pushed into a situation that calls for difficult choices. These laws leave no room for subtlety, equivocation, or nuanced exceptions arising from extenuating complications. Their power comes from their black-and-white simplicity. In fact, the first law is really all Jethro needs, if followed without fail. The second law is merely a potent mechanism in support of the first, while the third supports the preservation and accumulation of like-minded people, technologies, and ideas that contribute positively to the aims of transhumanism. But make NO mistake: absolutely EVERYTHING is subordinate to the preservation of Jethro’s life, including the lives of any and ALL other beings, should they present a threat to it. If some improbable scenario ever forced him to extinguish every other living being on earth, or elsewhere, in order to save his life, and it was within his power to do so, the first law would compel Jethro to do it.

“The world and every one of its inhabitants [are] not worth living or dying for.” (53)

“There [is] no right and wrong when it [comes] to dying or not dying. There [is] only success or failure.” (53)

For Jethro, there is no question about whether “the ends justify the means.” Rather, the means are only “justified,” valued and implemented according to his own transhumanism-oriented “utility function”: whether or not they contribute to the expedient realization of transhumanist goals in general and the preservation of Jethro’s life in particular.

Yes, Jethro’s first law leads to harsh actions that would be unacceptable to Transhumanists who focus on the “humanist” part of the concept, but one thing is undeniable: people who have some line they refuse to cross in defending their lives will necessarily be less successful at preserving them than Jethro, who will protect his life at ANY cost.

One of Carl Sagan’s most often mentioned lines from his TV series Cosmos, “Extraordinary claims require extraordinary evidence,” could be paraphrased by Jethro Knights as:

Extraordinary aims require extraordinary expedience. 

Any other, non-life-or-death goal a human being can aspire to is something they can strive for with all their energy; if they fall short, they will suffer disappointment but may have the option to get back on track and try again, depending on the nature of the goal and the amount of life/health-span they have ahead of them. For nearly all goals, a failure to reach them can be seen as a temporary setback: an opportunity to regroup, begin again, or pick up at the point of failure and attempt to finally accomplish the quest.

But what happens when the goal chosen is to avoid death at any cost, when a failure to reach that goal means the end…period? No regrouping for another attempt. No second chances. Failure to achieve immortality, or even a radically extended lifespan, is fatal and final. Game over. For anyone seriously committed to such a goal, there can be no such thing as “moderation”; no resigned acceptance of defeat; no room to allow ANYTHING to take precedence over the accomplishment of this goal. In Jethro’s mind, he is engaged in a classic zero-sum, success-failure enterprise. No middle ground is desired, tolerated, or even POSSIBLE. At any given time, you are either dead or alive. Period. “There [is] only success or failure.”

The extreme nature of a goal that is still, at the time it is committed to, quite literally IMPOSSIBLE requires a level of commitment and downright zealotry far beyond anything required to achieve lesser goals. Prior to the future achievement of indefinite lifespans, all possible goals a human being could aspire to have existed within the context of a finite and relatively short timeframe within which to accomplish them. A finite lifespan has always been our fundamental physical AND psychological constraint, and anyone attempting to do battle with this inviolate limitation could quite reasonably be said to be “out of touch with reality.”

Transhumanist Morality

According to Jethro, Transhumanist morality is “defined and decided by the amount of time we have left to live.” (85)

The only thing keeping Jethro’s goal from being dismissed as nothing more than Quixotic madness is his informed and calculated vision of paradigm-overturning breakthroughs that can be achieved within a couple of decades, given the proper conditions. With a concentrated research program made up of the world’s best transhumanist scientists all working together, Jethro determines that the beginnings of significant life-extending developments can occur in “[e]ight to twelve years, with enough funding. More years of experiments afterward to eliminate perils, unwanted side effects, and dead ends. In less than two decades, however, we could be at the doorstep of a reasonably waged, ongoing sentience.” (157)

Because Jethro sees his goal of achieving an indefinite lifespan as possible and within reach in his lifetime, he views all efforts to stop, or even delay, the requisite research and development of life extension technologies as a form of life threatening aggression: 

“Every time someone gets in the way of life extension and human enhancement goals, every time a new anti-transhumanist law is passed, every time you decide to hinder scientific progress, you are knowingly shortening the lives and productive working hours of transhumanists and your own citizens. We have a specific legal term for that type of behavior in this country. It’s called manslaughter.” (28)

The Effect of Jethro’s Youth On His Developing Philosophy

An often-overlooked factor to consider when encountering Jethro Knights’ philosophy is his age at the time he begins formulating it. He was an undergraduate student of philosophy in his early 20s when he wrote his final essay, Rise of the Transhuman Citizen. Even though Jethro “knew he was still young and had much to learn” (33) and his philosophy “still needed much development” (38), he viewed it as something that could inspire any transhumanists who had become complacent and inert due to the unsupportive, and increasingly aggressive, anti-transhumanist policies of most of the world’s governments. Knights wanted nothing less than to inspire a revolution of transhumanist thought and action. “He wanted his philosophy to convince transhumanists of their moral right and obligation to rapidly push their ambitions forward, regardless of cultural headwinds or religious interference.” (33)

We can see his youthful enthusiasm, zeal and single-mindedness when he speaks of “using whatever means necessary to accomplish [his] aims” (53) and a steadfast refusal to allow any “compromise of core transhuman ideals.” (181) For Jethro, his pure philosophy “is what it is and, like mathematics, can never be altered or compromised.” (183)   

There are also times when his thinking goes quite a bit beyond “youthful enthusiasm, zeal and single-mindedness,” as when he writes of something with the chilling title The Humanicide Formula, which is to be used to determine “whether an individual should live or die based on an algorithm measuring transhuman productivity in terms of that individual’s remaining life hours, their resource consumption in a finite system, and their past, present, and potential future contributions.” (215)

The Humanicide Formula could be viewed as Jethro’s conception of his “dictatorship over the universe,” wherein he has complete control over all of those who might hinder his progress toward immortality and other transhuman goals. Any who are not going along with the program to a sufficient degree, in this hypothetical scenario, will be eliminated. It’s as simple as that. 
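The novel specifies only the formula’s inputs, not its mechanics. Purely to illustrate the kind of calculation being described, here is a minimal sketch in Python; every function name, the functional form, and the threshold below are invented for this sketch and are not taken from the book:

```python
def humanicide_score(remaining_life_hours: float,
                     resource_consumption: float,
                     past_contribution: float,
                     present_contribution: float,
                     future_contribution_rate: float) -> float:
    """Hypothetical: net expected transhuman contribution per unit of
    resources consumed, over an individual's remaining life hours.
    All names and the formula itself are illustrative inventions."""
    if resource_consumption <= 0:
        raise ValueError("resource consumption must be positive in a finite system")
    total_contribution = (past_contribution
                          + present_contribution
                          + future_contribution_rate * remaining_life_hours)
    return total_contribution / resource_consumption

# A hypothetical cutoff separating "contributes to the program" from "does not".
THRESHOLD = 1.0

def verdict(score: float) -> str:
    # The formula's stated purpose: decide "whether an individual should live or die."
    return "live" if score >= THRESHOLD else "die"
```

The point of the sketch is only to make vivid what the text asserts: once such an algorithm exists, the life-or-death decision reduces to a single comparison against a number.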

In a strong field of scary statements, writings, and thoughts from Jethro Knights that could “wither the blubber off a Bishop” and curdle the blood of any gentle Humanist, “meek and mild,” The Humanicide Formula is the clear frontrunner for its sheer visceral impact on the human psyche. However, we can see that this is nothing more than an intellectual exercise in exploring the extremes of where his principles could ultimately lead since, even when Jethro attains power over the entire world, he does not set up any such system as outlined in his Humanicide Formula conception. On the contrary, there are several times when he demonstrates concern and compassion for innocent life while at war with the aggressive governments of the world.

Some critics have pointed to Jethro’s most aggressive statements, “borderline” personality traits, and harshest philosophical musings, arguing that they comport quite well with the kinds of mental states displayed by people with narcissistic, sociopathic, and even psychopathic disorders.

The problem with this simplistic and shortsighted conclusion is that it ignores many examples of his traits and actions that serve to disconfirm these kinds of diagnoses. Unlike people afflicted with the disorders mentioned above, he is empathetic even toward his enemies and is able to experience a powerful loving relationship with Zoe as well as strong bonds and fully-functional relationships with his closest compatriots. 

Beyond Good, to Evil?

What then are we to make of such an extreme and unapologetic totalitarian formulation? 

Perhaps the most reasonable interpretation is that Jethro is exploring the boundaries of the omnipotender concept. This is the kind of scenario that could follow from an unyielding commitment to become all-powerful combined with a vow to defend one’s life at any cost.

The genesis of the omnipotender concept comes from Jethro’s chosen self-image as a being that is already in the process of transcending his status as a human being. 

As Istvan has explained:

“The Humanicide Formula is the core part of the story and philosophy. One must be able to do “whatever” it takes to preserve one’s immortality, including pursuing the extinction of every other advanced living entity on Earth by one’s own hand if they don’t contribute to the overall gain and threaten one’s ascent to becoming the omnipotender. 

This is why I have stated in public that I cannot go as far as Jethro. He may be right in his dark logic, he may be philosophically correct, but it is too blatantly inhuman for me personally. But it is a perfect and logical moral code for someone who is aspiring to be an omnipotender–who wants to end up as God. Even Jethro in the story may not be able to do it. However, he understands the logic of the philosophy. It’s indisputable. And he’s striving to think purely like a machine. It’s worth discussing and writing about on Transhumania. They do not shy away from dangerous, difficult ideas–even awful ones.

Furthermore, when we design AI, this will be the way a being with no sense of inborn morality or understanding of murder will be like. My novel is a bridge to understanding how dangerous AI can become.

As I’ve stated many times, The Transhumanist Wager can be interpreted as a source of inspiration, and as a warning. I take it as both, myself.

Finally, the Humanicide Formula is also a literary device. Many books have used such devices. The classic one is Noah’s Ark in the Bible.

…[R]emember, we are not discussing a perfect human being, we are discussing someone whose final aim is all power over everything. Don’t see Jethro only as a human, he is an evolving, amassing point of organized energy in a universe spanning billions of light years. 

You must try to think how God (should something like that exist) would think.”

As Istvan said in a podcast interview, “Jethro is not the best humanist, but he may be the best transhumanist.”

At this point, it should be obvious that the most important fact about Jethro Knights is: 

HE IS NOT LIKE US. 

I’ll presume to speak broadly about Transhumanists. We self-identify as humans. Humans who want to extend our capabilities and transcend our limitations but who are still undeniably human. 

Transhuman, Reprogram Thyself

The discrepancy between Jethro’s most threatening statements and his much more humane actions once he attains world power can be explained by the specific psychological device he has been employing since he began putting his budding philosophy into practice. The device could be thought of as a form of psychological self-programming or reorientation of his entire human psyche designed to put himself into a state wherein he views himself as a being who has already transcended his humanity. He is doing all he can to move himself toward his idealized vision of a much more powerful, durable, and far superior being relative to the most advanced humans alive at the time. 

He already views himself as beyond human and explores this new moral landscape stripped of any human biological imperatives and “mammalian niceties.” In his most extreme moments, his values are no longer commensurate with a humanistic, bio-centric sensibility.

He is constantly exploring the limits of how far beyond his human roots he is willing to go in his quest to become an omnipotender. 

And Zoltan Istvan himself is not of the variety of transhumanist who imagines himself living 1,000+ years while assuming he will remain human, or even biological.

“At some point we will probably discover other entities to become…sub-atomic particles, pure energy, all sorts of cool nanotechnologies that will exist, but I don’t believe that biology is going to make it. I think it’s fragile. I think it’s crude. It’s also beautiful…for where we are now, the human being is also a magnificent creature. But in a hundred years, we’re going to look back and say: Wow, the human being was such a fragile entity.” — Zoltan Istvan

We can see a clear example of Jethro’s self-programming when he describes a set of “meditations” that he “reads and considers every day” because he deems them to be “essential” precepts that will help him “to get what [he] wants out of life.” In these meditations, he writes of always using a “statistical analysis of value”; following the “best, most logical path”; form following function; avoiding “slavery to emotions (or anything else)”; not being “fundamentally one with the Earth, its people, or its multitudes of life”; not being a “beholden spawn or child of the universe”; being “alone and distinct”; shunning “any sense of social pride from others or in [himself]”; learning from his errors and making “fewer and fewer mistakes as the years pass”; “always focussing on long-term growth patterns…and not necessarily [on] the immediate moment, which may reveal little of reality or [his] ultimate destiny”; “zero tolerance for betraying [his] ambitions and quests…the universe and one’s existence can offer no forgiveness for failed opportunity”; and completely eschewing love: “[he] is self-sufficient, not needing anything or anyone else.” (69-70)

With these self-imposed commandments, Knights girds himself against any internal weaknesses he might fall prey to should he let his “eternal vigilance” falter, and reinforces the kinds of strengths required to approach the omnipotence needed to “insure” an indefinite “ongoing sentience.” While his “rules” are nothing if not rigid and unwavering, he does avoid the kind of rigidity that inevitably renders unquestioned religious commandments brittle and outdated, by occasionally adding to them or even rewriting them.

Old-timey software engineers like myself may recognize Jethro’s self-programming methodology as akin to the 20th-century computer programming paradigm known as “top-down design through stepwise refinement,” which is quite appropriate, metaphorically, considering Knights’ desire to explore and even attain a “cold precisionlike morality” (33) and a “harsh machinelike objectivity.” (12)
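For readers who never met that paradigm: you state the whole task first as a single high-level routine composed of named steps, then refine each step into its own definition until only primitives remain. A toy sketch in Python, with a domain invented here to echo Jethro’s habit of periodically revising his meditations (all names are hypothetical):

```python
def review_meditations(meditations):
    """Top level: the entire task, expressed only in terms of named steps."""
    return [reaffirm(revise(m)) if needs_revision(m) else reaffirm(m)
            for m in meditations]

def needs_revision(meditation):
    # Refinement 1: a rule is stale if it has been marked outdated.
    return meditation.endswith("(outdated)")

def revise(meditation):
    # Refinement 2: rewrite the rule rather than discard it.
    return meditation.replace("(outdated)", "(revised)")

def reaffirm(meditation):
    # Refinement 3: the remaining primitive action; here, simply keep the rule.
    return meditation
```

Each level is complete and testable before the next one is written, which is the “stepwise” part of the method, and the shape of the program mirrors the shape of the plan.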

A Superman Encounters His Kryptonite?

“There is always some madness in love.” — Friedrich Nietzsche 

Throughout the novel, Jethro is constantly contemplating the extremes he may need to go to in order to reach his ideals. There is one challenge, however, that gives him the most difficulty in adhering to his Spartan, super-hero, warrior code. This challenge comes in the form of a diminutive, beguiling, and unnerving temptress by the name of Zoe Bach.

Zoe’s Asian-influenced spirituality and her ability to comfortably embrace contradiction and paradox serve as a counterbalance to Jethro’s no-nonsense, pragmatic functionalism. She is the yin to Jethro’s yang. She challenges his basic assumptions about the universe and life, and he is willing to consider her often diametrically opposed points of view to an extent he might not otherwise, had they come from anyone other than his beloved soulmate, Zoe Bach.

Jethro had written his set of “meditations” prior to meeting Zoe. The final one of the set was this:

“An omnipotender doesn’t fall in love. I will fail to achieve my goals if I lose myself in another, live for another, or place my happiness and aspirations in another. I am self-sufficient, not needing anything or anyone else.” (70)

All of which stands in stark contrast to his eventual surrender to Zoe’s love, causing her to exclaim: “Wow, you must be really smitten with me. The man whose most important goal in life is to achieve immortality has fallen in love with someone who doesn’t believe there’s a need to do that.” (55)

But his inner battle between his growing feelings for Zoe and his Man-as-Island Transhumanist ideals begins when he realizes what a like-minded and philosophical “fellow traveller” Zoe is, even while recognizing that she has many confounding mystical and extra-scientific views about how the universe operates and her place therein.

“His heart was a puzzle. The conflict in him, between her Zenlike acceptance of the universe and his aggressive, egocentric views on transhumanism, was growing. A dangerous tempest was gathering in their future. He knew it. She knew it.” (61)

Eventually, he could no longer deny that he had fallen in love with Zoe. “He was in love with her fiercely incisive mind. Her body that perfectly fit his. Her iron work ethic and competence as a surgeon. Her faith in destiny taking care of itself. Her dark, death-wish soul aspiring for passion and life.” (65)

However, not long after this conflicted relationship had begun, Jethro decided, with great difficulty, that they needed to be apart while he established his Transhumanist movement. Only after this was accomplished did he feel he could resume his relationship with Zoe.

“Now that Transhuman Citizen and TEF [his organization and philosophy] were established and operational, Jethro knew it was time to make contact with Zoe. He was still wary of what she made him feel inside, how loyal he felt to her—the overriding instinct of love and bonding that often seemed in total conflict with TEF. But he could accept it now. He was ready to take that chance.” (99)

When they were finally reunited, Jethro proclaimed to Zoe: “I love you. I’ve always loved you. I haven’t reconciled anything. Everything is still at odds. But I can accept it now and still pursue my transhuman dreams.” (111)

Ultimately, Jethro did not find the “madness of love” and quest-undermining distraction he feared would be the unavoidable result of loving and “losing himself” in Zoe. Their deep connection continued to grow and will, as Istvan has foreshadowed in the book as well as in interviews, even transcend a single finite lifetime.

Embrace Seeming Contradictions and Shun Rigid Ideological Purity

A complete presentation of Jethro’s philosophy, TEF (Teleological Egocentric Functionalism), is far beyond the scope of this essay, but a short exploration of some aspects of the Functionalism portion of the concept will help to flesh out some important facets of Jethro’s character and also provide some indispensable clues to Zoltan Istvan’s unique philosophical ethos.

It has not been unusual for critics and interviewers to point out seeming contradictions in Jethro’s ideology and actions. At times he seems very libertarian, while at others he is decidedly non- or even anti-libertarian. On one hand he promotes great autonomy; on the other he is quite dictatorial. For those accustomed to very linear, logical systems of ideology, this propensity of Jethro’s is quite off-putting, and to the harshest critics it is indicative of a muddled, disorganized, amateurish mind guided by an impoverished philosophy.

But consider another interpretation…

As Zoltan explains in a podcast interview:

“When you are taking a truly utilitarian approach, there’s no such thing as “double-think.” Every single time you have a dilemma or a question, you ask yourself: What is the most functional perspective to take at that moment, given the long-term goals that you’re after?…You just simply always choose the quickest and most expedient method that would lead to the most transhumanist inspired world that you can find…I realize that’s an idealist perspective. It’s probably very difficult to ever achieve such a thing. However, as a philosopher, these are the ideas that I want to bring forth, because it gets people to think: Well, maybe these are improvements that we can make.” 

In another context, Zoltan has said:

“Think of it in terms of Walt Whitman: ‘Do I contradict myself? Very well then I contradict myself, (I am large, I contain multitudes.)’”

This philosophy is eclectic, functional, and pragmatic rather than faithfully ideological. Just as Zoe Bach is completely comfortable with contradiction and paradox, the TEF philosophy is not compelled to have an “either/or” worldview. A “both/and” approach may be more suitable to its ends…until some point when even that is no longer “functional” and something else will be chosen that will get the job done, with no fear of being labelled “contradictory.” 

“Of course I’ve contradicted myself. I always do. Only cretins and logicians don’t contradict themselves. And in their consistency, they contradict life.” — Tom Robbins

Conclusion: Beyond “mammalian niceties”

Zoltan Istvan:

“I tended to write the story from the perspective of a simple question: How far would one man go to achieve his immortality? 

You must understand that The Transhumanist Wager is a bridge. AI is coming. Merge with the future powers or be destroyed. It is evolution. And a machine’s moral system is like nothing we know. 

Can I kill my wife a thousand times? [This absurd scenario is put to Jethro in the book] Can I kill every person on the planet? Do I want to? Am I supportive of humanicide? The obvious answers are: No.

But ask that same question to an AI. And its answer will be a very resounding: Yes.

To survive, we must be stronger than AI.

These are some of the reasons why I wrote The Transhumanist Wager the way I did. 

The Transhumanist Wager is a message from the future: 

If you don’t lose the weakness of your species, your species will not survive. You must embrace a new you–a fiercer, bolder you. Otherwise you will be no match for your own inventions.”

And finally, I’ll leave you with the very last sentence of The Transhumanist Wager: 

“This is just the beginning of Jethro Knights.”

Indefinite Lifespan and Risk Aversion: A Short-Lived Problem by Chris T. Armstrong

(Note: This is an expanded version of the previously posted article below called, Risk Aversion in Transhumanism. I made the article more generally focussed, as opposed to Transhumanist Wager focussed, for online publication.)

There was an interesting panel discussion at the Transhuman Visions Conference in San Francisco, February 1, 2014, which got even interesting-er when the following question was posed:

“If you knew you could live for 1,000 years or more, would you possibly become so risk-averse that you may be afraid to do anything that is even remotely dangerous and consequently live a long, but very insular and inhibited life?”

The responses to this question were a bit disappointing to me, since I strive to keep my natural, human, status-quo, meat-bag biases in check when thinking about such trans/post-human questions. The people on the panel, and everyone I heard discussing it outside after the event had concluded, approached this question with the assumption that they would have a greatly lengthened life/health-span but would still be biological beings with much of our current existential vulnerability. This led me immediately to the thought:

Well, if you plan on living 1,000+ years and insist on, or assume you’ll be, spending the whole time in something like your current hyper-fragile biological body that can be uber-easily destroyed, you jolly well BETTER be risk averse!

As Zoltan Istvan so eloquently put it in a podcast interview:

“At some point we will probably discover other entities to become…sub-atomic particles, pure energy, all sorts of cool nanotechnologies that will exist, but I don’t believe that biology is going to make it. I think it’s fragile. I think it’s crude. It’s also beautiful…for where we are now, the human being is also a magnificent creature. But in a hundred years, we’re going to look back and say: Wow, the human being was such a fragile entity.”

Whence Risk Aversion?

For the vast majority of people today, who have accepted the inevitability of their death in a few decades or years (and are sometimes derogatorily referred to as “deathists” by transhumanists), death is not nearly as great a loss to them as it is for a transhumanist, who believes that death may soon be conquered. A deathist sees no other option than to live a few decades and then surrender to the inescapable and obligatory “dying of the light.” For them, it’s the natural way of things. Some even see it as an almost noble act: to get out of the way for the next generation.

If a deathist also has a belief in a guaranteed spiritual afterlife, their concern over an end to this temporary and relatively insignificant and flawed corporeal existence is diminished even further. A deathist who is a “believer” has a few years of “inferior” earthly existence to lose and an eternity of blissful existence to look forward to, so it is much easier for them to find a few other people or causes that they may value more than the preservation of their own life. In stark contrast to the deathist’s submission to the inevitable, a transhumanist believes s/he will miss out on a chance for physical immortality, which may be the only kind possible, if they fail to preserve their life “above all else.” The stakes couldn’t be higher in the transhumanist hierarchy of values.

It is often pointed out that this could lead to an extreme risk-aversion in anyone who has achieved, or thinks they can eventually achieve, a greatly increased lifespan or physical immortality.

In the character of Jethro Knights, the protagonist of the philosophical novel The Transhumanist Wager, by Zoltan Istvan, we can see a gradual evolution from a natural concern for his physical safety to a state wherein he can put almost any fear of death aside. This eventual indifference toward danger has its roots in an incident in which he steps on a landmine that, luckily, turns out to be a dud and doesn’t explode. This becomes a life-altering event for Jethro.

“A single moment that transformed his youthful transhuman outlook into a physical law of its own…thinking solemnly and with full focus: What happened today is unacceptable. Death must be conquered. From now on, that is my first and foremost aim in life. That is the quintessential first goal of the transhumanist.” (19) Because of this experience, he “felt more strongly than ever that he was going to dedicate all his life’s energy to this movement’s success.” (23)

One key factor enables him to take more calculated risks than one might expect from someone whose “quintessential first goal” is to stay alive long enough to “conquer death” via science and technology: that very science and technology mitigates the quite natural tendency to avoid any life-threatening risks and allows him to act boldly in the face of physical danger. For example, once Jethro’s medical technology reaches an extreme level of efficacy, his aversion to physical risk plummets to near zero: “Don’t you know you don’t have enough bullets in that gun to kill me? A surgery center ten times more advanced than anything on this continent is on that aircraft up there. My rescue team will be here in less than sixty seconds.” (250)

Throughout the book, Jethro is trying to ensure the indefinite survival of his bio-body and to achieve, eventually, a continuity of his consciousness across biological and synthetic substrates by uploading his entire personal-essence, or mind-file, or consciousness-pattern into another processing-medium and synth-body or virtual-space.

“In all his studies on transhumanism, Jethro Knights decided he connected most with the work of Dr. Nathan Cohen. The scientist’s experiments, to combine brain neurons to the hardwiring of computers in order to download human consciousness, seemed the most sensible and important direction for the immortality quest. While getting the human body to live longer was a priority, it was not a long-term solution. Jethro already assumed that the human body, at least as it was, would only be around for another half century in its current form. Dr. Cohen’s work was where the real evolutionary jumps could be made. Conscious computerized machines and their digital content, with proper maintenance, could last indefinitely. They were so much more durable than flesh. But this thinking was exactly the most radical as well. Because eventually, perhaps sooner than even many transhumanists would have it, there would be no need left at all for the human body.” (72)

Especially for people encountering this idea for the first time, fear and repulsion are often the automatic responses to such a prospect. People who express an aversion to having their consciousness uploaded to a non-bio “whole body prosthetic” often conceive of it as something similar to today’s “machines”…they think of a laptop or some other extremely limited device. However, the transhumanists who think seriously about replacing their bodies envision a greatly enhanced sensory experience, not one that is less sensorially nuanced than our current bio-bodies provide. That would be a totally unacceptable step backward.

Genetic modification and other biotechnologies will provide great enhancements and, admittedly, we may be able to make some very transformative changes to our biology before we have the knowledge and technological ability to upload ourselves to a complete synth-body. However, many transhumanists consider even brilliant and heroic bio-tweaking as suboptimal attempts to retrofit a body that evolved to earth-centric specs, with all the inherent limitations that entails. The most forward-thinking transhumanists tend to view a complete synthetic re-design to non-earth-centric specs as a surer way to protect our consciousness from a fatal, game-over damage “event.” From this perspective, if our consciousness is still trapped in one uber-fragile meat-machine that can be easily and fatally damaged by nothing more technologically advanced than a large rock or stick, we’re still living in what is essentially a “flesh-coffin,” as Zoltan Istvan has described it. Transhumanists tend to envision a body with greatly enhanced sensory apparatus — all senses extending to ranges and resolutions beyond anything that any bio-being naturally evolved to experience — that is INSANELY durable and capable of exploring the universe far beyond the confines of this delicate biosphere on this one little dust-speck we call “Earth.”

In addition to the advantages of increased durability, once our consciousness has been uploaded into a synthetic processing-medium, the possibility of upgrading our minds with the latest advances in artificial intelligence will enhance our intellects, memories, and processing power to levels never dreamed of by the humble “requirements” of natural selection.

The Roots of Fear of Transhumanism

Many of the most fearful objections to transhumanism are actually rooted in misconceptions about what is meant by terms like “machine” and in intuitions about the inherent fragility of our current bodies. It is our current biological state that is highly vulnerable to all manner of life-threatening disease and injury and requires uninterrupted supplies of water, oxygen, and carbon-based food. By providing a mechanism by which each person’s consciousness can be set free from the limitations of biology, can be protected by much sturdier platforms, and, most importantly, can be backed up and restored if any particular synth-body/consciousness instantiation of us is destroyed, transhumanists will alleviate many of the typical bio-being fears of existential threats.

This possibility arises from the way our relationship with a mere “object” changes once that object can be fully specified as information. Once any kind of “content,” whether from a photo, video, audio, or text file, has been captured in copiable digital form, several aspects of its nature as a unique object, and of its perceived value, change in fundamental ways.

In digital form, the content on these various media becomes information, or data, and is thereby infinitely reproducible, which creates abundance where once there was scarcity. More importantly, the nature of ownership and property is altered in at least one transformative way. In the past, if someone stole something from you that existed only in physical form, you suffered a loss of property: you lost ownership, or at least possession, of that object at the time of the theft, and if the object was never returned, the loss was total and permanent. Once that same content exists in digital form, however, it becomes possible for a thief to “steal” ALL of your photos, videos, music, and ebooks by copying the files from your storage media to theirs, while you suffer no actual loss of property. In fact, you may be totally unaware that such a “robbery” ever occurred.

In this scenario, the only person to suffer a loss would be the copyright owner or content creator, and even this differs from the theft of non-digital property: the digital theft doesn’t deprive the rights-holder of a tangible object, but rather of compensation for a copy. Just as with the purchaser above, the rights-holder may not even know a theft has taken place, because there has been absolutely no loss of property and no loss of the rights-holder’s ability to continue selling digital copies of the content. This is not to say that no “crime” has been committed against either the purchaser of the media or the rights-holder, but rather that the EFFECT of the crime is much different in the case of unauthorized copying of digital content versus the more conventional theft of a physical object involving a “transfer of possession.”

Now, let’s look at a case of a vandal who destroys someone’s entire collection of music in the form of compact discs, vinyl, tapes, and even some antique phonograph cylinders, versus the destruction of a storage medium: a hard-drive containing all the same music, but in digital form. In both cases the destruction could be total, assuming that there was no backup of the hard drive kept in a safe location. Ah, but what if there were a backup? In the case of the physical media, all was lost, but in the case of the file-based media, the entire music collection is intact in the backup, as though no destruction had ever occurred. Yes, the original hard-drive was damaged in the vandalism, yet the entire collection of music remains safe in the form of a copy, resulting in a case of minor property damage to a hard-drive and nothing more.
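The hard-drive scenario above can be sketched directly in a few lines of Python. This is a minimal illustration, with invented file names and contents: a backup copy carries the identical byte pattern, so destroying the original storage medium destroys no information at all.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

# Work in a throwaway directory so the sketch is self-contained.
workdir = Path(tempfile.mkdtemp())

# The "music collection" as a digital file (contents are invented).
original = workdir / "collection.dat"
original.write_bytes(b"Beethoven, Coltrane, antique phonograph-cylinder rips ...")

# Fingerprint the information itself, independent of the medium holding it.
fingerprint = hashlib.sha256(original.read_bytes()).hexdigest()

# Making a backup copies the information; nothing is "transferred away".
backup = workdir / "backup.dat"
shutil.copy(original, backup)

# The "vandal" destroys the original storage medium.
original.unlink()

# Restoring from backup recovers a byte-identical collection.
restored = workdir / "restored.dat"
shutil.copy(backup, restored)
assert hashlib.sha256(restored.read_bytes()).hexdigest() == fingerprint
```

The final assertion holds because the hash depends only on the bytes, not on which drive they sit on: the vandalism amounts to minor property damage, nothing more.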

Now, what if we include in our list of content not only photos, videos, audio, and text, but also some currently more exotic things like DNA sequences, or perhaps even THE ENTIRE CONTENT OF ONE’S CONSCIOUSNESS? Does anything change in the above analysis? Compare the kidnapping of a person to the mere copying of a digital specification of that person’s consciousness from one storage medium to another. Compare the murder of a person to the destruction of a digital specification of that person’s consciousness on some storage medium. And what if there is a safe backup of that digital specification that can be restored to an artificial processing medium, revitalizing the person to full consciousness in a non-biological substrate?

The Cure for the Fear of Transhumanism

Our transition from biology-based entities to beings residing in human-engineered media provides the solution for some of the most common fears expressed about transhumanism and Singularity-related possibilities: the abuse of new and powerful technologies by malevolent people who would use them to control and enslave others in the service of their plans for world domination.

You may be familiar with…

Godwin’s Law - Given enough time, in any online discussion, someone inevitably makes a comparison to Hitler or the Nazis.

I would like to propose something I call, humbly…

Armstrong’s Law - Given enough time, in any discussion of massively transformative future technologies, someone inevitably expresses a fear that some rich megalomaniacal psychopath will abuse these technologies to enslave us all and rule, or even to destroy, the world.

Mind uploading could actually be a solution to issues related to any potential violations of an individual’s rights, as far as enslavement or murder are concerned. Once a person’s consciousness is in a form that can be moved to different platforms, backed up, and protected by various security features, all issues related to life- or liberty-threatening actions taken against him/her/it automatically become far less severe and are no longer guaranteed to be catastrophic, as they would be in the case of a conventional “murder” of a singular biological entity. When the destruction of the current consciousness-residing platform doesn’t automatically result in a guaranteed and permanent game-over condition for the “victim,” what power does a threat of violence, or even a successful act of platform-destroying damage, wield against a being whose consciousness may exist in multiple platforms or can be restored from an up-to-date backup? Would such an act be anything more serious than a property-damage incident?

Yes, there are many issues to work through: the simultaneous coexistence of multiple versions of ourselves; security concerns regarding the hijacking and enslavement of uploaded minds; failsafe mechanisms that would allow for the continual backing up, monitoring, and disabling of any mind-upload that fell victim to any form of piracy or coercion. There will be plenty of new challenges to be met, and plenty of new opportunities as well. One of the most important and world-transforming prospects will be the ability to safeguard our consciousness against the destruction of any one particular instantiation of it, in one particular platform, in one particular geographical location.

Given this kind of future scenario, we can see that virtually all fears regarding the abuse of future technologies and individuals’ loss of autonomy stem from our intuitive grasp of the inherent fragility and irreplaceability of our easily-destroyed bodies and the only “copy” of our consciousness housed therein.

The Primacy of Consciousness

Jethro Knights, in his manifesto, states: “If you love life, you will always strive to reach the most advanced form of yourself possible while protecting that life.” (179)

And he means it: “…we embark on the most critical journey of our lives, and embrace the quest to discover how far we can go as humans, as cyborgs, as conscious intelligent machines, as rays of light, as pure energy, as anything the future brings.” (85)

Essentially, he is proposing that matter is, ultimately, of secondary importance and consciousness is of primary importance. We are currently bound to, and limited by, material resources. We are bound to one material bio-body. Matter comprises the vehicle in which our consciousness evolved, but it is something that we can evolve beyond, or more precisely, we can engineer beyond.

Science may someday work out a way for our “consciousness-pattern” to remain coherent outside the confines of a physical brain or brain-inspired artifact and become freed from the limitations of many/most/all physical constraints. Once a way is found to de-couple mind from matter, then consciousness will no longer be primary; rather, it will be unitary. It will be THE WHOLE SHOW, and we can dispense with any material “box” to put it in. Hmm, that could remind one of the religious conception of a soul that is independent from matter.

This would bring us to a stage wherein we’ve gotten past most of our dependence on matter and live in a state of almost pure consciousness, liberated from most of the limitations of material existence. At which time, all our current matter-orientation/dependence and tribal, monkey-brained conflicts over material resources will be something we can someday look back on quizzically, as we now look back to a time, long ago, when we used to be tadpole-like creatures.

Reminiscent of the differences in world-view between Jethro and his much more mystically oriented love interest, Zoe Bach, transhumanists and religious people still have a certain fundamental “essentia” in common and may not be so far from each other as far as general themes are concerned. Both are talking about a “soul” or “consciousness” that can transcend the death of the body. The difference is that transhumanists prefer to employ science and uber-technological-advancement to reach their goals and have little interest in embracing beliefs that can’t eventually be subjected to experiment.

Conclusion

Contrary to the common assumption that a personal goal of greatly extending one’s lifespan must lead to extreme risk-aversion, the opposite may hold. As technology advances to a stage where death has essentially been “cured,” and science works out how the essence of a being can be captured in its totality, transferred to other platforms, and kept in multiple backup copies, an individual’s destruction, or even the threat of it, need no longer be feared to anywhere near the extent that we currently fear serious damage, or a fatal, unrecoverable, game-over event, to our one-and-only hyper-delicate meat-bag platform.

Addendum

Undoubtedly, throughout the above discussion of mind-uploading, the copying of one’s consciousness, and multiple, equally “authentic” copies existing in parallel, a certain, perhaps large, number of people have been shouting, “No, those are only COPIES…‘philosophical zombies’ that may seem to be the ‘real’ person in every respect but decidedly ARE NOT! These copies are missing an essential x-factor that doesn’t arise solely from the operations of a physical brain or even an ‘exact’ copy of a brain, however incredible its level of detail, resolution, and precision may be.”

Well, now we are into the classic debate between dualism and monism, and we run up against the fundamental assumption or hypothesis underlying much of Transhumanism: a functionalist, physicalist, materialist, behaviorist perspective that views the brain as the sole organ and generator of the sum-total of our consciousness, personality, and identity. “Minds are what brains do.” So, if you can make an incredibly “exact” copy of a particular brain, EVERYTHING that makes you YOU, including your sense of identity, will reside in that “copy” as well, and there is no sense, assuming the copying has been done well enough, in which that copy, or a THOUSAND copies, will be any “less than” the original bio-version, in terms of being “really you” in every meaningful sense.

One angle on this kind of hypothesis can be found on the website of The Brain Preservation Foundation. http://www.brainpreservation.org/content/overcoming-objections#patternism

“The patternism hypothesis proposes that it is a special physical pattern, not the matter, or even the type of matter (computer or biological), that stores the highest level information in living systems. If the special pattern that stores this information can be successfully maintained, and copied as necessary, the information survives.”
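The patternism hypothesis lends itself to a small, admittedly toy, demonstration: the same byte pattern, reconstructed through three different routes, is indistinguishable as information regardless of how it was materially produced. The snippet below is my own illustration, not drawn from the Foundation’s materials.

```python
import hashlib

# A byte sequence standing in for the "special pattern" that stores
# the highest-level information (the content is an invented stand-in).
pattern = b"the-highest-level-information"

# The "same" pattern held after three different physical histories:
in_memory = bytes(pattern)                 # a plain copy
as_list = bytes([b for b in pattern])      # rebuilt from a list of integers
via_hex = bytes.fromhex(pattern.hex())     # round-tripped through a hex string

# All three routes yield one and the same pattern: a single fingerprint.
digests = {hashlib.sha256(x).hexdigest() for x in (in_memory, as_list, via_hex)}
assert len(digests) == 1
```

If the pattern is what matters, then maintaining or copying it, by any route, preserves the information; the particular matter that carried it at any moment drops out of the story.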

I have provided a link, below, to another article I wrote on this topic, which I hope might help people think beyond their automatic, VERY strong, and VERY natural intuitions about identity, and beyond the quite common aversion to the idea that their COMPLETE consciousness could be copied and uploaded to a non-biological medium, and that there could be more than one COMPLETELY valid version of themselves coexisting simultaneously.

But which one is ME? Letting go of our mono-being identity orientation: Poly-beings, here we come!

Below is the text of my speech given at the Transhuman Visions Conference in San Francisco, produced by Hank Pellissier Feb. 1st, 2014

 

Thank you to Hank Pellissier for inviting me to speak and for promoting me from bouncer to speaker…true story. 

And also for publishing my first Transhumanist article on Transhumanism and Powerlifting, of all things.

This is my first time speaking at a Transhumanist conference. So this is a historic day for me, and all of you will get to tell your grandchildren that “you were here” in San Francisco, appropriately, when Chris Armstrong “came out” and declared himself to be “openly Transhumanist.”

And now, on to the book, The Transhumanist Wager, by Zoltan Istvan

I’ll be excerpting from a larger article I’m writing, called:

Jethro Knights: DIY Omnipotender. Tale of a Self-Made Superman

When The Transhumanist Wager was published, I lurked for several months…watching reviews…some of which were scathing, some very positive…probably the most balanced was from Giulio Prisco. 

My own review was practically a love letter to the book and to Zoltan. 

I had very positive emotions about it even while recognizing the extreme ideas and actions of the protagonist, Jethro Knights…

Several of Jethro’s attitudes and personality traits reminded me of my own when I was young. 

I had that same youthful zeal and mono-focus on one goal. I didn’t have a quest anywhere near as grand and epic as Jethro’s. Mine was just to be the most amazing and skilled musician possible. 

Like Jethro, I put people in second place, after my sacred mission; told myself that romantic relationships would be unnecessary distractions and should take a backseat to the mission. 

And like Jethro, I didn’t always adhere to such a harsh and Spartan ideal in practice.

As I said in my review, Jethro’s words were far more “scary” than his actions. 

But why such scary words?

One of Carl Sagan’s most often mentioned lines from his TV series, Cosmos, “Extraordinary claims require extraordinary evidence” could be paraphrased by Jethro Knights as:

Extraordinary aims require extraordinary expedience. 

And what is this extraordinary aim?

Oh, nothing more or less than to defeat death!

Quoting Jethro:

“Death must be conquered. From now on, that is my first and foremost aim in life. That is the quintessential first goal of the transhumanist.”

He has chosen a goal that is, at the time he commits to it, literally impossible to achieve, but he believes it could be achieved, with a strong concerted effort by the best scientists, within a couple of decades. 

In order to increase his long term chances for survival he decides to fashion himself into an omnipotender: “one who contends for omnipotence.”

He needs power to push research forward and to defend against those who would try to interfere with the achievement of much needed scientific breakthroughs and even threaten his life directly. 

Again…

Extraordinary aims require extraordinary expedience. 

Any other, non-life-or-death-oriented goal a human being can aspire to is something that they can strive for with all their energy, and if they fall short they will suffer disappointment but may have the option to get back on track and try again, depending on the nature of the goal and the amount of life/health-span they have ahead of them. For nearly all goals, a failure to reach them can be seen as a temporary setback — an opportunity to regroup, begin again, or pick up at the point of the failure and attempt to finally accomplish the purpose of the quest.

But what happens when the goal chosen is to avoid death at any cost, when a failure to reach that goal means the end…period? No regrouping for another attempt. No second chances. Failure to achieve immortality, or even a radically extended lifespan, is fatal and final. Game over. For anyone seriously committed to such a goal, there can be no such thing as “moderation”; no resigned acceptance of defeat; no room to allow ANYTHING to take precedence over the accomplishment of this goal. In Jethro’s mind, he is engaged in a classic zero-sum, success-failure enterprise. No middle ground is desired, tolerated, or even POSSIBLE. At any given time, you are either dead or alive. Period.

The extreme nature of a goal that is still, at the time that it is committed to, quite literally IMPOSSIBLE, requires a level of commitment and downright zealotry that is far beyond the purview of lesser goals. Prior to the achievement of indefinite lifespans, all possible goals a human being could aspire to exist within the context of a finite and relatively short timeframe within which to accomplish them. A finite lifespan has always been our fundamental physical AND psychological constraint and anyone attempting to do battle with this inviolate limitation could quite reasonably be said to be “out of touch with reality.” 

According to Jethro, Transhumanist morality is “defined and decided by the amount of time we have left to live.”

At this point, we need to pause to understand the most important fact about Jethro Knights: HE IS NOT LIKE US. 

For a moment, I’ll presume to speak broadly about Transhumanists. We self-identify as humans. Humans who want to extend their capabilities and transcend their limitations but still undeniably human. 

In Jethro’s case, I propose that he is doing all he can to engage in a kind of psychological self-programming designed to put himself into a mental state wherein he has already transcended his humanity. It amounts to a self-guided reorientation of his entire human psyche toward his idealized vision of a much more powerful, durable, and far superior being relative to the most advanced humans currently alive.

He already views himself as beyond human and explores this new moral landscape stripped of any human biological imperatives and “mammalian niceties.” In his most extreme moments, his values are no longer commensurate with a humanistic, bio-centric sensibility.

He is constantly exploring the limits of how far beyond his human roots he is willing to go in his quest to become an omnipotender. 

Some critics have pointed to Jethro’s most aggressive statements, borderline personality traits, and harshest philosophical musings, which comport quite well with the kinds of mental states displayed by people with narcissistic, sociopathic, and even psychopathic disorders. 

The problem with this simplistic and shortsighted conclusion is that it ignores many examples of Jethro’s traits and actions that serve to disconfirm these kinds of diagnoses. 

The discrepancy between Jethro’s most aggressive statements and his much more humane actions and close emotional bonds with other people — even attaining full-on soulmate status with Zoe Bach — belies any attempts to paint him as a mere psycho and shows his constant struggle between his remaining human “frailties” and his cyborg superman ideal.

Now, I would like to leave you with something very special. It is a deeper look into what is behind this book and what may be coming in the further evolution of Jethro Knights. 

This is the scoop. The exclusive. Turn on your recorders. This is right from the horse’s mouth, Zoltan Istvan, via email:

“I tended to write the story from the perspective of a simple question: How far would one man go to achieve his immortality? 

But I wrote this for you the other day in one of my moments when I tend to jot down my ideas, unpolished as they may be. It’s something I haven’t spoken about much, but the ideas of it are also very important to me:

You must understand that The Transhumanist Wager is a bridge. AI is coming. Merge with the future powers or be destroyed. It is evolution. And a machine’s moral system is like nothing we know. 

Can I kill my wife a thousand times? Can I kill every person on the planet? Do I want to? Am I supportive of humanicide? The obvious answers are: No.

But ask that same question to an AI. And its answer will be a very resounding: Yes.

To survive, we must be stronger than AI.

These are some of the reasons why I wrote The Transhumanist Wager the way I did. 

The Transhumanist Wager is a message from the future. If you don’t lose the weakness of your species, your species will not survive. You must embrace a new you — a fiercer, bolder you. Otherwise you will be no match for your own inventions.”

I’m really looking forward to the continuation of this story. As the very last sentence of The Transhumanist Wager says: 

“This is just the beginning of Jethro Knights.”

 

My essay was re-published on the Institute for Ethics & Emerging Technologies site!

But which one is ME? Letting go of our mono-being identity orientation: Poly-beings, here we come!

A paper I wrote for an AI seminar at ASU in 1987 on neural modeling/connectionism/sub-symbolic computation.


Levels of Representation in Mind/Software and Brain/Computer

 

Chris Armstrong

Professor Nicholas V. Findler 

Minds, Brains and Programs CSC 591, ASU

19 November 1987 (reformatted and edited December 2002)


1. Introduction

Recently, in cognitive science circles, there has been a resurgence of interest in the architecture of the brain as inspiration for computer models of human “intelligent” behavior. Along with this interest, there are assumptions, beliefs and hopes that, with a new level of focus, there will come breakthroughs that will shed light on the mysteries of brain and mind. Have there been successes that could transform beliefs into testable hypotheses? Is mind solely a function of brain? If so, where does brain end and mind begin? What’s wrong with focusing solely on the conceptual level? These are some of the questions addressed, if not answered, in this paper, the catalyst being the article, “Minds, Brains and Programs” by John Searle.

 

1.1. Searle, Symbols and Syllogisms

We begin with Searle’s Chinese Room thought experiment: A person is locked in a room and given three batches of Chinese characters. The person is a fluent English speaker but understands no Chinese, neither written nor spoken. The subject is also given a large set of rules, written in English, which enable him to produce some Chinese characters in response to the characters given him. The people giving him the characters call the first batch “a script,” the second a “story,” the third “questions,” the rules in English “the program” and his responses “answers to the questions.” These designations are unknown to the subject but this doesn’t affect his ability to produce intelligent responses. In fact, the “answers to the questions are absolutely indistinguishable from those of native Chinese speakers” (Searle, 1980, p. 418). 

 

Stories in English are also presented and questions asked. For this, the subject is only to use his native language ability, common sense, etc. to answer the questions about the English stories. After a time, he gets so good at obeying “the programs” and the “programmers” get so good at writing them that “the answers to the Chinese questions and English questions are equally good” (Searle, 1980, p. 418). Searle suggests that, in the case of the English stories, there is real understanding going on (both with the subject and the people outside the room), while in the Chinese case the subject is simply following formal rules in order to manipulate meaningless (from the subject’s point of view) symbols. Thus, outside the room is the only place where there is true understanding occurring. 

Searle draws an analogy between the person manipulating the Chinese symbols and programs created by artificial intelligence (AI) researchers and concludes: “…in the literal sense, the programmed computer understands what [a] car and [an] adding machine understands, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.” And what’s more, “[t]he example shows that there could be two ‘systems,’ both of which pass the Turing test, but only one of which understands” (Searle, 1980, p. 419). 

The Turing test is a thought experiment created by the British mathematician, cryptologist and AI pioneer, Alan Turing. In Turing’s scenario, a person sits at a computer terminal and communicates with two other participants in the test exclusively through typed responses using the terminal. One of the other two participants is a human and the other is a computer program designed to possess some commonsense, real-world “knowledge” and to understand and produce English-language conversation via text. The question is: Can the first person tell which of the other two is human and which is the program? If the program can produce responses that are sufficiently human-like that it can pass for a human, the program is said to have “passed” the Turing test.  

 

1.3. SAM & PAM: A sham?

Searle posits two flavors of AI, namely: STRONG & weak. According to weak AI, computers and their programs are tools to aid cognitive science researchers in testing hypotheses about mind/brain. They make no claims that these programs think or understand. STRONG AI, on the other hand, claims, “the appropriately programmed computer really is a mind…and can be literally said to understand and have other cognitive states” (Searle, 1980, p. 417). Now let’s look at a specific example of two AI programs and some of the claims of their creators.

SAM & PAM are story “understanding” programs that were written at Yale University, under the direction of Roger Schank. These programs display their ability to comprehend simple stories by first reading them and then answering questions that sometimes require the use of logical inference and other human thought processes.

weak claims: 

• “…cognitive simulation” 

• “…they are all rather artificial” 

• “PAM…really doesn’t know what a story is about”

STRONG claims:

• “SAM has achieved a reasonable depth of comprehension”

• “…capable of making inferences”

• “…a reasonable level of understanding”

• “…fills in causal relations”

• “…PAM’s understanding of…motivation”

• “…PAM’s knowledge of goal relationships”

• “…PAM has made a prediction”

The primary “knowledge” structure is the Script, which consists of:

• Stereotypical or default sequences of events

• Exceptions and possible error situations 

• Both procedural and declarative descriptions (Schank & Riesbeck, 1981).
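The Script structure described above is concrete enough to sketch in code. The toy class below is my own invention, not the actual SAM/PAM representation; it captures a stereotyped event sequence plus exception handling, and shows the kind of default inference a script licenses when a story leaves steps implicit.

```python
from dataclasses import dataclass, field

@dataclass
class Script:
    """A toy Schank-style script: a stereotyped event sequence plus
    exceptions. All names here are illustrative, not SAM/PAM internals."""
    name: str
    events: list            # default/stereotypical sequence of events
    exceptions: dict = field(default_factory=dict)  # error situations

    def expect(self, after):
        """Declarative knowledge used procedurally: predict the next event."""
        i = self.events.index(after)
        return self.events[i + 1] if i + 1 < len(self.events) else "script complete"

# The classic example: the restaurant script.
restaurant = Script(
    name="restaurant",
    events=["enter", "be seated", "order", "eat", "pay", "leave"],
    exceptions={"food is cold": "send it back"},
)

# A SAM-like inference: a story may never say the diner ate, but the
# script supplies the default expectation after "order".
assert restaurant.expect("order") == "eat"
```

Both kinds of knowledge from the list above appear here: the event sequence is declarative data, while `expect` uses it procedurally to fill in causal gaps.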

According to Searle, the Chinese Room thought experiment shows that the claims of STRONG AI are invalid because “whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything. No reason whatever has been offered to suppose that such principles are necessary or even contributory, since no reason has been given to suppose that when I understand English I am operating with any formal program at all” (Searle, 1980, p. 418). The Chinese room, according to Searle, lacks an essential quality of human consciousness, namely, intentionality, which consists of “all of those mental states (whether conscious or unconscious) that refer to, or are about, the world apart from the mind.” 


1.4. Searle’s Reasoning Process in Syllogistic Form

Premise 1: Brains cause minds.

Premise 2: Syntax is not sufficient for semantics.

Premise 3: Computer programs are entirely defined by their formal, or syntactical, structure.

Premise 4: Minds have mental contents; specifically, they have semantic contents.

Conclusion 1: No computer program by itself is sufficient to give a system a mind. (This follows from premises 2, 3 and 4)

Conclusion 2: The way that brain functions cause minds cannot be solely in virtue of running a computer program. (Follows from conjoining the first premise with our first conclusion)

Conclusion 3: Anything else that caused minds would have to have causal powers at least equivalent to those of the brain. (A trivial consequence of premise 1)

Conclusion 4: For any artifact that we might build which had mental states equivalent to human mental states, the implementation of a computer program would not by itself be sufficient. Rather, the artifact would have to have powers equivalent to the powers of the human brain. (Follows from conclusions 1 and 3)

(Searle, 1984, pp. 39-41).
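For readers who like their syllogisms machine-checkable, here is one possible regimentation of the core step (premises 2-4 to conclusion 1) as a Lean sketch. The predicate names are mine, and combining premises 2 and 3 into the single hypothesis that a purely syntactic system has no semantics is an interpretive choice, not Searle's own formulation.

```lean
-- A hypothetical regimentation of part of Searle's syllogism.
variable {Thing : Type}
variable (PurelySyntactic HasSemantics Mind : Thing → Prop)

-- h23 (premises 2 & 3): a system exhausted by its syntax has no semantics.
-- h4  (premise 4): every mind has semantic contents.
-- Conclusion 1: nothing purely syntactic is a mind.
theorem conclusion1
    (h23 : ∀ x, PurelySyntactic x → ¬ HasSemantics x)
    (h4  : ∀ x, Mind x → HasSemantics x) :
    ∀ x, PurelySyntactic x → ¬ Mind x :=
  fun x hsyn hmind => h23 x hsyn (h4 x hmind)
```

The derivation is valid given the hypotheses; the philosophical dispute, of course, is over whether h23 is true of programs at all.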

  So we can see that Searle does believe that a manmade artifact could, in principle, be built that could justifiably be said to think, understand, have intentions, etc. “[A]ssuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours…if you can exactly duplicate the causes, you could duplicate the effects” (Searle, 1980, p. 422).

 

1.5. Critique 

Basically, Searle’s argument boils down to this: “[Y]ou can’t understand Chinese if you don’t know the meanings of any of the words in Chinese” (Searle, 1980, p. 454). Sounds true enough. You won’t catch me arguing with that. I don’t think the guy in the room understands Chinese. He doesn’t even have to know what Chinese is as long as he can follow the rules.

Let’s think about Searle’s rules:

1. Searle has essentially granted the whole Newell & Simon classical AI paradigm: that intelligent behavior, in this case in the form of perfect communication skills in a human language, can be produced by a formal system using rules to manipulate symbols. No actual AI program has ever achieved this level of ability. So, by posing his thought experiment in Newell & Simon terms, Searle is accepting, in principle, the validity of their model.

Any researcher working on computational linguistics would gladly sell his soul to Noam Chomsky for a set of rules that could pass the Turing test. Indeed, the reason Searle finds AI researchers so unmoved by his argument is that the scenario he describes would be the fulfillment of many of their aspirations. It would certainly cause the grant money to flow. Let’s face it: a lot of researchers just need results, and to that end, an engineering approach rather than a psychologically plausible one would be just fine. It is only because Searle has endowed the Chinese Room with such great ability, i.e., passing the Turing test by performing as well as human speakers of Chinese, that there is any philosophical dilemma. Thus, his argument degenerates into that of a compulsive nit-picker who complains that, although airplanes are useful inventions, they can’t really be said to fly because they don’t flap their wings.

Yes, I realize that the Turing test uses nothing other than “naked behaviorism” as the sole criterion for determining an artifact’s intelligence but, really, what else can we use for such an evaluation?
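The rule-following picture Searle grants can be caricatured in a few lines of code. Below is a deliberately tiny sketch; the rulebook entries are my own invented miniatures, not anything from Searle’s paper. The operator maps incoming symbol strings to outgoing ones by pure pattern-matching and never consults what any token means.

```python
# A toy "Chinese Room": the operator matches incoming squiggles
# against purely formal rules and copies out the prescribed reply.
# No step in the procedure consults what any token means.

RULEBOOK = {
    ("你", "好"): ("你", "好"),                # a greeting begets a greeting
    ("你", "是", "谁"): ("我", "是", "朋友"),   # "who are you?" -> "I am a friend"
}

def operator(tokens):
    """Follow the rulebook; fall back to a fixed shrug-symbol pair."""
    return RULEBOOK.get(tuple(tokens), ("不", "懂"))
```

Scaling the rulebook up to Turing-test fluency changes nothing about the lookup step: it remains meaning-blind, which is all the man in the room needs.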

2. How does Searle determine that a person has intentionality and understanding but that a computer program doesn’t? Here’s where he’s cheating. He uses two different methods. For people, he observes their behavior, and through projection, he ascribes intentionality to them the same way McCarthy ascribes it to thermostats (Searle, 1980, p. 420). Of course, they will only be granted this status if their responses fall within a certain range of expectations. All Searle needs to know about software is that it is software: “My objection would hold against any program at all, qua formal computer program” (Searle, 1980, p. 452).

It is only because Searle knows that the man in the room understands no Chinese and is merely following formal rules that any philosophical dilemma arises. Without this a priori knowledge, he would have used the same behavioral criteria that he uses to determine whether a human exhibits intentionality and understanding, and concluded that there must be a fluent Chinese speaker in the room producing the responses.

I call this the “Argument from Detailed Knowledge.” It goes like this: Once I understand the inner workings of something, it loses its charm, mystery and humanity. It becomes just another mechanistic artifact. If science describes the functioning of the human body in purely physical terms, as though we’re just soulless automatons, something is missing. Something has been lost. Surely we are more than some Newtonian “meat machine” programmed by Darwinian selection. Aren’t we? There’s got to be some non-material essence to validate my ennobled self-image. Doesn’t there?

3. Searle has characterized the man in the room as a mere processor of symbols, blindly following rules with no understanding going on at his level, yet giving the appearance of intelligence on another level. While I agree with Searle that this is a fair description of the current state of AI programs, I can’t help noticing that he might just as well have been describing a brain. Aren’t neurons, like the man in the room, mere processors of signals, blindly following their own biologically imposed rules with no understanding going on at that level, yet producing intelligence on another level?

It boils down to this: Brains don’t understand but minds do. Similarly, the man in the Chinese room doesn’t understand the symbols he is pushing around, but the room (analogous to the brain) exhibits understanding (a mind) through its flawless Chinese communication skills.

Compare these two quotes:

1. “Because the formal symbol manipulations by themselves don’t have any intentionality; they are quite meaningless; they aren’t even symbol manipulations, since the symbols don’t symbolize anything” (Searle, 1980, p. 422). 

2. “The brain uses stereotyped electrical signals to process all the information it receives and analyzes. The signals are virtually identical in all nerve cells. They are symbols that do not resemble in any way the external world they represent…any more than the letters d o g, taken together, resemble a spotted Dalmatian” (Kuffler, Nicholls, & Martin, 1984).

The prosecution rests. 

 

1.6. Discussion 

I agree that no AI program “understands” the symbols it pushes around. No program is self-conscious, and none can even pass the Turing test. But no current AI program has the ability that Searle has given to his mythical Chinese Room, either. Using the same criteria we use to ascribe cognitive states to people, the Chinese Room would be considered intelligent by 99,999 out of 100,000 people. (The one dissenter being, statistically, a philosopher.) Whenever you are dealing with a hierarchical system wherein there is a level that can be described “semantically,” you can always look below that level and find a “syntactic” level. In this case, the man in the room represents the syntactic level, and to complain that he has no understanding of what he is doing is analogous to Searle lamenting the fact that minds have intentionality but neurons are just dumb processors of signals.

Searle has suggested that more progress could be made in cognitive science if its models were more biologically realistic. This is a very intuitively appealing idea: After all, if you simulate the brain at a sufficient level of detail, the mind is sure to follow. In the next section, I will discuss the brain with a focus on the function of neurons and their value as a computational model.

 

2. Brain: The Machinery of the Mind

“[The brain] is an evolutionary development over hundreds of millions of years…this human brain that does not look too distinguished on the outside, weighing about 1.5 kilograms, is without any qualification the most highly organized and the most complexly organized matter in the universe” (Eccles, 1977).

Given this proclamation, let’s look at a few statistics about the brain. It is estimated that the number of neurons in the brain is on the order of 10¹⁰ to 10¹¹. Each neuron receives input from 1,000 to 100,000 neurons and sends signals to 1,000 to 100,000 other neurons. “…[T]here are perhaps hundreds of kinds of neurons, …many different kinds of neurotransmitters and…there are different systems in which different of these neurotransmitters are dominant” (Rumelhart & McClelland, 1986, p. 137).

The firing rates of neurons are measured on the order of milliseconds – “perhaps 10s of milliseconds. Thus, the basic hardware of the brain is some 10⁶ times slower than that of serial computers” (Rumelhart & McClelland, 1986, p. 130).

For example, when an object enters our visual field we can recognize it in approximately half a second. It takes about 25-50 milliseconds for information to be sent from the photoreceptors to the cortex “and several hundred milliseconds are required for the motor system to produce a response, which leaves about 200-300 milliseconds for visual processing” (Sejnowski, 1986, p.378).

At these speeds, about 100 serial time steps are all there is time for. This is what is known as the 100-step program constraint, which is a bottleneck for computer simulation “because the best AI programs are not nearly as general and require millions of computational time steps” (Feldman, 1985, p. 277). This constraint also limits the amount of information passed between processing elements to a few bits rather than the large symbolic structures that are ubiquitous in traditional AI programs. It is the massive parallelism of neural processing that reconciles the apparent paradox of the slowness of neuron firing rates and the speeds at which things get done.
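The arithmetic behind the 100-step figure can be laid out explicitly. In this sketch, the 3 ms serial step time is an assumed round number within the millisecond range quoted above, and the 150 ms motor figure takes the low end of “several hundred milliseconds”:

```python
# Back-of-the-envelope reconstruction of the 100-step constraint,
# using the figures quoted in the passage above.
recognition_ms = 500   # an object is recognized in about half a second
transit_ms     = 50    # photoreceptors -> cortex takes 25-50 ms
motor_ms       = 150   # low end of the motor system's response overhead

# ~300 ms is left over for the visual processing itself
window_ms = recognition_ms - transit_ms - motor_ms

step_ms = 3.0          # assumed time per serial neural step
serial_steps = int(window_ms / step_ms)   # -> about 100 serial steps
```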

 

2.1. Neurons: The DPUs (Distributed Processing Units) of the Brain

Although there are many different kinds of neurons, they can be grouped into two general categories:

1. “Interneuron” cells. These are for local processing and “have all of their inputs and outputs limited to a very small region (usually a few hundred microns)” (Stevens, 1985, p. 287).

2. Output cells. These carry signals across large regions of the brain, to muscles, and from sensory organs back to the brain.

Each neuron has a large (about 3 microns) central body called the soma. This is where the metabolic machinery and genetic information necessary for the cell’s survival reside. The soma synthesizes lipids and proteins, which are used in the maintenance of the synapses. Attached to the soma are the dendrites, whose job is to receive signals from other neurons across the synapse.

Dendrites vary greatly in length and come in a range of different shapes. These shapes have a dramatic effect on the behavior of the neuron, which partially accounts for the many different “personalities” among neurons.

While the dendrite can be viewed as a passive collector of inputs, it is the axon that is the “final output channel of the cell” (Stevens, 1985, p. 289). When the membrane potential at the soma rises past a certain threshold, a spike, an “action potential,” is sent along the axon to the synapse. Thus, an axon can be described as a “nonlinear threshold device” (Stevens, 1985, p. 289).

A synapse can have an excitatory or inhibitory effect on the cell to which it is connected: the neurotransmitters it releases either raise or lower the likelihood that the receiving cell will fire, thereby transmitting or suppressing the signal.

  Neurons may have both digital and analog processes that interact in different ways depending on the type of neuron: “Output cells have graded analog inputs with digital, all-or-none final outputs; interneurons have graded analog inputs but also have graded analog output” (Stevens, 1985, p. 289). 
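Stevens’ picture of the output cell (graded analog integration of excitatory and inhibitory inputs, followed by an all-or-none digital output once a threshold is crossed) is essentially a threshold unit. A minimal sketch, with illustrative weights standing in for synaptic strengths:

```python
def neuron(inputs, weights, threshold=1.0):
    """Graded analog integration, all-or-none digital output.
    Positive weights model excitatory synapses, negative weights
    inhibitory ones; the numbers are illustrative, not measured."""
    # analog step: weighted sum of synaptic inputs at the soma
    potential = sum(w * x for w, x in zip(weights, inputs))
    # digital step: spike along the axon iff the threshold is crossed
    return 1 if potential >= threshold else 0
```

An interneuron, with its graded analog output, would return the raw potential instead of the thresholded spike.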

 

3. Computer Models of Mind and Brain

“For a few behaviors there is general agreement for all species about the hippocampal EEG correlates” (O’Keefe & Nadel, 1978, p. 163). “Conditioning and discrimination experiments can be used to generate a series of behaviors with which the hippocampal EEG can be correlated” (O’Keefe & Nadel, 1978, p. 186). Interesting, but what do we know about EEG correlates of “higher” brain functions?

 

3.1. The Mind as Electromagnetic Wave

Recently, using EEGs, a few brain scientists have been trying to correlate wave patterns with words. Experiments were conducted in which subjects listened to identical-sounding words used as both nouns and verbs, e.g., “(The) rose” and “(He) rows.” 

“When the EEG responses were averaged over many subjects, the brain-wave pattern produced by the noun was consistently different from that produced by the verb” (Lerner, 1984, p. 36). For nouns: “the potential fields peaked positively near the front of the brain and then negatively near the back. But when words were interpreted as verbs, the field pattern was reversed” (Lerner, 1984, p. 37). Even more interesting was an experiment in which words were grouped into six categories by connotation, e.g., “good” words like beauty and “bad” words like crime. “Each word-type gave rise to a distinctive EEG pattern that could be recognized by a computer at a rate far better than chance” (Lerner, 1984, p. 36).
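The article does not say how the computer recognized the word-type patterns, but a nearest-centroid classifier is about the simplest scheme that would behave as described: average the EEG feature vectors for each word-type, then assign a new response to the closest average. A sketch with invented feature vectors:

```python
def centroid(vectors):
    """Average the per-subject EEG feature vectors for one word-type."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def classify(sample, centroids):
    """Return the label of the nearest class centroid
    (squared Euclidean distance)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: d2(sample, centroids[label]))
```

At scale, the centroids would be built from the averaged responses described above; the two-dimensional vectors here merely stand in for real EEG features.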

According to one hypothesis: “Just as a sound wave at a given frequency causes a tuning fork that is tuned to the same frequency to vibrate, the electromagnetic wave pattern associated with a familiar sensory stimulus causes millions of brain cells to generate a similar pattern that has somehow been stored in the chemical structure of these neurons. In the brain, resonance is nonlinear: Activation of a stored pattern occurs not only when the incoming wave pattern is identical but even when it is similar” (Lerner, 1984, p. 37). As more and better stimulus/response correlations are found, our knowledge of the brain will become much more predictive – a must for any enterprise wishing to be considered a science.
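The resonance hypothesis, in which a stored pattern is reactivated by any sufficiently similar input, is computationally a content-addressable memory. A Hopfield-style sketch (my illustration, not a model proposed in the article): one ±1 pattern is stored in Hebbian weights, and a corrupted probe settles back to it.

```python
def store(pattern):
    """Hebbian outer-product weights for one +/-1 pattern."""
    n = len(pattern)
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(weights, probe):
    """One synchronous update: each unit takes the sign of its
    summed weighted input."""
    sums = [sum(w * x for w, x in zip(row, probe)) for row in weights]
    return [1 if s >= 0 else -1 for s in sums]
```

The nonlinearity lives in the sign function: a probe identical to the stored pattern and a probe merely similar to it both settle to the same stored state.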

 

3.2. Self-organizing Systems

  One of the greatest benefits of neurally inspired models is their ability to learn by adjusting their patterns of connectivity. The learning is a consequence of the nature of the model and does not require that explicit rules or heuristics be given to it a priori. An example of this is the “Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position” (Fukushima, 1980, p. 193).

The structure of the system has been inspired by some properties of the visual nervous system in vertebrates. “During the process of self-organization, the network is repeatedly presented with a set of stimulus patterns to the input layer, but it does not receive any other information about the stimulus patterns” (Fukushima, 1980, p. 197). The network “acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without being affected by their position nor by small distortion of their shapes” (Fukushima, 1980, p. 193). Of course, this kind of thing has been done using traditional symbolic models, but the beauty of the Neocognitron is that programmers don’t have to concoct a bunch of “feature detection” mechanisms. The system’s pattern recognition ability emerges “for free” as a consequence of its organization.
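Competitive learning gives perhaps the smallest working illustration of this kind of self-organization; the sketch below is generic and far simpler than Fukushima’s actual architecture. Units are repeatedly shown stimulus patterns with no other information, and each comes to specialize on one pattern because only the best-matching unit updates its weights.

```python
def competitive_step(weights, x, lr=0.5):
    """One step of unsupervised competitive learning: the unit whose
    weight vector lies nearest the input wins and moves toward it.
    Nothing tells the network what features to look for."""
    d2 = [sum((w - xi) ** 2 for w, xi in zip(unit, x)) for unit in weights]
    winner = d2.index(min(d2))
    # only the winner's weights move toward the input
    weights[winner] = [w + lr * (xi - w) for w, xi in zip(weights[winner], x)]
    return winner
```

Presenting [1, 0] and [0, 1] alternately to two units initialized near the center drives one unit’s weights toward each pattern; the “feature detectors” emerge for free.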

 

3.3. Emergent Behavior

  “Often a network’s emergent properties are much more complex than the network components from which they arise. In a good network model, the whole is far greater than the sum of its parts” (Grossberg, 1987, p. 24).

  “I feel it is critical to focus on collective phenomena, particularly on the idea that some information or knowledge or ideas can exist at the level of collective activities, while being totally absent at the lowest level. In fact, one can even go so far as to say that no information exists at that lowest level” (Hofstadter, 1985, p.646).

This kind of language pervades the literature on neural modeling and ranges from claims that the whole is “far greater than” the sum of the parts to claims that it is merely “different from” it (Rumelhart & McClelland, 1986, p. 128). The idea is somewhat troublesome and invites the suspicion that some thinly veiled bit of magic, mysticism, or perpetual motion is being proposed. Is this a scientifically valid point of view, or is it just passing the buck?

There seems to be some support for the idea of “epiphenomena” in physics: If we put two molecules in a box, we’ll get a collision every now and then. We can add 10 more, or even 1,000 more, and continue to get collisions, but the behavior is still “inherited,” i.e., it can be explained by noting the properties of individual molecules. When we put a billion billion molecules in the box, we can get sound waves. Thus, we have “emergent” behavior, i.e., behavior that is not explained by looking solely at the properties of individual molecules. The phenomenon has arisen from the collective behavior of the molecules (Allman, 1986, p. 24). Similarly, in fluid dynamics, “turbulence is not predicted by the knowledge of the elements of the system; it is inherent in the interactions among these elements” (Rumelhart, Smolensky, McClelland, & Hinton, 1986, p. 56). I think, given our current understanding of the workings of the brain, the perspective of mind as a “synergistic” consequence of the brain must be viewed as a speculative hypothesis, or a useful metaphor.

 

3.4. A Proposal for a Hybrid Symbolic/Subsymbolic Model

  “The subsymbolic approach [is] dependent upon other research paradigms in the cognitive sciences, and…certainly in the short term, the subsymbolic paradigm cannot replace these other research paradigms” (Smolensky, 1987, p.10).

Smolensky suggests a model wherein conscious rule application would be formalized at the conceptual level while intuition (unconscious rule application) would be formalized at the subconceptual level. He postulates that, for humans, language is what makes it possible to perform in a domain where one lacks expertise but can function by following rules expressed in, say, English. This “trick” must be exploited by the subsymbolic paradigm if a hybrid system is to be created. Note: This is considered a necessary but not a sufficient condition for relating the two levels.

  The progression from the conscious to the intuitive through experience would proceed as follows: “The rule interpretation process requires that the retrieved linguistically coded rule be maintained in memory while it is being interpreted. By contrast, after connections have been developed to perform the task directly, there is no correspondingly stable [linguistically coded] pattern formed during the performance of the task. Thus the loss of conscious phenomenology with expertise can be understood naturally” (Smolensky, 1987, p. 17). The problem with implementing such a system is that the connectionist paradigm cannot yet encode linguistic expressions as patterns of activity in a form that would allow language fluency to the degree necessary for this model.

 

4. Conclusions

Given our lack of understanding of the brain’s complex processes (to say nothing of its “offspring,” the mind) in relation to the conceptual level of consciousness, an attempt to model this level seems to be ad hoc-ism run amok. For if we accept the idea that the conceptual level of consciousness is the result of sub-symbolic processes (patterns of neurons firing), then any attempt at conceptual modeling involves the merely analogical description of the output of a complex, non-linear, hierarchical and ill-understood system. Thus, the high-level symbolic approach has only a tip-of-the-iceberg focus.

Oh, I know, we didn’t achieve flight by building machines that flapped their wings. But wing-flapping is only one means to an end. By flapping their wings, birds achieve a “resolution of forces.” Once we were able to understand the interactions of thrust, drag, gravity and lift, we jolly well flew (Simons, 1985, p. 4). At this stage, AI is still fixated on wing-flapping models while the underlying principles still elude us. Maybe connectionism will provide the “shot in the arm” that is desperately needed in AI circles.

In light of all this, does it make sense to try to model minds at all? If it’s wrongheaded to try to model concepts, which can, at least partially, be elucidated by the subjects of “introspective” psychological experiments, why tackle brains? Only because of a strong intuition/belief/assumption that, if you can get the form right, the content will follow; the content is there because, and only because, of the form. (Not ruling out the possibility of “non-wing-flapping” forms, however, as long as the essentials are there.)

Given the properties of neurons, it would seem that they would be very interesting subjects to model some kind of computational device after. Until recently, this kind of model has been generally viewed as too complex, ill-defined or just plain irrelevant to the task of explaining cognitive phenomena. There are elements of both reductionist and holistic paradigms that inspire the neural modelers. Holism: Mind is an emergent epiphenomenon that arises from the brain’s non-linear complexity, caused by the massively parallel, collective behavior of networks of simple neurons. Reductionism: Mind is a function of brain. The human mind will be explained when the brain is understood and to the extent that it is understood. Understanding will be achieved by discovering more about the properties of neurons and their interconnections. “The gain in brain is mainly in the stain.”

It must be clear by now that I think connectionism is the way to go if we’re striving for “replication” of mental processes. If all we need is “simulation” or “imitation,” then I think progress along symbolic lines will be just fine. With the renewed interest in neural nets, reminiscent of the expert-systems hype that began a couple of years ago, we should know relatively soon whether neural modeling will open any doors to the mind. Until then, I believe the question of the ultimate validity of connectionism is still open.

 

5. Highlights from the History of the Brain 

c. 40,000 B.C. The human brain evolves to its present form.

c. 430 B.C. Hippocrates, the patron of physicians, calls the brain the organ of thought.

c. 335 B.C. Aristotle, watching headless chickens running around, decides the heart is the seat of consciousness.

1664 Thomas Willis, an English physician, writes the first accurate treatise on the brain and shows that nerves emanate from that organ.

1860 Pierre Paul Broca unveils the speech center before the Paris Anthropological Society.

1874 German neurologist Carl Wernicke identifies an area specialized for speech comprehension in the left hemisphere.

1901 Santiago Ramón y Cajal notices that neurons are separated by tiny gaps, or synaptic clefts.

1906 Sir Charles Sherrington describes how reflexes are “wired” in the brain.

1921 Otto Loewi identifies acetylcholine, the first known neurotransmitter.

1929 Hans Berger records brain waves from a person’s scalp.

1940s Some of Wilder Penfield’s patients have interesting “flashbacks” during brain surgery.

1949 Donald O. Hebb describes the “neural net.”

1950 Lashley gives up on the engram, concludes memories are not localized.

1952 Robert Heath implants deep brain electrodes in a human being. Alan Hodgkin and Andrew Huxley describe how neurons fire. James Olds and Peter Milner activate a rat’s “pleasure center.”

1957 Vernon Mountcastle shows that neurons are arranged in columns.

1961 The first “split brain” operation is performed by Roger Sperry and Joseph Bogen.

1963 Jose Delgado becomes the first neurophysiologist/matador, stopping an electrode-equipped bull dead in his tracks via radio remote control.

1973 First PET scan shows the metabolic activity inside a dog’s brain. The opiate receptor is discovered by Candace Pert and Solomon Snyder.

1982 First human “brain transplant” (actually, a graft of dopamine-rich tissue from the patient’s adrenal gland) is performed in Stockholm; it fails to alleviate the patient’s Parkinson’s disease.

1987 Incisive critique of John Searle’s “Minds, Brains, and Programs” is delivered by Chris Armstrong. ;-) (Hooper & Teresi, 1986, pp. xvii-xix)

 

6. Thoughts to Ponder

• “Minds are what brains do” (Marvin Minsky).

• “We understand human mental processes only slightly better than a fish understands swimming” (John McCarthy).

• “We have an inherited cultural resistance to treating the conscious mind as a biological phenomenon like any other” (John Searle).

• “Thoughts are not weightless and ethereal. When you have a thought, brain activity is actually going on” (John Searle).

• “Until words like ‘concept’ have become terms as scientifically legitimate as, say, ‘neuron’ or ‘cerebellum,’ we will not have come anywhere close to understanding the brain” (Douglas Hofstadter).

• “If the brain was so simple [that] we could understand it, we would be so simple that we couldn’t” (Lyall Watson).

• “Ten billion neurons, ten-to-the-fourteenth different connections…That’s more than enough to contain a ‘soul’” (an anonymous neurologist).

 

References

Allman, W. F. (1986, May). Mindworks. Science, 23-31.

Eccles, J. C. (1977). Understanding the Brain (2nd ed.). New York: McGraw-Hill.

Feldman, J. A. (1985, April). Connections: Massive parallelism in natural and artificial intelligence. BYTE, 277-284.

Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36, 193-202.

Grossberg, S. (1987). Competitive learning: From interactive activation to adaptive resonance. Cognitive Science, 11, 23-63.

Hofstadter, D. R. (1985). Variations on a theme as the crux of creativity. In Metamagical Themas: Questing for the Essence of Mind and Pattern (pp. 232-259). New York: Basic Books.

Hofstadter, D. R. (1985). Waking up from the boolean dream. In Metamagical Themas: Questing for the Essence of Mind and Pattern (pp. 631-665). New York: Basic Books.

Hooper, J., & Teresi, D. (1986). The 3-Pound Universe. New York: Macmillan.

Kuffler, S., Nicholls, J., & Martin, A. (1984). From Neuron to Brain: A Cellular Approach to the Function of the Nervous System (2nd ed.). Sunderland, MA: Sinauer Associates.

Lerner, E. J. (1984, August). Why can’t a computer be more like a brain? High Technology, 34-37.

O’Keefe, J., & Nadel, L. (1978). The Hippocampus as a Cognitive Map. Oxford: Clarendon Press.

Rumelhart, D. E., & McClelland, J. L. (1986). PDP models and general issues in cognitive science. In J. L. McClelland & D. E. Rumelhart (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations (pp. 110-146). Cambridge, MA: MIT Press/Bradford Books.

Rumelhart, D. E., Smolensky, P., McClelland, J. L., & Hinton, G. E. (1986). Schemata and sequential thought processes in PDP models. In J. L. McClelland & D. E. Rumelhart (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models (pp. 7-57). Cambridge, MA: MIT Press/Bradford Books.

Schank, R., & Riesbeck, C. (1981). Inside Computer Understanding: Five Programs Plus Miniatures. Hillsdale, NJ: Lawrence Erlbaum Associates.

Searle, J. R. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences, 3, 417-457.

Searle, J. R. (1984). Minds, Brains and Science. Cambridge, MA: Harvard University Press.

Sejnowski, T. J. (1986). Open questions about computation in cerebral cortex. In J. L. McClelland & D. E. Rumelhart (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models (pp. 372-389). Cambridge, MA: MIT Press/Bradford Books.

Simons, M. (1985). Model Aircraft Aerodynamics. Sacramento, CA: Argus Books.

Smolensky, P. (1987). On the proper treatment of connectionism. Draft copy; to be published in Behavioral and Brain Sciences.

Stevens, J. K. (1985, April). Reverse engineering the brain: Chipping away at the brain. BYTE, 287-299.

Risk Aversion in Transhumanism


A new essay is up on my website for the book I’m writing about Zoltan Istvan‘s book, The Transhumanist Wager.

Here’s my latest essay about the novel, The Transhumanist Wager by Zoltan Istvan.

 

http://transhumanistwagerguide.com/essays/2013/11/12/the-origins-and-implications-of-the-three-laws-of-transhumanism

For those who have noted some intersections between some religious/spiritual ideas/ideals and the Transhumanist world-view-thingy, here’s a VERY GROOVY article by the VERY GROOVY Giulio Prisco. Word!


http://turingchurch.com/2013/10/26/zoe-bachs-quantum-zen-as-a-third-way-scientific-religion/

But which one is ME? Letting go of our mono-being identity orientation: Poly-beings, here we come!

 This essay was published on October 14, 2013 at ImmortalLife.info 

http://immortallife.info/articles/entry/but-which-one-is-me-letting-go-of-our-mono-being-identity-orientation-poly

But which one is ME? 

Letting go of our mono-being identity orientation: 

Poly-beings, here we come!

 

by Chris T. Armstrong

 

Audience for this essay: You can save yourself from wasting some of your finite and far too short lifespan by totally ignoring this essay if you fall into any of the following three categories:

 

1. You don’t accept the possibility, even in principle, of a future wherein it will be possible via a “whole brain emulation” to upload the ENTIRETY of a human being’s consciousness to a non-biological substrate/processing-medium, whether it is because you think that this process will not be able to capture some special x-factor or soul-like essence or for ANY OTHER reason.

 

2. You totally accept, in principle, that we will one day be able to achieve a “whole brain emulation” and that by uploading that emulation into a non-biological substrate we can achieve a full “continuity of consciousness.” Further, you have no problem accepting that multiple copies of the same consciousness would be fully valid instantiations of the “original” consciousness and not “mere copies” that are somehow less than the original.

 

3. You choose to direct your energies toward near-term solutions to ACTUAL problems and regard this kind of conjecture as just so much “chin music” and premature, speculative, philosophical jibber-jabber.

 

Now, for those of you still with me, here we go… 

 

Thought Experiment:

 

You are living in a future time, around 2045, when mind-uploading has just recently become a routine, safe, and successful way to copy one’s consciousness to a “platform” other than the biological one in which it was born. You are accompanying a friend to a mind-uploading center so she can undergo this procedure. This is a very new thing and people are still trying to comprehend all the implications involved. You have some doubts and questions about it yourself and don’t fully understand the whole procedure, but you are there to support your friend in her decision and not to debate the issues.

 

Your friend goes in and you, along with other friends and family, watch as she lies motionless while her brain is scanned. After this is finished, there is a long period of processing that needs to be done. A MONSTROUS amount of scanned brain data about your friend’s “connectome,” all the levels of neurotransmitter concentrations at each synapse, and all relevant neuronal micro-structures, the combination of which comprises your friend’s unique “consciousness-pattern,” is instantiated in the new non-biological substrate. A technician comes in and tells everyone that they may want to go have lunch and do some shopping or something and come back in the afternoon…

 

When you come back, the first phase is complete. Your friend’s consciousness has been uploaded into its new platform and connected to visual, auditory and speech interfaces. All of you are there, crowded around a prosthetic head, waiting to greet the newly uploaded consciousness…your friend. The switch is thrown and the head, which will later be put on a “whole body prosthesis,” comes to “life,” looks around, gets a huge smile and says, “It WORKED!” and everyone’s tensions and anxieties melt away as they all begin laughing, crying, clapping and cheering.

 

After they finally quiet down, they ask a thousand questions about what it feels like and if it hurt…all the typical stuff…and then finally, your friend’s skeptical uncle moves forward and begins something akin to a deposition or cross-examination: firing fact-oriented questions in rapid succession as a sort of test to see if this disembodied head really seems to be your friend, his niece. After several successful answers to what seem to her to be silly and obvious questions, your friend says, “Come on uncle, it’s ME!”

 

After a while, a technician comes in and begins talking with your friend about some details of the last thing they talked about just before the brain scanning began. It was a story about a mythical and magical origami crane that bestows eternal life and good luck on all those who bring it into existence via the intricate origami folding ritual. All the friends and family were there when the technician told this story before the brain scan as well. Soon, the technician says, “Yep, that’s about it. See you after the next phase,” and he leaves. “Why does that guy keep talking about some freaky paper crane?” someone asks.

 

Your friend explains, “That’s kind of a tradition that has evolved, mostly for the benefit of friends and family, so they can see something immediate that reassures them that I’m the same person he told that story to earlier. They tell each person who will be uploading a short story created uniquely for them, so that the person would have no way of knowing it except through that conversation just before uploading.”

 

“It’s also just a quick check for the people running the uploading process, because if I didn’t recognize the person who told me the story or couldn’t recall the prior conversation at all, they would know immediately that something went seriously wrong with the process. But, for the most part, they depend on these interactions with friends and family, like we’re having right now, for feedback about the authenticity of the uploaded consciousness, relative to the ‘original’ one.”

 

“They told me that philosophers are still very busy debating all kinds of arcane issues surrounding identity, dualism, soul, and such, but as far as these technicians and doctors are concerned, if everyone who knows me says I’m exactly the same person as before I uploaded, that’s good enough for them. They don’t worry too much about the philosophers and their ‘busy-work.’ They figure, at least it keeps them off the streets.”

 

After a few more hours of visiting with your friend, a technician comes in and tells everyone that there is still a lot to do to get your friend’s new brain connected to the “whole body prosthesis” and all of its enhanced sensory apparatus, and that you can come back in about 24 hours to see the finished “product.” As you walk out, you can see, through a glass window, your friend’s biological body lying in a medically induced coma, and you get an odd, jarring feeling of cognitive dissonance, since only seconds before you were talking with your friend…or were you? This question haunts you through the night and into the next day.

 

Before you leave the uploading center, you are all asked to meet with several specialists who interview each of you to get your impressions of your interactions with your friend: whether anything seemed amiss, or whether the uploaded consciousness was indistinguishable from your biological friend.

 

The next afternoon you all reconvene at the uploading center. You are brought into a room and there stands your friend in her new prosthetic body. Everyone rushes over to her and hugs her and the tears and laughs begin to flow again. Everyone, that is, except for that same curmudgeonly uncle who remains standoffish for a while. Eventually, your friend notices his hesitance and yells to him, “Come over here and give your favorite niece a hug, you crusty old Luddite!” Somewhat embarrassed, he shuffles over and hugs this “thing” that he’s now supposed to accept as his niece. She hugs him tightly and whispers something in his ear that makes him laugh and cry at the same time. He’s beginning to see that this “thing” is seeming more and more niece-like and he is starting to feel that he might eventually accept “her” as such.

 

Your friend begins showing off her new body’s capabilities, a bit carefully at first, since she’s still getting used to the different feel of it and how to coordinate it all. She tells you that she can already feel greatly enhanced potential in this body that she hasn’t even fully explored yet, not to mention how incredibly sturdy it is: practically impervious to physical destruction. She talks about her plans to eventually try out other body configurations. The recommended first step is to upload into a humanoid biped body, since one’s brain already knows how to operate that kind of configuration. Eventually, people learn how to thrive in many kinds of body-styles, or may even, at times, choose no body at all, preferring to exist for a time in an idealized virtual world of their own design.

 

After being challenged to the obligatory arm-wrestling matches by several large males and even a couple of females, all of which she wins “hands-down,” a technician comes in to go over a few final points and mentions that, according to protocol and your friend’s wishes, her bio-body will be kept alive, in a coma, for 30 days, after which time it will be terminated. When you hear this, you are STUNNED. You hadn’t thought about what would happen to her old body. Maybe you assumed that it would be kept alive, or maybe even woken up, or who knows what you thought, but you blurt out, “What? They’re gonna KILL you? All that will be left is this COPY of you? What will happen to YOU?” The room goes silent. People look a bit shocked. The uncle has a faint look of yeah-what-about-THAT? on his face.

 

Your friend breaks the silence, “What do you mean, what will happen to ME? I’m RIGHT HERE! That’s just my old body in that room over there. I have 30 days to change my mind, but as of right now, I would NEVER choose to go back and ‘live’ in that failing old ‘flesh coffin.’” The technician adds, “She has chosen this option. Some people DO choose to live alongside their bio-bodies, or to copy their consciousness-pattern into multiple synth-bodies.”

 

You interrupt, “But which one would be HER?” “They all would be her/me,” the technician and your friend answer in near-perfect unison. Undeterred, you say, “But if her body dies, which one is the REAL her and not just a COPY? Where will her true self live?” The uncle chimes in, “What they’re trying to get us to swallow is that they’re ALL her true self. None of them are more ‘true’ than any other.” Your friend interjects, “By George, I think he’s getting it,” and gives her dear uncle a quick wink.

 

“OK, you two, I admit that I had some doubts as well, but after I was put into my new synth-body and before you all came back for the unveiling, I availed myself of an option that all uploaded people have open to them. I asked that my old body be awakened so that we could have a bit of a chat. And let me tell you, it’s a SERIOUSLY surreal experience to have a chat with YOURSELF…talk about finishing each other’s sentences…wow! Anyway, I had decided beforehand that I might want to do this, so it wasn’t a total shock to, well, EITHER of me.

 

“The reason I did this was to get confirmation from the world’s number one expert on me…ME…that my uploaded synth-self was truly me in every discernible way. After we both got a bit more used to our initial ‘identity disorientation,’ the bio-me grilled me for a couple of hours about my future plans, my deeply held principles, and my most private thoughts, fears, and secrets…everything she could think of to determine whether she was me and I was her. We were both a little surprised at how quickly we were able to accept each other as…each other, or us, or WE…I guess we need to expand the range of pronouns available for these kinds of situations.

 

“So, bottom line: We both agreed that for us, in spite of some remaining instinctual qualms to the contrary, there was no point in the bio-me remaining active in the world, now that a synth-me was here to take ‘our’ life to the next level. We both accepted that there was no important difference between us, and the bio-me accepted that ‘she’ would REALLY be living on as a synth-her/me. And I accepted that all that was ‘her’ was REALLY me as well.”

 

You and the uncle are silent for a while and then slowly begin to speak…the discussion goes on, but with a much less contentious tone…minds are blown…lives are changed…

 

Discussion

 

Humans have evolved on this planet as mono-beings: We have only one version of our consciousness, and it is located in our biological body in which “we” were born and live, until that body wears out or is destroyed. That’s how it has always been for all(?) living things on this planet and of course it’s hard for us to imagine it being any other way. But once we can truly have multiple selves, the ONE-TRUE-SELF paradigm becomes inoperative and obsolete, or at least, merely one “historical” paradigm among newer ones.

 

At present, we have a fundamental difficulty, an aversion really, to accepting the idea that there could be multiple, fully “valid” versions of ourselves existing SIMULTANEOUSLY: poly-beings. I’ve had many discussions with people about this, and almost no one can break free from the quite natural, instinctual feeling that, regarding their sense of self, as in the movie Highlander, “there can be only ONE.”

 

It’s totally understandable. It feels natural to me too, since I have only known mono-being-ness. It feels like something as natural as breathing, an existential axiom: that we have a singular identity. We’ve never had any experience other than that, at least not in our “normal” (“neuro-typical”) existence, apart from the realm of chemically induced psychedelic or otherwise “altered-state” experiences.

 

We say things like: “I am the REAL me and all uploaded versions are obviously only copies.” Some people hold on to the “only one” intuition so strongly that they assume that all of the copies would be “in sync” somehow and be experiencing exactly the same things in unison…because OBVIOUSLY there can be only one version of their consciousness existing and experiencing the world at one time. 

 

Some think that the “mere copies” will be somewhat zombie-esque: conscious to some degree, but lacking some special human quality that only THEY, the original, possess. And these are not only religious people who reject the idea of mind-uploading outright because they believe it cannot capture the “soul.” Often, these are people who THINK they’ve accepted, in principle, the idea that a brain can be copied and uploaded, and that the result could be a COMPLETE consciousness.

 

Even people who do not doubt that the copies will be accurate in a physical/functional way are often unable to think of the copies as TRUE versions of themselves. They still feel that SOMETHING must be missing, and/or that their original bio-version can be the only real “them”…for some reason. They are still holding on to their inherent mono-being bias when thinking about the idea of multiple selves.

 

A Pragmatic Proposal

 

No, I am not contending that I have discovered some groundbreaking property of consciousness or a new paradigm of identity. I am proposing that we adopt this point of view as a useful tool, until we have reason to believe that the reality is otherwise. 

 

The proposition is this:

 

A sufficiently detailed copy of a person’s brain — not a rough approximation, simulation, or software engineer’s conception — is, in EVERY sense, YOU. If you make 1,000 ACCURATE copies, every one of them is REALLY YOU, just as much as the original bio-version of you is you.

 

They will all feel and believe, just as strongly as the bio-original does, that THEY are the true you. They may even have doubts about the validity of the other 999 copies AND the bio-original. They will ALL have the same ego-centered sense of self as the bio-original has. They will ALL naturally balk at the idea that it’s no big deal to be “killed” because “they” will live on through the copies. They will resist this idea unless they can come to some philosophical acceptance of the validity and PARALLEL SAMENESS of ALL the copies: The same philosophical struggle you may be going through right now.

 

The uploaded selves are not “clones,” nor are they the same as identical twins. Neither of those kinds of entities has an exact copy of someone else’s life experiences, memories, personality, etc.; they are DISTINCT individuals. An uploaded consciousness is not “merely a copy” that is somewhat less-than the original; it is an EXACT duplicate of the original individual.

 

An uploaded consciousness will be EXACTLY as happy as the original was to be alive and to be interacting with his/her loved ones (gender will become an irrelevant/fluid/optional property of an uploaded being). Every bit of your consciousness that you consider to be YOU will be there: planning your future, making your same dumb jokes, suffering the same insecurities, and displaying the same strengths. This upload, which you may be tempted to think of as a mere clone or copy but not YOU, will express his/her/its happiness over the success of the upload process and the opportunity it will have to live on, in a non-biological and more durable form.

 

Embracing the Poly-Being Opportunities

 

In order to embrace the full range of implications of mind-uploading, we need to abandon our either/or intuition in favor of a both/and view. Asking which one is the REAL you is akin to asking a gay couple: “Which one of you is the man/woman?” You’re trying to shoehorn something completely new into an old model. What is needed is a recognition of the new model and a conceptual adjustment that is necessary in order to integrate this newness into one’s worldview. 

 

In order to determine if you are taking this idea 100% seriously…TOTALLY buying into it…ask yourself if you would be fine with letting your original bio-body die and believe that you, the REAL and ACTUAL you, in EVERY sense, would live on as the uploaded duplicate. The duplicate would feel that it was you and all the behavioral evidence it exhibits would lead people to accept that it truly IS you. It, YOU, will carry on with your life without missing a BEAT, because there is no BEAT to miss. It IS you. 

 

A Multiplicity of Possibilities

 

We can imagine all kinds of sci-fi variations on this theme. We could, perhaps, network all versions together, so that each could take in the experiences of the others and choose which memories, collected across all the versions, to integrate into its own. That would last until they each become different enough that they want to break away from the network and go off on their own. And if they are not linked, we have to accept that, yes, they BEGAN as many duplicates of one self, but if they travel off somewhere, have different experiences, learn different things, and form new relationships with other consciousnesses, they will diverge more and more from the original and become truly autonomous variations on the original theme.

 

Several consciousnesses may be able to merge into a collective entity for a while and then go back to being individuals again. Or imagine merging with your lover on such a fundamental level. You won’t have to imagine what they’re feeling or guess at their perspective. You will be THEM and they will be YOU, while in that merged-state. 

 

And, of course, one of the most valuable aspects of mind-uploading will be the ability to back-up your mind-file and keep it in one or more safe and “off site” locations.

 

These kinds of possibilities are what we MUST consider if we are to fully accept the idea of mind-uploading as conceived of by Transhumanists. 

 

Hypothesize and Test

 

Of course, this all rests upon the assumption that the copy is of sufficient resolution/detail to faithfully capture the true functionality of the brain. And that rests, in turn, on the reductionist/materialist hypothesis that the totality of our consciousness is an emergent property of the complexity of the brain, or, as AI pioneer Marvin Minsky put it: “Minds are what brains do.”

 

On the other hand, for dualists, who believe that consciousness exists outside of and independently of the physical brain (and, to some, beyond the scope or fathomability of science), no amount of cleverly engineered brain-like artifacts will be sufficient to allow consciousness to emerge therefrom or reside therein (unless, perhaps, the consciousness “wants” to reside there for some reason, or is PUT there by a god). In this view, as speculative physicist Nassim Haramein puts it: “Looking for consciousness in the brain is like looking inside a radio for the announcer.”

 

Yes, both sides are arguing from hypothesis rather than from solidly demonstrated, completely settled FACT, but the big difference between the two positions is this: at least the transhumanists are planning to RUN THE EXPERIMENT…to put their hypothesis to the test by working toward mind-uploading to see if we can achieve a “continuity of consciousness” — across bio and non-bio “platforms” — through science, rather than just believing and hoping that our consciousness AUTOMATICALLY lives on without the brain, in the form of an immortal soul, as most religions contend.

 

To sum up: 

 

We are currently mono-beings by default; no choice in the matter; dictated by evolution. 

 

Mind-uploading technologies will enable us to become poly-beings by choice: multiple instantiations of THE SAME consciousness functioning simultaneously, but independently.

 

At the time they are created, they are all equivalently YOU, but they immediately begin having their own experiences and developing their own new histories, unless a choice is made to keep them all IN SYNC and “the same” by periodically updating all of them (including non-operational backups) with the experiences of only ONE of them.

 

If asked, all selves will demonstrate the same will to live and the same stubborn, ego-driven insistence that they are YOU as much as YOU will. 

 

Regardless of how many paradox-laden feelings and intuitions we may have to the contrary, at the time of SUCCESSFUL uploading, the original and ALL copies possess equivalent YOU-NESS. There is no logical basis for considering the bio-self to have more PERSONHOOD-STATUS than the uploads. Only a preexisting bias against poly-being-ness would cause you to hold that view.

 

New conceptions of “individual” and property rights and many other legal and social “entanglements” will need to be resolved in order to deal with the novel implications of this “expanded identity-matrix.” 

 

Instead of the motto on the Great Seal of the United States: E Pluribus Unum – “Out of many, one.”

 

The motto of the mind-uploading era could be: E Unum Pluribus – “Out of one, many.”

 

My new website is up!

This is the website for the book (in progress), At Any Cost: A Guide to The Transhumanist Wager and the Ideas of Zoltan Istvan, by Chris T. Armstrong. I will be posting essays on various topics related to my book, Zoltan’s book, and his ideas that come up during the writing process. http://transhumanistwagerguide.com/

 
