Artificial Intelligence and Human Nature

What awaits is not oblivion but rather a future which, from our present vantage point, is best described by the words “postbiological” or even “supernatural.” It is a world in which the human race has been swept away by a tide of cultural change, usurped by its own artificial progeny.
–Hans Moravec, Mind Children

We are dreaming a strange, waking dream; an inevitably brief interlude sandwiched between the long age of low-tech humanity on the one hand, and the age of human beings transcended on the other … We will find our niche on Earth crowded out by a better and more competitive organism. Yet this is not the end of humanity, only its physical existence as a biological life form.
–Gregory Paul and Earl D. Cox, Beyond Humanity

Charles T. Rubin is a professor of political science at Duquesne University. An earlier version of this essay was presented at “The Ethical Dimensions of Biotechnology,” a conference organized by the Henry Salvatori Center for the Study of Individual Freedom in the Modern World, Claremont McKenna College.
The cutting edge of modern science and technology has moved, in its aim, beyond the relief of man’s estate to the elimination of human beings. Such fantasies of leaving behind the miseries of human life are of course not new; they have taken many different forms in both ancient and modern times. The chance of their success, in the hands of the new scientists, is anyone’s guess. The most familiar form of this vision in our times is genetic engineering: specifically, the prospect of designing better human beings by improving their biological systems.
But even more dramatic are the proposals of a small, serious, and accomplished group of toilers in the fields of artificial intelligence and robotics. Their goal, simply put, is a new age of post-biological life, a world of intelligence without bodies, immortal identity without the limitations of disease, death, and unfulfilled desire. Most remarkable is not their prediction that the end of humanity is coming but their wholehearted advocacy of that result. If we can understand why this fate is presented as both necessary and desirable, we might understand something of the confused state of thinking about human life at the dawn of this new century—and perhaps especially the ways in which modern science has shut itself off from serious reflection about the good life and good society.
The Road to Extinction
The story of how human beings will be replaced by intelligent machines goes something like this: As a long-term trend beginning with the Big Bang, the evolution of organized systems, of which animal life and human intelligence are relatively recent examples, increases in speed over time. Similarly, as a long-term trend beginning with the first mechanical calculators, the evolution of computing capacity increases in speed over time and decreases in cost. From biological evolution has sprung the human brain, an electro-chemical machine with a great but finite number of complex neuron connections, the product of which we call mind or consciousness. As an electro-chemical machine, the brain obeys the laws of physics; all of its functions can be understood and duplicated. And since computers already operate at far faster speeds than the brain, they soon will rival or surpass the brain in their capacity to store and process information. When that happens, the computer will, at the very least, be capable of responding to stimuli in ways that are indistinguishable from human responses. At that point, we would be justified in calling the machine intelligent; we would have the same evidence to call it conscious that we now have when giving such a label to any consciousness other than our own.
At the same time, the study of the human brain will allow us to duplicate its functions in machine circuitry. Advances in brain imaging will allow us to “map out” brain functions synapse by synapse, allowing individual minds to be duplicated in some combination of hardware and software. The result, once again, would be intelligent machines.
If this story is correct, then human extinction will result from some combination of transforming ourselves voluntarily into machines and losing out in the evolutionary competition with machines. Some humans may survive in zoo-like or reservation settings. We would be dealt with as parents by our machine children: old where they are new, imperfect where they are self-perfecting, contingent creatures where they are the product of intelligent design. The result will be a world that is remade and reconstructed at the atomic level through nanotechnology, a world whose organization will be shaped by an intelligence that surpasses all human comprehension.
Nearly all the elements of this story are problematic. They often involve near metaphysical speculation about the nature of the universe, or technical speculation about things that are currently not remotely possible, or philosophical speculation about matters, such as the nature of consciousness, that are topics of perennial dispute. One could raise specific questions about the future of Moore’s Law, or the mind-body problem, or the issue of evolution and organized complexity. Yet while it may be comforting to latch on to a particular scientific or technical reason to think that what is proposed is impossible, to do so is to bet that we understand the limits of human knowledge and ingenuity, which in fact we cannot know in advance. When it comes to the feasibility of what might be coming, the “extinctionists” and their critics are both speculating.
Nevertheless, the extinctionists do their best to claim that the “end of humanity … as a biological life form” is not only possible but necessary. It is either an evolutionary imperative or an unavoidable result of the technological assumption that if “we” don’t engage in this effort, “they” will. Such arguments are obviously thin, and the case that human beings ought to assist enthusiastically in their own extinction makes little sense on evolutionary terms, let alone moral ones. The English novelist Samuel Butler, who considered the possibility that machines were indeed the next stage of evolution in his nineteenth-century novel Erewhon (“Nowhere”), saw an obvious response: his Erewhonians destroy most of their machines to preserve their humanity.
“Just saying no” may not be easy, especially if the majority of human beings come to desire the salvation that the extinctionist prophets claim to offer. But so long as saying no (or setting limits) is not impossible, it makes sense to inquire into the goods that would supposedly be achieved by human extinction rather than simply the mechanisms that may or may not make it possible. Putting aside the most outlandish of these proposals—or at least suspending disbelief about the feasibility of the science—it matters greatly whether or not we reject, on principle, the promised goods of post-human life. By examining the moral case for leaving biological life behind—the case for merging with and then becoming our machines—we will perhaps understand why someone might find this prospect appealing, and therefore discover the real source of the supposed imperative behind bringing it to pass.
