The Future of Humanity

Nick Bostrom, Future of Humanity Institute, Faculty of Philosophy & James Martin 21st Century School, Oxford University, http://www.nickbostrom.com
Published in New Waves in Philosophy of Technology, eds. Jan-Kyrre Berg Olsen, Evan Selinger, & Soren Riis (New York: Palgrave McMillan, 2009)
Abstract
The future of humanity is often viewed as a topic for idle speculation. Yet our beliefs and assumptions on this subject matter shape decisions in both our personal lives and public policy – decisions that have very real and sometimes unfortunate consequences. It is therefore practically important to try to develop a realistic mode of futuristic thought about big picture questions for humanity. This paper sketches an overview of some recent attempts in this direction, and it offers a brief discussion of four families of scenarios for humanity’s future: extinction, recurrent collapse, plateau, and posthumanity.
What features of the human condition are fundamental and important? On this there can be reasonable disagreement. Nonetheless, some features qualify by almost any standard. For example, whether and when Earth-originating life will go extinct, whether it will colonize the galaxy, whether human biology will be fundamentally transformed to make us posthuman, whether machine intelligence will surpass biological intelligence, whether population size will explode, and whether quality of life will radically improve or deteriorate: these are all important fundamental questions about the future of humanity.
Climate change, national and international security, economic development, nuclear waste disposal, biodiversity, natural resource conservation, population policy, and scientific and technological research funding are examples of policy areas that involve long time-horizons. Arguments in these areas often rely on implicit assumptions about the future of humanity. By making these assumptions explicit, and subjecting them to critical analysis, it might be possible to address some of the big challenges for humanity in a more well-considered and thoughtful manner.
The universe started with the Big Bang an estimated 13.7 billion years ago, in a low-entropy state. The history of the universe has its own directionality: an ineluctable increase in entropy. During its process of entropy increase, the universe has progressed through a sequence of distinct stages. In the eventful first three seconds, a number of transitions occurred, including probably a period of inflation, reheating, and symmetry breaking. These were followed, later, by nucleosynthesis, expansion, cooling, and formation of galaxies, stars, and planets, including Earth (circa 4.5 billion years ago). The oldest undisputed fossils are about 3.5 billion years old, but there is some evidence that life already existed 3.7 billion years ago and possibly earlier. Evolution of more complex organisms was a slow process. It took some 1.8 billion years for eukaryotic life to evolve from prokaryotes, and another 1.4 billion years before the first multicellular organisms arose.
From the beginning of the Cambrian period (some 542 million years ago), “important developments” began happening at a faster pace, but still enormously slowly by human standards. Homo habilis – our first “human-like ancestors” – evolved some 2 million years ago; Homo sapiens 100,000 years ago. The agricultural revolution began in the Fertile Crescent of the Middle East 10,000 years ago, and the rest is history. The size of the human population, which was about 5 million when we were living as hunter-gatherers 10,000 years ago, had grown to about 200 million by the year 1; it reached one billion in 1835 AD; and today over 6.6 billion human beings are breathing on this planet. From the time of the industrial revolution, perceptive individuals living in developed countries have noticed significant technological change within their lifetimes.
All techno-hype aside, it is striking how recent many of the events are that define what we take to be the modern human condition. If we compress the time scale such that the Earth formed one year ago, then Homo sapiens evolved less than 12 minutes ago, agriculture began a little over one minute ago, the Industrial Revolution took place less than 2 seconds ago, the electronic computer was invented 0.4 seconds ago, and the Internet less than 0.1 seconds ago – in the blink of an eye.

Almost all the volume of the universe is ultra-high vacuum, and almost all of the tiny material specks in this vacuum are so hot or so cold, so dense or so dilute, as to be utterly inhospitable to organic life. Spatially as well as temporally, our situation is an anomaly. Given the technocentric perspective adopted here, and in light of our incomplete but substantial knowledge of human history and its place in the universe, how might we structure our expectations of things to come?
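The compressed time scale is simple proportional arithmetic and can be checked with a short sketch. The event ages below are rough round figures; the ages assumed for the Industrial Revolution, the electronic computer, and the Internet are my own illustrative assumptions, not values stated in the text.

```python
# Sketch: map events onto a scale where Earth's ~4.5-billion-year history
# is compressed into a single year. Event ages are rough, illustrative figures.
EARTH_AGE_YEARS = 4.5e9
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def compressed_seconds(years_ago):
    """Seconds ago on the one-year scale for an event `years_ago` years back."""
    return years_ago / EARTH_AGE_YEARS * SECONDS_PER_YEAR

events = {
    "Homo sapiens (~100,000 yr ago)": 1e5,
    "Agriculture (~10,000 yr ago)": 1e4,
    "Industrial Revolution (~250 yr ago, assumed)": 250,
    "Electronic computer (~60 yr ago, assumed)": 60,
    "Internet (~13 yr ago, assumed)": 13,
}
for name, years in events.items():
    print(f"{name}: {compressed_seconds(years):.2f} s ago")
```

On this scale Homo sapiens indeed lands just under twelve minutes ago (about 700 seconds), agriculture a little over a minute ago, and the Industrial Revolution under two seconds ago, matching the figures in the text.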
The remainder of this paper will outline four families of scenarios for humanity’s future:
• Extinction
• Recurrent collapse
• Plateau
• Posthumanity
 
Extinction
Unless the human species lasts literally forever, it will some time cease to exist. In that case, the long-term future of humanity is easy to describe: extinction. An estimated 99.9% of all species that ever existed on Earth are already extinct. There are two different ways in which the human species could become extinct: one, by evolving or developing or transforming into one or more new species or life forms, sufficiently different from what came before so as no longer to count as Homo sapiens; the other, by simply dying out, without any meaningful replacement or continuation. Of course, a transformed continuant of the human species might itself eventually terminate, and perhaps there will be a point where all life comes to an end; so scenarios involving the first type of extinction may eventually converge into the second kind of scenario of complete annihilation. We postpone discussion of transformation scenarios to a later section, and we shall not here discuss the possible existence of fundamental physical limitations to the survival of intelligent life in the universe. This section focuses on the direct form of extinction (annihilation) occurring within any very long, but not astronomically long, time horizon – we could say one hundred thousand years for specificity.
Human extinction risks have received less scholarly attention than they deserve. In recent years, there have been approximately three serious books and one major paper on this topic. John Leslie, a Canadian philosopher, puts the probability of humanity failing to survive the next five centuries at 30% in his book The End of the World. His estimate is partly based on the controversial “Doomsday argument” and on his own views about the limitations of this argument. Sir Martin Rees, Britain’s Astronomer Royal, is even more pessimistic, putting the odds that humanity will survive the 21st century at no better than 50% in Our Final Hour.
Richard Posner, an eminent American legal scholar, offers no numerical estimate but rates the risk of extinction as “significant” in Catastrophe. And I published a paper in 2002 in which I suggested that assigning a probability of less than 25% to existential disaster (no time limit) would be misguided. The concept of existential risk is distinct from that of extinction risk. As I introduced the term, an existential disaster is one that causes either the annihilation of Earth-originating intelligent life or the permanent and drastic curtailment of its potential for future desirable development.
 It is possible that a publication bias is responsible for the alarming picture presented by these opinions. Scholars who believe that the threats to human survival are severe might be more likely to write books on the topic, making the threat of extinction seem greater than it really is. Nevertheless, it is noteworthy that there seems to be a consensus among those researchers who have seriously looked into the matter that there is a serious risk that humanity’s journey will come to a premature end.
 The greatest extinction risks (and existential risks more generally) arise from human activity. Our species has survived volcanic eruptions, meteoric impacts, and other natural hazards for tens of thousands of years. It seems unlikely that any of these old risks should exterminate us in the near future. By contrast, human civilization is introducing many novel phenomena into the world, ranging from nuclear weapons to designer pathogens to high energy particle colliders. The most severe existential risks of this century derive from expected technological developments. Advances in biotechnology might make it possible to design new viruses that combine the easy contagion and mutability of the influenza virus with the lethality of HIV. Molecular nanotechnology might make it possible to create weapons systems with a destructive power dwarfing that of both thermonuclear bombs and biowarfare agents. Superintelligent machines might be built and their actions could determine the future of humanity – and whether there will be one. Considering that many of the existential risks that now seem to be among the most significant were conceptualized only in recent decades, it seems likely that further ones still remain to be discovered.
 The same technologies that will pose these risks will also help us to mitigate some risks. Biotechnology can help us develop better diagnostics, vaccines, and anti-viral drugs. Molecular nanotechnology could offer even stronger prophylactics. Superintelligent machines may be the last invention that human beings ever need to make, since a superintelligence, by definition, would be far more effective than a human brain in practically all intellectual endeavors, including strategic thinking, scientific analysis, and technological creativity. In addition to creating and mitigating risks, these powerful technological capabilities would also affect the human condition in many other ways.
 Extinction risks constitute an especially severe subset of what could go badly wrong for humanity. There are many possible global catastrophes that would cause immense worldwide damage, maybe even the collapse of modern civilization, yet fall short of terminating the human species. An all-out nuclear war between Russia and the United States might be an example of a global catastrophe that would be unlikely to result in extinction. A terrible pandemic with high virulence and 100% mortality rate among infected individuals might be another example: if some groups of humans could successfully quarantine themselves before being exposed, human extinction could be avoided even if, say, 95% or more of the world’s population succumbed. What distinguishes extinction and other existential catastrophes is that a comeback is impossible. A non-existential disaster causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback: a giant massacre for man, a small misstep for mankind.
 An existential catastrophe is therefore qualitatively distinct from a “mere” collapse of global civilization, although in terms of our moral and prudential attitudes perhaps we should simply view both as unimaginably bad outcomes. One way that civilization collapse could be a significant feature in the larger picture for humanity, however, is if it formed part of a repeating pattern. This takes us to the second family of scenarios: recurrent collapse.
Recurrent collapse
Environmental threats seem to have displaced nuclear holocaust as the chief specter in the public imagination. Current-day pessimists about the future often focus on the environmental problems facing the growing world population, worrying that our wasteful and polluting ways are unsustainable and potentially ruinous to human civilization. The credit for having handed the environmental movement its initial impetus is often given to Rachel Carson, whose book Silent Spring (1962) sounded the alarm on pesticides and synthetic chemicals that were being released into the environment with allegedly devastating effects on wildlife and human health. Environmentalist forebodings swelled over the following decade.
Paul Ehrlich’s book The Population Bomb, and the Club of Rome report Limits to Growth, which sold 30 million copies, predicted economic collapse and mass starvation in the nineties as the results of population growth and resource depletion. In recent years, the spotlight of environmental concern has shifted to global climate change. Carbon dioxide and other greenhouse gases are accumulating in the atmosphere, where they are expected to cause a warming of Earth’s climate and a concomitant rise in sea levels. The most recent report by the United Nations’ Intergovernmental Panel on Climate Change, which represents the most authoritative assessment of current scientific opinion, attempts to estimate the increase in global mean temperature that would be expected by the end of this century under the assumption that no efforts at mitigation are made. The final estimate is fraught with uncertainty because of uncertainty about what the default emissions of greenhouse gases will be over the century, uncertainty about the climate sensitivity parameter, and uncertainty about other factors. The IPCC therefore expresses its assessment in terms of six different climate scenarios based on different models and different assumptions. The “low” model predicts a mean global warming of +1.8°C (uncertainty range 1.1°C to 2.9°C); the “high” model predicts warming by +4.0°C (2.4°C to 6.4°C).
Estimated sea level rise predicted by these two most extreme scenarios among the six considered is 18 to 38 cm, and 26 to 59 cm, respectively. Even the Stern Review on the Economics of Climate Change, a report prepared for the British Government which has been criticized by some as over-pessimistic, estimates that under the assumption of business-as-usual with regard to emissions, global warming will reduce welfare by an amount equivalent to a permanent reduction in per capita consumption of between 5 and 20%.
In absolute terms, this would be a huge harm. Yet over the course of the twentieth century, world GDP grew by some 3,700%, and per capita world GDP rose by some 860%. It seems safe to say that (absent a radical overhaul of our best current scientific models of the Earth’s climate system) whatever negative economic effects global warming will have, they will be swamped by the other factors that will influence economic growth rates in this century.
Posthumanity
Inventor and futurist Ray Kurzweil has argued for the singularity hypothesis on somewhat different grounds. His most recent book, The Singularity is Near, is an update of his earlier writings. It covers a vast range of ancillary topics related to radical future technological prospects, but its central theme is an attempt to demonstrate “the law of accelerating returns”, which manifests itself as exponential technological progress. Kurzweil plots progress in a variety of areas, including computing, communications, and biotechnology, and in each case finds a pattern similar to Moore’s law for microchips: performance grows as an exponential with a short doubling time (typically a couple of years).
Extrapolating these trend lines, Kurzweil infers that a technological singularity is due around the year 2045. While machine intelligence features as a prominent factor in Kurzweil’s forecast, his singularity scenario differs from that of Vinge in being more gradual: not a virtually overnight total transformation resulting from runaway self-improving artificial intelligence, but a steadily accelerating pace of general technological advancement.
Several critiques could be leveled against Kurzweil’s reasoning. First, one might of course doubt that present exponential trends will continue for another four decades. Second, while it is possible to identify certain fast-growing areas, such as IT and biotech, there are many other technology areas where progress is much slower. One could argue that to get an index of the overall pace of technological development, we should look not at a hand-picked portfolio of hot technologies, but instead at economic growth, which implicitly incorporates all productivity-enhancing technological innovations, weighted by their economic significance. In fact, the world economy has also been growing at a roughly exponential rate since the Industrial Revolution, but the doubling time is much longer: approximately 20 years. Third, if technological progress is exponential, then the current rate of technological progress must be vastly greater than it was in the remote past. But it is far from clear that this is so. Vaclav Smil – the historian of technology who, as we saw, has argued that the past six generations have seen the most rapid and profound change in recorded history – maintains that the 1880s was the most innovative decade of human history.
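The gap between a two-year and a twenty-year doubling time is easy to understate. A minimal sketch of compound doubling, using the two doubling times mentioned in the discussion and Kurzweil's four-decade horizon, makes it concrete:

```python
# Sketch: growth multiples implied by different doubling times.
# The 2-year and 20-year doubling times and the 40-year horizon
# are the figures discussed in the text.

def growth_factor(doubling_time_years, horizon_years):
    """Total growth multiple after `horizon_years`, given a doubling time."""
    return 2 ** (horizon_years / doubling_time_years)

it_trend = growth_factor(2, 40)    # Moore's-law-style trend
economy = growth_factor(20, 40)    # world economy since the Industrial Revolution

print(f"IT-style trend over 40 years: x{it_trend:,.0f}")
print(f"World economy over 40 years: x{economy:.0f}")
```

A two-year doubling time compounds to roughly a millionfold increase over forty years, while a twenty-year doubling time yields only a fourfold increase; this is why the choice of index matters so much for Kurzweil's extrapolation.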
The longer term
The cumulative probability of extinction increases monotonically over time. One might argue, however, that the current century, or the next few centuries, will be a critical phase for humanity, such that if we make it through this period then the life expectancy of human civilization could become extremely high. Several possible lines of argument would support this view. For example, one might believe that superintelligence will be developed within a few centuries, and that, while the creation of superintelligence will pose grave risks, once that creation and its immediate aftermath have been survived, the new civilization would have vastly improved survival prospects since it would be guided by superintelligent foresight and planning. Furthermore, one might believe that self-sustaining space colonies may have been established within such a timeframe, and that once a human or posthuman civilization becomes dispersed over multiple planets and solar systems, the risk of extinction declines. One might also believe that many of the possible revolutionary technologies (not only superintelligence) that can be developed will be developed within the next several hundred years; and that if these technological revolutions are destined to cause existential disaster, they would already have done so by then.
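The structure of the "critical phase" argument can be illustrated with a toy survival calculation that contrasts a constant per-century hazard with a front-loaded one. The hazard numbers below are purely illustrative assumptions, not estimates from the text:

```python
# Sketch: long-run survival probability under two hazard profiles.
# All per-century extinction probabilities here are made-up illustrative values.

def survival_prob(hazards):
    """P(surviving every period), given per-period extinction probabilities."""
    p = 1.0
    for h in hazards:
        p *= 1.0 - h
    return p

constant_hazard = [0.10] * 50                 # 10% per century, for 50 centuries
critical_phase = [0.30, 0.20] + [0.001] * 48  # risk concentrated up front

print(f"Constant hazard: {survival_prob(constant_hazard):.3f}")
print(f"Critical phase:  {survival_prob(critical_phase):.3f}")
```

Under the constant-hazard profile, cumulative extinction probability grinds toward one; under the front-loaded profile, a civilization that survives the critical phase retains a high long-run survival probability, which is the shape of the argument sketched above.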
