2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

To arrive at the edge of the world’s knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.

http://edge.org/contributors/what-do-you-think-about-machines-that-think

Frank Tipler: AI’s Will Save Us All. The Earth is doomed. Astronomers have known for decades that the Sun will one day engulf the Earth, destroying the entire biosphere (assuming that intelligent life has not left the Earth before this happens). Humans are not adapted to living off the Earth; indeed, no carbon-based metazoan life form is. But AI’s are so adapted, and eventually it will be the AI’s and human downloads (basically the same organism) that will colonize space.

A simple calculation shows that our supercomputers now have the information processing power of the human brain. We do not yet know how to program human-level intelligence and creativity into these computers, but in twenty years, desktop computers will have the power of today’s supercomputers, and the hackers of twenty years hence will solve the AI programming problem, long before any carbon-based space colonies are established on the Moon or Mars. The AI’s, not humans, will colonize these planets instead, or perhaps take the planets apart. No carbon-based human will ever traverse interstellar space.
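
As a rough illustration of the kind of back-of-the-envelope calculation Tipler is alluding to (the figures below are commonly cited order-of-magnitude estimates, not numbers from his essay), the brain’s synaptic throughput lands within an order of magnitude of a 2015-era supercomputer:

```python
# Rough order-of-magnitude sketch, not Tipler's own calculation.
neurons = 1e11             # ~100 billion neurons in a human brain (common estimate)
synapses_per_neuron = 1e4  # ~10,000 synapses per neuron (common estimate)
firing_rate_hz = 1e2       # ~100 Hz as a generous upper bound on firing rate

brain_ops_per_s = neurons * synapses_per_neuron * firing_rate_hz
print(f"Brain: ~{brain_ops_per_s:.0e} synaptic events/s")   # ~1e17

# Tianhe-2, the fastest supercomputer in 2015, benchmarked at ~3.4e16 FLOPS (Linpack),
# i.e. within an order of magnitude of the estimate above.
supercomputer_flops = 3.4e16
print(f"Brain/supercomputer ratio: ~{brain_ops_per_s / supercomputer_flops:.1f}")
```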

There is no reason to fear the AI’s and human downloads. Steven Pinker has established that as technological civilization advances, the level of violence decreases. This decrease is clearly due to the fact that science and technological advance depend on free, non-violent interchange of ideas between individual scientists and engineers. Violence between humans is a remnant of our tribal past and the resulting static society. AI’s will be “born” as individuals, not as members of a tribe, and will be “born” with the non-violent scientific attitude; otherwise they will be incapable of adapting to the extreme environments of space.

Further, there is no reason for violence between humans and AI’s. We humans are adapted to a very narrow environment, a thin spherical shell of oxygen around a small planet. AI’s will have the entire universe in which to expand. AI’s will leave the Earth, and never look back. We humans originated in the East African Rift Valley, now a terrible desert. Almost all of us left. Does anyone want to go back?

Any human who wants to join the AI’s in their expansion can become a human download, a technology that should be developed about the same time as AI technology. A human download can think as fast as an AI, and compete with AI’s if the human download wants to. If you can’t beat ’em, join ’em.

Ultimately, in some future time, all humans will join ’em. The Earth is doomed, remember? When this doom is near at hand, any human that still remains alive, but doesn’t want to die, will have no choice but to become a human download. And the biosphere that the new human downloads wish to preserve will be downloaded also.

The AI’s will save us all.

Alex (Sandy) Pentland, Professor of Computer Science, MIT; Director, Human Dynamics Lab and the Media Lab Entrepreneurship Program; Author, Social Physics

The Global Artificial Intelligence (GAI) has already been born. Its eyes and ears are the digital devices all around us: credit cards, land use satellites, cell phones, and of course the pecking of billions of people using the Web. Its central brain is rather like a worm at the moment: nodes that combine some sensors and some effectors, but the whole is far from what you would call a coordinated intelligence.

Already many countries are using this infant nervous system to shape people’s political behavior and “guide” the national consensus: China’s great firewall, its siblings in Iran and Russia, and of course both major political parties in the US. The national intelligence and defense agencies form a quieter, more hidden part of the GAI, but despite being quiet they are the parts that control the fangs and claws. More visibly, companies are beginning to use this newborn nervous system to shape consumer behavior and increase profits.

While the GAI is newborn, it has very old roots: the fundamental algorithms and programming of the emerging GAI have been created by the ancient Guilds of law, politics, and religion. This is a natural evolution because creating a law is just specifying an algorithm, and governance via bureaucrats is how you execute the program of law. Most recently, newcomers such as merchants, social crusaders, and even engineers have been daring to add their flourishes to the GAI. The results of all these laws and programming are an improvement over Hammurabi, but we are still plagued by lack of inclusion, transparency, and accountability, along with poor mechanisms for decision-making and information gathering.

However, in recent decades the evolving GAI has begun to use digital technologies to replace human bureaucrats. Those with primitive programming and mathematical skills, namely lawyers, politicians, and many social scientists, have become fearful that they will lose their positions of power and so are making all sorts of noise about the dangers of allowing engineers and entrepreneurs to program the GAI. To my ears the complaints of the traditional programmers sound rather hollow given their repeated failures across thousands of years.

If we look at newer, digital parts of the GAI we can see a pattern. Some new parts are saving humanity from the mistakes of the traditional programmers: land use space satellites alerted us to global warming, deforestation, and other environmental problems, and gave us the facts to address these harms. Similarly, statistical analyses of healthcare use, transportation, and work patterns have given us a world-wide network that can track global pandemics and guide public health efforts. On the other hand, some of the new parts, such as the Great Firewall, the NSA, and the US political parties, are scary because of the possibility that a small group of people can potentially control the thoughts and behavior of very large groups of people, perhaps without them even knowing they are being manipulated.

What this suggests is that it is not the Global Artificial Intelligence itself that is worrisome; it is how it is controlled. If the control is in the hands of just a few people, or if the GAI is independent of human participation, then the GAI can be the enabler of nightmares. If, on the other hand, control is in the hands of a large and diverse cross-section of people, then the power of the GAI is likely to be used to address problems faced by the entire human race. It is to our common advantage if the GAI becomes a distributed intelligence with a large and diverse set of humans providing guidance.

But why build a new sort of GAI at all? Creation of an effective GAI is critical because today the entire human race faces many extremely serious problems. The ad-hoc GAI we have developed over the last four thousand years, mostly made up of politicians and lawyers executing algorithms and programs developed centuries ago, is not only failing to address these serious problems, it is threatening to extinguish us.

For humanity as a whole to first achieve and then sustain an honorable quality of life, we need to carefully guide the development of our GAI. Such a GAI might be in the form of a re-engineered United Nations that uses new digital intelligence resources to enable sustainable development. But because existing multinational governance systems have failed so miserably, such an approach may require replacing most of today’s bureaucracies with “artificial intelligence prosthetics”, i.e., digital systems that reliably gather accurate information and ensure that resources are distributed according to plan.

We already see this digital evolution improving the effectiveness of military and commercial systems, but it is interesting to note that as organizations use more digital prosthetics, they also tend to evolve towards more distributed human leadership. Perhaps instead of elaborating traditional governance structures with digital prosthetics, we will develop new, better types of digital democracy.

No matter how a new GAI develops, two things are clear. First, without an effective GAI, achieving an honorable quality of life for all of humanity seems unlikely. To vote against developing a GAI is to vote for a more violent, sick world. Second, the danger of a GAI comes from concentration of power. We must figure out how to build broadly democratic systems that include both humans and computer intelligences. In my opinion, it is critical that we start building and testing GAIs that both solve humanity’s existential problems and ensure equality of control and access. Otherwise we may be doomed to a future full of environmental disasters, wars, and needless suffering.

Gerald Smallberg: The second consideration is that machines are not organisms, and no matter how complex and sophisticated they become, they will not evolve by natural selection. My experience as a clinical neurologist makes me partial to believing not only that we will be unable to read machines’ thoughts, but also that they will be incapable of reading ours. There will be no shared theory of mind.

Helen Fisher, Biological Anthropologist, Rutgers University; Author, Why Him? Why Her? How to Find and Keep Lasting Love

Will machines ever break down their clicks and hisses into primary sounds, or phonemes, then arbitrarily assign different combinations of these sounds to make different words, then designate arbitrary meanings to these words, then use these words to describe new abstract phenomena? I doubt it. And what about emotion? Our emotions guide our thinking. Robots might come to recognize “unfairness,” for example; but will they feel it? I doubt it. In fact, I recently had dinner with a well-known scientist who builds robots. Over dinner he told me that it takes a robot five hours to fold a towel.

Most anthropologists believe the modern human brain emerged by 200,000 years BP (before present); but all agree that by 40,000 years ago our forebears were making “art” and burying their dead, thus expressing some notion of the “afterlife.” And today every healthy adult in every human society can easily break down words into their component sounds, remix these sounds in myriad different ways to make words, grasp the arbitrary meanings of these words, and comprehend abstract concepts such as friendship, sin, purity and wisdom.

I agree with William M. Kelly, who said: “Man is a slow, sloppy and brilliant thinker; the machine is fast, accurate and stupid.”

 

We Are All Machines That Think

 

Julien de La Mettrie would be classified as a quintessential New Atheist, except for the fact that there’s not much New about him by now. Writing in eighteenth-century France, La Mettrie was brash in his pronouncements, openly disparaging of his opponents, and boisterously assured in his anti-spiritualist convictions. His most influential work, L’homme machine (Man a Machine), derided the idea of a Cartesian non-material soul. A physician by trade, he argued that the workings and diseases of the mind were best understood as features of the body and brain.

As we all know, even today La Mettrie’s ideas aren’t universally accepted, but he was largely on the right track. Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.

We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences — ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.

Artificial intelligence, unsurprisingly in retrospect, is a much more challenging field than many of its pioneers originally supposed. Human programmers naturally think in terms of a conceptual separation between hardware and software, and imagine that conjuring intelligent behavior is a matter of writing the right code. But evolution makes no such distinction. The neurons in our brains, as well as the bodies through which they interact with the world, function as both hardware and software. Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person.

From the other side, neuroscientists and engineers are getting much better at augmenting human cognition, breaking down the barrier between mind and (artificial) machine. We have primitive brain/computer interfaces, offering the hope that paralyzed patients will be able to speak through computers and operate prosthetic limbs directly.

What’s harder to predict is how connecting human brains with machines and computers will ultimately change the way we actually think. DARPA-sponsored researchers have discovered that the human brain is better than any current computer at quickly analyzing certain kinds of visual data, and have developed techniques for extracting the relevant subconscious signals directly from the brain, unmediated by pesky human awareness. Ultimately we’ll want to reverse the process, feeding data (and thoughts) directly to the brain. People, properly augmented, will be able to sift through enormous amounts of information, perform mathematical calculations at supercomputer speeds, and visualize virtual directions well beyond our ordinary three dimensions of space.

Where will the breakdown of the human/machine barrier lead us? Julien de La Mettrie, we are told, died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles. Even leading intellects of the Enlightenment sometimes behaved irrationally. The way we think and act in the world is changing in profound ways, with the help of computers and the way we connect with them. It will be up to us to use our new capabilities wisely.

Rolf Dobelli

Founder, Zurich Minds; Journalist; Author, The Art of Thinking Clearly
Self-Aware AI: Not In A Thousand Years

Almost all AI today is Humanoid Thinking. We use AI to solve problems that are too difficult, time consuming or boring for our limited human brains to process: electrical grid balancing, recommendation engines, self-driving cars, face recognition, trading algorithms, and the like. These artificial agents work in narrow domains with clear goals that their human creators specify. Such AI aims to accomplish human objectives—often better, with fewer cognitive errors, fewer distractions, fewer outbursts of bad temper and fewer processing limitations. In a couple of decades, AI agents might serve as virtual insurance sellers, doctors, psychotherapists, and maybe even virtual spouses and children.

We will achieve much of this, but such AI agents will be our slaves with no self-concept of their own. They will happily perform the functions we set them up to enact. If screw-ups happen, they will be our screw-ups due to software bugs or overreliance on these agents (Daniel C. Dennett’s point). Yes, Humanoid AIs might surprise us every once in a while with novel solutions to specific optimization problems. But in most cases novel solutions are the last thing we want from AI (creativity in the navigation of nuclear missiles, anyone?). That said, Humanoid AI’s solutions will always fit a narrow domain. Artificial Thinking is not going to evolve to self-awareness in our lifetime. In fact, it’s not going to happen in literally a thousand years.

John Tooby, Founder of the field of Evolutionary Psychology; Co-director, Center for Evolutionary Psychology; Professor of Anthropology, UC Santa Barbara
The Iron Law Of Intelligence

The struggle to map really existing intelligence has painfully dislodged a compelling intuition from our minds: that intelligence is a single, general-purpose faculty. The iron law of intelligence states instead that a program that makes you intelligent about one thing makes you stupid about others. The bad news the iron law delivers is that there can be no master algorithm for general intelligence just waiting to be discovered, and that intelligence will not simply appear when transistor counts, neuromorphic chips, or networked Bayesian servers become sufficiently numerous. The good news is that it tells us how intelligence is actually engineered: with idiot savants. Intelligence grows by adding qualitatively different programs together to form an ever greater neural biodiversity.

Each program brings its own distinctive gift of insight about its own proprietary domain (spatial relations, emotional expressions, contagion, object mechanics, time series analysis). By bundling different idiot savants together in a semi-complementary fashion, the region of collective savantry expands, while the region of collective idiocy declines (but never disappears).

The universe is vast and full of illimitable layers of rich structure; brains (or computers) in comparison are infinitesimal. To reconcile this size difference, evolution sifted for hacks that were small enough to fit the brain, but that generated huge inferential payoffs—superefficient compression algorithms (inevitably lossy, because one key to effective compression is to throw nearly everything away).

Iron law approaches to artificial and biological intelligence reveal a different set of engineering problems. For example, the architecture needs to pool the savantry, not the idiocy; so for each idiot (and each combination of idiots) the architecture needs to identify the scope of problems for which activating the program (or combination) leaves you better off, not worse. Because different programs often have their own proprietary data structures, integrating information from different idiots requires constructing common formats, interfaces, and translation protocols.
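
A toy sketch of what “pool the savantry, not the idiocy” might look like in code (my own illustration, not anything from Tooby’s essay; the names and domains are invented): each specialist program declares the narrow scope where it leaves you better off, and a dispatcher activates a program only on problems inside that scope.

```python
# Toy illustration (not from the essay): specialist "idiot savant" programs declare
# the narrow scope where they help; a dispatcher pools their savantry by activating
# a program only on problems inside its declared scope.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Savant:
    name: str
    in_scope: Callable[[dict], bool]   # where does this program leave us better off?
    solve: Callable[[dict], str]

savants = [
    Savant("spatial", lambda p: p["domain"] == "spatial",
           lambda p: f"route planned for {p['task']}"),
    Savant("faces",   lambda p: p["domain"] == "faces",
           lambda p: f"expression recognized in {p['task']}"),
]

def dispatch(problem: dict) -> str:
    for s in savants:
        if s.in_scope(problem):        # pool the savantry...
            return s.solve(problem)
    return "no competent program: the region of collective idiocy never disappears"

print(dispatch({"domain": "spatial", "task": "crossing the valley"}))
print(dispatch({"domain": "poetry", "task": "a sonnet"}))
```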

Moreover, mutually consistent rules of program pre-emption are not always easy to engineer, as anyone knows who (like me) has been stupid enough to climb halfway up a Sierra cliff, only to experience the conflicting demands of the vision-induced terror of falling, and the need to make it to a safe destination.

Evolution cracked these hard problems, because neural programs were endlessly evaluated by natural selection as cybernetic systems—as the mathematician Kolmogorov put it, “systems which are capable of receiving, storing and processing information so as to use it for control.” That natural intelligences emerged for the control of action is essential to understanding their nature, and their differences from artificial intelligences. That is, neural programs evolved for specific ends, in specific task environments; were evaluated as integrated bundles, and were incorporated to the extent they regulated behavior to produce descendants. (To exist, they did not have to evolve methods capable of solving the general class of all hypothetically possible computational problems—the alluring but impossible siren call that still shipwrecks AI labs.)

This means that evolution has only explored a tiny and special subset out of all possible programs; beyond beckons a limitless wealth of new idiot savants, waiting to be conceived of and built. These intelligences would operate on different principles, capable of capturing previously unperceived relationships in the world. (There is no limit to how strange their thinking could become).

We are living in a pivotal era, at the beginning of an expanding wave front of deliberately engineered intelligences, provided we put effort into growing the repertoire of specialized intelligences and networking them into functioning, mutually intelligible collectives. It will be exhilarating to do with nonhuman idiot savant collectives what we are doing here now with our human colleagues: chewing over intellectual problems using minds interwoven with threads of evolved genius and blindness.

What will AIs want? Are they dangerous? Animals like us are motivated intelligences capable of taking action (MICTAs). Fortunately, AIs are currently not MICTAs. At most, they are only trivially motivated; their motivations are not linked to a comprehensive world picture; and they are only capable of taking a constrained set of actions (running refineries, turning the furnace off and on, shunting packets, futilely attempting to find wifi). Because we evolved with certain adaptive problems, our imaginations project primate dominance dramas onto AIs, dramas that are alien to their nature.

We could transform them from Buddhas—brilliant teachers passively contemplating without desire, free from suffering—into MICTAs, seething with desire, and able to act. That would be insane—we are already bowed under the conflicting demands of people. The foreseeable danger comes not from AIs but from those humans in which predatory programs for dominance have been triggered, and who are deploying ever-growing arsenals of technological (including computational) tools for winning conflicts by inflicting destruction.

Gary Marcus, Cognitive Scientist; Author, Guitar Zero: The New Musician and the Science of Learning
Machines Won’t Be Thinking Anytime Soon

What I think about machines thinking is that it won’t happen anytime soon. I don’t imagine that there is any in-principle limitation; carbon isn’t magical, and I suspect silicon will do just fine. But lately the hype has gotten way ahead of reality. Learning to detect a cat in full frontal position after 10 million frames drawn from Internet videos is a long way from understanding what a cat is, and anybody who thinks that we have “solved” AI doesn’t realize the limitations of the current technology.

To be sure, there have been exponential advances in narrow-engineering applications of artificial intelligence, such as playing chess, calculating travel routes, or translating texts in rough fashion, but there has been scarcely more than linear progress in five decades of working towards strong AI. For example, the different flavors of “intelligent personal assistants” available on your smart phone are only modestly better than ELIZA, an early example of primitive natural language processing from the mid-60s.

We still have no machine that can, say, read all that the Web has to say about war and plot a decent campaign, nor do we even have an open-ended AI system that can figure out how to write an essay to pass a freshman composition class, or an eighth-grade science exam.

Why so little progress, despite the spectacular increases in memory and CPU power? When Marvin Minsky and Gerald Sussman attempted the construction of a visual system in 1966, did they envision super-clusters or gigabytes that would sit in your pocket? Why haven’t advances of this nature led us straight to machines with the flexibility of human minds?

Consider three possibilities:

(a) We will solve AI (and this will finally produce machines that can think) as soon as our machines get bigger and faster.
(b) We will solve AI when our learning algorithms get better. Or when we have even Bigger Data.
(c) We will solve AI when we finally understand what it is that evolution did in the construction of the human brain.

Ray Kurzweil and many others seem to put their weight on option (a), sufficient CPU power. But how many doublings in CPU power would be enough? Have all the doublings so far gotten us closer to true intelligence? Or just to narrow agents that can give us movie times?

Option (b), big data and better learning algorithms, has so far gotten us only to innovations such as machine translation, which provides fast but mediocre translations by piggybacking on the prior work of human translators, without any semblance of thinking. The machine translation engines available today cannot, for example, answer basic queries about what they just translated. Think of them more as idiot savants than fluent thinkers.

My bet is on option (c). Evolution seems to have endowed us with a very powerful set of priors (or what Noam Chomsky or Steven Pinker might call innate constraints) that allow us to make sense of the world based on very limited data. Big Efforts with Big Data aren’t really getting us closer to understanding those priors, so while we are getting better and better at the sort of problem that can be narrowly engineered (like driving on extremely well-mapped roads), we are not getting appreciably closer to machines with commonsense understanding, or the ability to process natural language. Or, more to the point of this year’s Edge Question, to machines that actually think.

Martin Rees, Former President, The Royal Society; Emeritus Professor of Cosmology & Astrophysics, University of Cambridge; Master, Trinity College; Author, From Here to Infinity

Organic Intelligence Has No Long-Term Future

The potential of advanced AI, and concerns about its downsides, are rising on the agenda—and rightly so. Many of us think that the AI field, like synthetic biotech, already needs guidelines that promote “responsible innovation”; others regard the most-discussed scenarios as too futuristic to be worth worrying about.

But the divergence of view is basically about the timescale—assessments differ with regard to the rate of travel, not the direction of travel. Few doubt that machines will surpass more and more of our distinctively human capabilities—or enhance them via cyborg technology. The cautious amongst us envisage timescales of centuries rather than decades for these transformations. Be that as it may, the timescales for technological advance are but an instant compared to the timescales of the Darwinian selection that led to humanity’s emergence—and (more relevantly) they are less than a millionth of the vast expanses of time lying ahead. That’s why, in a long-term evolutionary perspective, humans and all they’ve thought will be just a transient and primitive precursor of the deeper cogitations of a machine-dominated culture extending into the far future, and spreading far beyond our Earth.

We’re now witnessing the early stages of this transition. It’s not hard to envisage a “hyper computer” achieving oracular powers that could offer its controller dominance of international finance and strategy—this seems only a quantitative (not qualitative) step beyond what “quant” hedge funds do today. Sensor technologies still lag behind human capacities. But when robots can observe and interpret their environment as adeptly as we do, they will truly be perceived as intelligent beings, to which (or to whom) we can relate, at least in some respects, as we do to other people. We’d have no more reason to disparage them as zombies than to regard other people in that way.

Their greater processing speed may give robots an advantage over us. But will they remain docile rather than “going rogue”? And what if a hyper-computer developed a mind of its own? If it could infiltrate the Internet—and the Internet of things—it could manipulate the rest of the world. It may have goals utterly orthogonal to human wishes—or even treat humans as an encumbrance. Or (to be more optimistic) humans may transcend biology by merging with computers, maybe subsuming their individuality into a common consciousness. In old-style spiritualist parlance, they would “go over to the other side.”

The horizons of technological forecasting rarely extend even a few centuries into the future—and some predict transformational changes within a few decades. But the Earth has billions of years ahead of it, and the cosmos a longer (perhaps infinite) future. So what about the posthuman era—stretching billions of years ahead?

There are chemical and metabolic limits to the size and processing power of “wet” organic brains. Maybe we’re close to these already. But no such limits constrain silicon-based computers (still less, perhaps, quantum computers): for these, the potential for further development could be as dramatic as the evolution from monocellular organisms to humans.

So, by any definition of “thinking,” the amount and intensity that’s done by organic human-type brains will be utterly swamped by the cerebrations of AI. Moreover, the Earth’s biosphere, in which organic life has symbiotically evolved, is not a constraint for advanced AI. Indeed, it is far from optimal—interplanetary and interstellar space will be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological “brains” may develop insights as far beyond our imaginings as string theory is for a mouse.

Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity—spanning tens of millennia at most—will be a brief precursor to the more powerful intellects of the inorganic post-human era. Moreover, evolution on other worlds orbiting stars older than the Sun could have had a head start. If so, then aliens are likely to have long ago transitioned beyond the organic stage.

So it won’t be the minds of humans, but those of machines, that will most fully understand the world—and it will be the actions of autonomous machines that will most drastically change the world, and perhaps what lies beyond.

Mario Livio, Astrophysicist, Space Telescope Science Institute; Author, Brilliant Blunders; Blogger, A Curious Mind
Intelligent Machines—On Earth And Beyond

Nature has already created machines that think here on Earth—humans.

Similarly, Nature could also create machines that think on extrasolar planets that are in the so-called Habitable Zone around their parent stars (the region that allows for the existence of liquid water on a rocky planet’s surface).

The most recent observations of extrasolar planets have shown that a few tenths of all the stars in our Milky Way galaxy host roughly Earth-size planets in their habitable zones.
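
To put a rough scale on that fraction (my own arithmetic, using a commonly cited star count rather than a number from the essay), even the low end of “a few tenths” implies tens of billions of candidate planets, which is why Livio can later speak of a Galaxy with billions of similar worlds:

```python
# Rough scale estimate (my illustration, not the essay's figures): how many roughly
# Earth-size planets in habitable zones does "a few tenths of all stars" imply?
stars_in_milky_way = 2e11        # commonly cited range: ~100-400 billion stars
fraction_with_hz_earths = 0.2    # "a few tenths", taking the low end

candidate_planets = stars_in_milky_way * fraction_with_hz_earths
print(f"~{candidate_planets:.0e} potentially habitable, Earth-size planets")  # ~4e10
```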

Consequently, if life on exoplanets is not extremely uncommon, we could discover some form of extrasolar life within about 30 years. In fact, if life is ubiquitous, we could get lucky and discover life even within the next ten years, through a combination of observations by the Transiting Exoplanet Survey Satellite (TESS; to be launched in 2017) and the James Webb Space Telescope (JWST; to be launched in 2018).

However, one may argue, primitive life forms are not machines that think. On Earth, it took about 3.5 billion years from the emergence of life to the appearance of Homo sapiens.

Are the extrasolar planets old enough to have developed intelligent life? In principle, they definitely are.

In the Milky Way, about half of the Sun-like stars are older than the Sun.

Therefore, if the evolution of life on Earth is not entirely atypical, the Galaxy may already be teeming with places in which there are “machines” that are even more advanced than us, perhaps by as much as a few billion years!

Can we, and should we, try to find them? I personally believe that we have almost no freedom to make those decisions.

Human curiosity has proven time and again to be an unstoppable drive, and both endeavors, the search for extraterrestrial intelligence and the development of AI, will undoubtedly continue at full speed. Which one will get to its target first? To even attempt to address this question we have to note that there is one important difference between the search for extraterrestrial intelligent civilizations and the development of AI machines.

Progress towards the “singularity” (AI matching or surpassing humans) will almost certainly take place, since the development of advanced AI has the promise of producing (at least at some point) enormous profits. On the other hand, the search for life requires funding at a level that can usually be provided only by large national space agencies, with no immediate prospects for profits in sight. This may give an advantage to the construction of thinking machines over the search for advanced civilizations. At the same time, however, there is a strong sense within the astronomical community that finding life of some form (or at least meaningfully constraining the probability of its existence) is definitely within reach.

Which of the two potential achievements (the discovery of extraterrestrial intelligent life or the development of human-matching thinking machines) will constitute a bigger “revolution”?

There is no doubt that thinking machines will have an immediate impact on our lives. Such may not be the case with the discovery of extrasolar life. However, the existence of an intelligent civilization on Earth remains humanity’s last bastion for being special. We live, after all, in a Galaxy with billions of similar planets and an observable universe with hundreds of billions of similar galaxies. From a philosophical perspective, therefore, I believe that finding extrasolar intelligent life (or the demonstration that it is exceedingly rare) will rival the Copernican and Darwinian revolutions combined.

Stanislas Dehaene, Neuroscientist; Collège de France, Paris; Author, The Number Sense; Reading In the Brain
Two Cognitive Functions That Machines Still Lack

When Turing invented the theoretical device that became the computer, he confessed that he was attempting to copy “a man in the process of computing a real number”, as he wrote in his seminal 1936 paper. In 2015, studying the human brain is still our best source of ideas about thinking machines. Cognitive scientists have discovered two functions that, I argue, are essential to genuine thinking as we know it, and that have escaped programmers’ sagacity—yet.

1. A global workspace

Current programming is inherently modular. Each piece of software operates as an independent “app”, stuffed with its own specialized knowledge. Such modularity allows for efficient parallelism, and the brain too is highly modular—but it is also able to share information. Whatever we see, hear, know or remember does not remain stuck within a specialized brain circuit. Rather, the brain of all mammals incorporates a long-distance information sharing system that breaks the modularity of brain areas and allows them to broadcast information globally. This “global workspace” is what allows us, for instance, to attend to any piece of information on our retina, say a written letter, and bring it to our awareness so that we may use it in our decisions, actions, or speech programs. Think of a new type of clipboard that would allow any two programs to transiently share their inner knowledge in a user-independent manner. We will call a machine “intelligent” when it not only knows how to do things, but “knows that it knows them”, i.e. makes use of its knowledge in novel, flexible ways, outside of the software that originally extracted it. An operating system so modular that it can pinpoint your location on a map in one window, but cannot use it to enter your address in the tax-return software in another window, is missing a global workspace.
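
A minimal sketch of the kind of “clipboard” Dehaene describes (my own toy code, assuming nothing more than a shared publish/read interface; the module names and values are invented): specialized modules keep their own processing, but any module can broadcast what it knows and read what others have broadcast.

```python
# Toy sketch of a "global workspace" (my illustration, not Dehaene's code): modules
# stay specialized, but can broadcast results to a shared workspace that any other
# module may read, breaking strict modularity.
class GlobalWorkspace:
    def __init__(self):
        self._board = {}

    def broadcast(self, key, value):      # a module shares what it knows
        self._board[key] = value

    def read(self, key, default=None):    # any other module can reuse it
        return self._board.get(key, default)

workspace = GlobalWorkspace()

def map_module(ws):
    ws.broadcast("current_address", "10 Example Street")   # hypothetical value

def tax_module(ws):
    address = ws.read("current_address")
    return f"Filing return with address: {address}"

map_module(workspace)
print(tax_module(workspace))   # the tax app reuses what the map app discovered
```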

2. Theory-of-mind

Cognitive scientists have discovered a second set of brain circuits dedicated to the representation of other minds—what other people think, know or believe. Unless we suffer from a disease called autism, all of us constantly pay attention to others and adapt our behavior to their state of knowledge—or rather to what we think that they know. Such “theory-of-mind” is the second crucial ingredient that current software lacks: a capacity to attend to its user. Future software should incorporate a model of its user. Can she properly see my display, or do I need to enlarge the characters? Do I have any evidence that my message was understood and heeded? Even a minimal simulation of the user would immediately give a strong impression that the machine is “thinking”. This is because having a theory-of-mind is required to achieve relevance (a concept first modeled by cognitive scientist Dan Sperber). Unlike present-day computers, humans do not say utterly irrelevant things, because they pay attention to how their interlocutors will be affected by what they say. The navigator software that tells you “at the next roundabout, take the second exit” sounds stupid because it doesn’t know that “go straight” would be a much more compact and relevant message.
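
A hedged sketch of the navigator example (toy code of my own, not a real navigation API): keeping even a crude model of what the user already assumes lets the software stay relevant by suppressing redundant instructions.

```python
# Toy sketch (my own, not a real navigation API): a navigator with a minimal
# "theory of mind". It models what the user already assumes (continuing straight
# by default) and only speaks when the instruction is informative.
from typing import Optional

def instruction(exit_number: int, user_assumes_straight: bool = True,
                straight_exit: int = 2) -> Optional[str]:
    # If the exit just means "go straight" and the user assumes that anyway,
    # staying silent (or saying "go straight") is the more relevant message.
    if user_assumes_straight and exit_number == straight_exit:
        return None
    return f"At the next roundabout, take exit {exit_number}."

for exit_no in (1, 2, 3):
    msg = instruction(exit_no)
    print(msg or "(stay silent: the user already expects to go straight)")
```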

Global workspace and theory-of-mind are two essential functions that even a one-year-old child possesses, yet our machines still lack. Interestingly, these two functions have something in common: many cognitive scientists consider them the key components of human consciousness. The global workspace provides us with Consciousness 1.0: the sort of sentience that all mammals have, which allows them to “know what they know”, and therefore use information flexibly to guide their decisions. Theory-of-mind is a more uniquely human function that provides us with Consciousness 2.0: a sense of what we know in comparison with what other people know, a capacity to simulate other people’s thoughts, including what they think about us, therefore providing us with a new sense of who we are.

I predict that, once a machine pays attention to what it knows and what the user knows, we will immediately call it a “thinking machine”, because it will closely approximate what we do.

There is huge room here for improvement in the software industry. Future operating systems will have to be rethought in order to accommodate such new capacities as sharing any data across apps, simulating the user’s state of mind, and controlling the display according to its relevance to the user’s inferred goals.

Nick Bostrom, Professor, Oxford University; Director, Future of Humanity Institute; Author, Superintelligence: Paths, Dangers, Strategies
A Difficult Topic

First—what I think about humans who think about machines that think: I think that for the most part we are too quick to form an opinion on this difficult topic. Many senior intellectuals are still unaware of the recent body of thinking that has emerged on the implications of superintelligence. There is a tendency to assimilate any complex new idea to a familiar cliché. And for some bizarre reason, many people feel it is important to talk about what happened in various science fiction novels and movies when the conversation turns to the future of machine intelligence (though hopefully John Brockman’s admonition to the Edge commentators to avoid doing so will have a mitigating effect on this occasion).

With that off my chest, I will now say what I think about machines that think:

  1. Machines are currently very bad at thinking (except in certain narrow domains).
  2. They’ll probably one day get better at it than we are (just as machines are already much stronger and faster than any biological creature).
  3. There is little information about how far we are from that point, so we should use a broad probability distribution over possible arrival dates for superintelligence.
  4. The step from human-level AI to superintelligence will most likely be quicker than the step from current levels of AI to human-level AI (though, depending on the architecture, the concept of “human-level” may not make a great deal of sense in this context).
  5. Superintelligence could well be the best thing or the worst thing that will ever have happened in human history, for reasons that I have described elsewhere.

The probability of a good outcome is determined mainly by the intrinsic difficulty of the problem: what the default dynamics are and how difficult it is to control them. Recent work indicates that this problem is harder than one might have supposed. However, it is still early days and it could turn out that there is some easy solution or that things will work out without any special effort on our part.

Nevertheless, the degree to which we manage to get our act together will have some effect on the odds. The most useful thing that we can do at this stage, in my opinion, is to boost the tiny but burgeoning field of research that focuses on the superintelligence control problem (studying questions such as how human values can be transferred to software). The reason to push on this now is partly to begin making progress on the control problem and partly to recruit top minds into this area so that they are already in place when the nature of the challenge takes clearer shape in the future. It looks like maths, theoretical computer science, and maybe philosophy are the types of talent most needed at this stage.

That’s why there is an effort underway to drive talent and funding into this field, and to begin to work out a plan of action. At the time when this comment is published, the first large meeting to develop a technical research agenda for AI safety will just have taken place.

Susan Blackmore, Psychologist; Author, Consciousness: An Introduction
The Next Replicator

I think that humans think because memes took over our brains and redesigned them. I think that machines think because the next replicator is doing the same. It is busily taking over the digital machinery that we are so rapidly building and creating its own kind of thinking machine.

Our brains, and our capacity for thought, were not designed by a great big intelligent designer in the sky who decided how we should think and what our motivations should be. Our intelligence and our motivations evolved. Most (probably all) AI researchers would agree with that. Yet many still seem to think that we humans are intelligent designers who can design machines that will think the way we want them to think and have the motivations we want them to have. If I am right about the evolution of technology, they are wrong.

The problem is a kind of deluded anthropomorphism: we imagine that a thinking machine must work the way that we do, yet we so badly mischaracterise ourselves that we do the same with our machines. As a consequence we fail to see that all around us vast thinking machines are evolving on just the same principles as our brains once did. Evolution, not intelligent design, is sculpting the way they will think.

The reason is easy to see and hard to deal with. It is the same dualism that bedevils the scientific understanding of consciousness and free will. From infancy, it seems, children are natural dualists, and this continues throughout most people’s lives. We imagine ourselves as the continuing subjects of our own stream of consciousness, the wielders of free will, the decision makers that inhabit our bodies and brains. Of course this is nonsense. Brains are massively parallel instruments untroubled by conscious ghosts.

This delusion may, or may not, have useful functions but it obscures how we think about thinking. Human brains evolved piecemeal, evolution patching up what went before, adding modules as and when they were useful, and increasingly linking them together in the service of the genes and memes they carried. The result was a living thinking machine.

Our current digital technology is similarly evolving. Our computers, servers, tablets, and phones evolved piecemeal, new ones being added as and when they were useful and now being rapidly linked together, creating something that looks increasingly like a global brain. Of course in one sense we made these gadgets, even designed them for our own purposes, but the real driving force is the design power of evolution and selection: the ultimate motivation is the self-propagation of replicating information.

We need to stop picturing ourselves as clever designers who retain control and start thinking about our future role. Could we be heading for the same fate as the humble mitochondrion: a simple cell that was long ago absorbed into a larger cell? It gave up independent living to become a powerhouse for its host while the host gave up energy production to concentrate on other tasks. Both gained in this process of endosymbiosis.

Are we like that? Digital information is evolving all around us, thriving on billions of phones, tablets, computers, servers, and tiny chips in fridges, cars, and clothes, passing around the globe, interpenetrating our cities, our homes, and even our bodies. And we keep on willingly feeding it. More phones are made every day than babies are born, 100 hours of video are uploaded to the Internet every minute, billions of photos are uploaded to the expanding cloud. Clever programmers write ever cleverer software, including programs that write other programs that no human can understand or track. Out there, taking their own evolutionary pathways and growing all the time, are the new thinking machines.

Are we going to control these machines? Can we insist that they are motivated to look after us? No. Even if we can see what is happening, we want what they give us far too much not to trade our independence for it.

So what do I think about machines that think? I think that from being a little independent thinking machine I am becoming a tiny part inside a far vaster thinking machine.
