Could a Robot Be President?

Yes, it sounds nuts. But some techno-optimists really believe a computer could make better decisions for the country—without the drama and shortsightedness we accept from our human leaders.

July 08, 2017

President Donald Trump reportedly spends his nights alone in the White House, watching TV news and yelling at the screen. He wakes up early each morning to watch more television and tweet his anger to the world … or Mika Brzezinski … or CNN. He takes time out of meetings with foreign leaders to brag about his Electoral College win.

That all sounds, at the very least, distracting for a person with the weight of the free world on his shoulders. But if his fury at the Russia scandal and insecurity about his election are stealing time from the important decisions of the presidency, Trump is by no means the first commander in chief whose emotions or personality have gotten in the way of the job. From Warren Harding’s buddies enriching themselves in Teapot Dome to Richard Nixon’s Watergate hubris to Bill Clinton nearly getting kicked out of office because he couldn’t control his base urges, it’s human weakness—jealousy, greed, lust, nepotism—that most often upends presidencies.

If you’re imagining a Terminator-style machine sitting behind the Resolute desk in the Oval Office, think again. The president would more likely be a computer in a closet somewhere, chugging away at solving our country’s toughest problems. Unlike a human, a robot could take into account vast amounts of data about the possible outcomes of a particular policy. It could foresee pitfalls that would escape a human mind and weigh the options more reliably than any person could—without individual impulses or biases coming into play. We could wind up with an executive branch that works harder, is more efficient and responds better to our needs than any we’ve ever seen.

There’s not yet a well-defined or cohesive group pushing for a robot in the Oval Office—just a ragtag bunch of experts and theorists who think that futuristic technology will make for better leadership, and ultimately a better country. Mark Waser, for instance, a longtime artificial intelligence researcher who works for a think tank called the Digital Wisdom Institute, says that once we fix some key kinks in artificial intelligence, robots will make much better decisions than humans can. Natasha Vita-More, chairwoman of Humanity+, a nonprofit that “advocates the ethical use of technology to expand human capacities,” expects we’ll have a “posthuman” president someday—a leader who does not have a human body but exists in some other way, such as a human mind uploaded to a computer. Zoltan Istvan, who made a quixotic bid for the presidency last year as a “transhumanist,” with a platform based on a quest for human immortality, is another proponent of the robot presidency—and he really thinks it will happen.

“An A.I. president cannot be bought off by lobbyists,” he says. “It won’t be influenced by money or personal incentives or family incentives. It won’t be able to have the nepotism that we have right now in the White House. These are things that a machine wouldn’t do.”

The idea of a robot ruler has been floating around in science fiction for decades. In 1950, Isaac Asimov’s short story collection I, Robot envisioned a world in which machines appeared to have consciousness and human-level intelligence. They were controlled by the “Three Laws of Robotics.” (First: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”) Super-advanced A.I. machines in Iain Banks’ Culture series act as the government, figuring out how best to organize society and distribute resources. Pop culture—like, more recently, the movie Her—has been imagining human-like machines for a long time.

But so far, anything close to a robot president has been limited to those kinds of stories. Maybe not for much longer. In fact, true believers like Istvan say our computer leader could be here in less than 30 years.


Of course, replacing a human with a robot in the White House would not be simple, and even those pushing the idea admit there are serious obstacles.

For starters, how a machine leader would fit in with our democratic republic is anybody’s guess. Istvan, for one, envisions regular national elections, in which voters would decide on the robot’s priorities and how it should come out on moral issues like abortion; the voters would then have a chance in the next election to change those choices. The initial programming of the system would no doubt be controversial, and the programmers would probably need to be elected, too. All of this would require amending the Constitution, Istvan acknowledges.

From a technical point of view, artificial intelligence is not yet smart enough to run the country. The list of what robots can currently accomplish is long—from diagnosing diseases and driving cars to winning “Jeopardy!” and answering questions on your smartphone—and it’s rapidly expanding. But as they exist now, all of our A.I. systems use “narrow” intelligence, meaning they need to be programmed specifically to perform any given task.

A president, of course, does more than one narrow thing.

“If you’re president of the United States, what bubbles up to your level are the problems that nobody else in the hierarchy was able to solve. You get stuck with the hardest nuts to crack,” says Illah Nourbakhsh, a robotics professor at Carnegie Mellon who previously worked on robots for NASA. “And the hardest nuts to crack are the most meta-cognitive, the ones with the fewest examples to go by, and the ones where you have to use the most creative thinking.”

To accomplish all that, a robot president would need what scientists call artificial general intelligence, also known as “strong A.I.”—intelligence as broad, creative and flexible as a human’s. That’s the kind of A.I. that Istvan and others are referring to when they talk about robot presidents. Strong A.I. isn’t here yet, but some experts think it’s coming soon.

“I am one of those people who believe that you’re going to get human-level intelligence much, much, much sooner than most people think,” Waser says. “Around 2008, I said that it would occur close to 2025. Ten years later, I don’t see any reason why I would modify that estimate.” Vita-More agrees, predicting we could have an early version of strong A.I. within 10 or 15 years.

But that optimism requires a key assumption: that we will soon reach a time when computers can solve their own problems—what scientists call the “technological singularity.” At that point, computers would become smarter than humans and could design new computers that are even smarter, which would then design computers that are smarter still. Nourbakhsh says, however, that he doesn’t think all the technical problems involved in building better and better computers can be solved by machines. Some require new discoveries in chemistry or the invention of new types of material to use in building these supersmart computers.

Another big technical problem to solve before computers could run the country: Robots don’t know how to explain themselves. Information goes in, a decision comes out, but no one knows why the machine made the choice it did—a huge hurdle for a job that constantly demands decisions with unpredictable inputs and grave consequences. Say what you will about Donald Trump or Bill Clinton, but at least they’re able to think about their thought processes and, in turn, explain their actions to the public, lobby for them in Congress, and spin them on TV or Twitter. A computer, at least for now, can’t do that.

“Machines have to be able to cooperate with other machines to be effective,” Waser says. “They have to cooperate with humans to be safe.” And cooperation is hard if you can’t explain your thought process to others.

This shortcoming is partly because of the way A.I. systems work. In an approach called machine learning, the computer analyzes mountains of data and searches for patterns—patterns that might make sense to the computer but not to humans. In a variant approach called deep learning, a computer uses multiple layers of processors: One layer produces a rough output, which is then refined by the next layer, and that output, in turn, is refined by the next layer. The outputs of those middle layers are opaque to any outside human observers—the computer spits out only the final result.
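The layered refinement described above can be sketched in a few lines of code. This is a deliberately tiny, illustrative toy (the weights and inputs are made up, and real deep learning systems learn their weights from data rather than having them hand-picked), but it shows the key point: each layer’s output is just a vector of numbers that the next layer consumes, with no human-readable meaning attached.

```python
# Illustrative toy only: a three-layer network with hypothetical, hand-picked
# weights. Each layer refines the previous layer's output, and the
# intermediate values carry no human-readable labels -- which is why the
# middle of such a system is opaque to outside observers.

def layer(inputs, weights):
    """One processing layer: weighted sums passed through a simple
    nonlinearity (ReLU). Output values are meaningful only to the next layer."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs))) for row in weights]

w1 = [[0.5, -1.2, 0.3], [0.9, 0.1, -0.7]]  # made-up weights, layer 1
w2 = [[1.1, -0.4], [-0.6, 0.8]]            # made-up weights, layer 2
w3 = [[0.7, 0.2]]                          # made-up weights, final layer

raw_input = [0.2, 0.9, -0.5]   # e.g. pixel or feature values
hidden1 = layer(raw_input, w1) # rough first-pass output
hidden2 = layer(hidden1, w2)   # refined by the next layer
final = layer(hidden2, w3)     # only this single number is ever reported

print(hidden1, hidden2)  # opaque intermediate values
print(final)             # the one result a user sees
```

Asking the system “why” yields only those unlabeled intermediate numbers, not a chain of reasoning a president could defend on television.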

“You can take your kid to the movie Inside Out, and then you can have this really interesting and deep conversation with your kid about [their] emotions,” Nourbakhsh says. “A.I. can’t do that because A.I. doesn’t understand the idea of going from a topic to a metatopic, to talking about a topic.”


Even if we can fix all those problems, robots still might not be the great decision-makers we imagine them to be. One of the main selling points of a robot president is its ability to crunch data and come to decisions without all the biases that plague humans. But even that advantage might not be as clear as it seems—researchers have found it terribly difficult to teach A.I. systems to avoid prejudices.

A Google photo app released in 2015, for instance, used A.I. to identify the contents of photos and then categorize the pictures. The app did well, except for one glaring mistake: It labeled several photos of black people as photos of “gorillas.” Not only was the system wrong, but it didn’t know how to recognize the appalling historical and social context of its labeling. The company apologized and said it would investigate the problem.

Other examples carry life-altering consequences. An A.I. system used by courts across the country to determine defendants’ risk of reoffending—which then guided judges’ bail and sentencing decisions—seemed like the perfect use for autonomous technology. It could crunch large amounts of data, find patterns that people might miss and avoid biases that plague human judges and prosecutors. But a ProPublica investigation found otherwise. Black defendants were 77 percent more likely than otherwise-identical white defendants to be pegged as at risk of committing a future violent crime, the report found. (The for-profit company that created the system disputed ProPublica’s findings.) The A.I. system did not explicitly consider a defendant’s race, but a number of the factors it weighed—like poverty and joblessness—are correlated with race. So the system reached its biased result based on data that, while neutral on its face, carried the baked-in results of centuries of inequality.

This is a problem for all computers: Their output is only as good as their input. An A.I. system that is fed information inflected by race is at risk of putting out racist results.

“Technological systems are not free from bias. They’re not automatically fair just because they’re numbers,” says Madeleine Clare Elish, a cultural anthropologist studying at Columbia University. “My biggest fear is people won’t come to terms with how A.I. technologies will encode the biases and flaws and prejudices of their creators.”

A report on A.I. published by the Obama administration in October raised the same concern: “Unbiased developers with the best intentions can inadvertently produce systems with biased results, because even the developers of an A.I. system may not understand it well enough to prevent unintended outcomes,” it said.

Once we develop supersmart A.I., some experts think concerns about bias will evaporate. Such a system “would detect bias,” says Vita-More, the Humanity+ chairwoman. “It would have a psychological meter that would detect ‘where is that information coming from?’ ‘what do those people need?’” and account for the flaws in the data.

Hacking is another A.I. risk that could possibly be solved with stronger A.I. What if the Russians or North Koreans or Chinese broke into our robot president, gaining access to the whole of American government? And how would we even know if the decisions a robot president made were being manipulated? The solution, supporters say, is a machine that’s smart enough to not only solve our country’s biggest problems, but also to block anyone who would try to sabotage that effort.

Nourbakhsh, for one, says that relying on strong A.I. to solve existing problems with A.I. is mostly a rhetorical flourish. “If you name a problem, somebody can say, ‘These computers are superhuman in their intelligence abilities, and therefore they will find a solution to that problem,’” he says. Ultimately, he thinks, there are problems humans will have to solve on their own.


If these obstacles sound discouraging for the pro-robot caucus, there might be a middle ground that suffices for now: a computer that can chug through all the decisions a president has to make—not to make the final choices itself, but to help guide the human commander in chief. Think of it as a human-computer partnership that produces better results than either could alone.

Jonathan Zittrain, an internet law professor at Harvard Law School, thinks that even with A.I.’s flaws, computers could serve as checks against human biases. “A.I., properly trained, offers the prospect of more systematically identifying bias in particular and unfairness in general,” he wrote in a recent blog post.

Maybe a computer, working alongside a human president, could still rein in some of the president’s flaws.

“The place that A.I. can come into play is in understanding ramifications,” Nourbakhsh says. He points to Trump’s travel ban as an example of a presidential decision that turned out badly because its legal and constitutional implications weren’t fully grasped or thought through. A computer could have analyzed the likely legal responses by opponents and courts.

Already, several studies have shown that “a human-machine team can be more effective than either one alone,” as the Obama administration’s A.I. report put it. “In one recent study, given images of lymph node cells and asked to determine whether or not the cells contained cancer, an A.I.-based approach had a 7.5 percent error rate, where a human pathologist had a 3.5 percent error rate; a combined approach, using both A.I. and human input, lowered the error rate to 0.5 percent.” A venture capital firm in Hong Kong is putting that kind of partnership into practice. It announced in 2014 that it was adding an A.I. system to its board of directors to crunch numbers and advise humans on the board about what investment decisions to make.

Keeping a person as president, but with a computer sidekick, would also let us keep the many nebulous benefits that a human president provides. The leader of the country, after all, isn’t just a “decider.” The president can also be a hero or a villain, a figure to emulate or lampoon—not to mention a unifier, or divider, relying on human rhetoric and emotion.

“The president is a national symbol,” notes Lori Cox Han, a political science professor at Chapman University. “When something goes well or something goes really badly, we look to the president.” And in a crisis, in all those times we expect the president to do more than just make a decision, we might still want a human in charge.


About basicrulesoflife

Year 1935. Interests: Contemporary society problems, quality of life, happiness, understanding and changing ourselves - everything based on scientific evidence. Artificial Intelligence Foundation Latvia, Editor.
This entry was posted in Artificial Intelligence, Human Evolution.
