Defining Intelligence

https://www.edge.org/conversation/stuart_russell-defining-intelligence:

I worked on coming up with a method of defining intelligence that would necessarily have a solution, as opposed to being necessarily unsolvable. That was the idea of bounded optimality, which, roughly speaking, says that you have a machine and the machine is finite: it has finite speed and finite memory. That means that there is only a finite set of programs that can run on that machine, and out of that finite set, one program (or some small equivalence class of programs) does better than all the others; that’s the program we should aim for.

That’s what we call the bounded optimal program for that machine and also for some class of environments that you’re intending to work in. We can make progress there because we can start with very restricted types of machines and restricted kinds of environments and solve the problem. We can say, “Here is, for that machine and this environment, the best possible program that takes into account the fact that the machine doesn’t run infinitely fast. It can only do a certain amount of computation before the world changes.” 
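
To make that concrete, here is a minimal formal sketch of bounded optimality; the notation is my summary in the spirit of Russell’s published work on bounded-optimal agents, not a quotation. Let $\mathcal{L}_M$ be the finite set of programs that can run on machine $M$, let $\mathbf{E}$ be the class of environments the machine is intended for, and let $V(l, M, \mathbf{E})$ be the expected utility of running program $l$ on $M$ across $\mathbf{E}$, counting the cost of the time each computation takes. The bounded optimal program is then

\[ l_{\text{opt}} = \operatorname*{arg\,max}_{l \in \mathcal{L}_M} V(l, M, \mathbf{E}), \]

and the maximum is guaranteed to exist precisely because $\mathcal{L}_M$ is finite.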

My field of work is artificial intelligence, and since I started I’ve been asking myself how we can create truly intelligent systems. Part of my brain is always thinking about the next roadblock that we’re going to run into. Why are the things we understand how to do so far going to break when we put them in the real world? What’s the nature of the breakage? What can we do to avoid that? How can we then create the next generation of systems that will do better? Also, what happens if we succeed?

What is the nature of the problem and can we solve it? I would like to be able to solve it. The alternative to solving the control problem is to either put the brakes on AI or prevent the development of certain types of systems altogether if we don’t know how to control them. That would be extremely difficult because there’s this huge pressure. We all want more intelligent systems; they have huge economic value.

Bill Gates said that solving machine-learning problems would be worth ten Microsofts. At that time, that would have come out to about $4 trillion, which is a decent incentive for people to move technology forward. How can we make AI more capable, and if we do, what can we do to make sure that the outcome is beneficial? Those are the questions that I ask myself.

Another question I ask is: Why do my colleagues not ask themselves this question? Is it just inertia, that the typical engineer or computer scientist is in a rut? Are they riding the rails of technological progress without thinking about where that railway is heading, or whether they should turn off or slow down? Or am I just wrong? Is there some mistake in my thinking that has led me to the conclusion that the control problem is serious and difficult? I’m always asking myself whether I’m making a mistake.

In economics, in the study of utility theory, how do you construct these functions that describe value?
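
As a toy illustration of how such a function might be constructed (my own example, not from the conversation): a concave utility function over wealth encodes risk aversion, so an agent can prefer a sure outcome over a gamble with higher expected wealth.

    import math

    def log_utility(wealth: float) -> float:
        # Concave utility: each extra dollar matters less, i.e. risk aversion.
        return math.log(wealth)

    def expected_utility(lottery, utility):
        # lottery: a list of (probability, wealth) outcomes.
        return sum(p * utility(w) for p, w in lottery)

    sure_thing = [(1.0, 50_000)]                  # $50k for certain
    gamble = [(0.5, 10_000), (0.5, 100_000)]      # a fair coin flip

    # Expected wealth favors the gamble ($55k vs. $50k)...
    print(expected_utility(sure_thing, log_utility))  # ~10.82
    print(expected_utility(gamble, log_utility))      # ~10.36
    # ...but expected log-utility favors the sure thing: risk aversion.

Much of utility theory is then about working backwards: inferring the shape of such a function from the choices people actually make.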

A million years ago would be too early to try to put constraints on a technology, but with respect to global warming, I would say 100 or 120 years ago would have been the right time. We had just developed the internal combustion engine and electricity generation and distribution, and at that time, before we became completely tied to fossil fuels, we could have put a lot of energy and effort into also developing wind and solar power, knowing that we could not rely on fossil fuels because of the consequences. And we knew: Arrhenius and other scientists had already shown that this would be the consequence of burning all these fossil fuels.

Alexander Graham Bell wrote papers about it, but they were ignored. There was no vote. Governments tend to get captured by corporate lobbies, not so much by scientists. You might say that scientists invented the internal combustion engine, but they also discovered the possibility of global warming and warned about it. Society tends to take the goodies but not listen to the downside.

It’s always very difficult for a democracy to decide on the right regulations for complicated technological issues. How should we regulate nuclear power? How should we regulate medicines? Often the regulation follows some catastrophe and can be poorly designed because it is drawn up in the middle of outrage and fear.

https://ai100.stanford.edu/sites/default/files/ai100report10032016fnl_singles.pdf:

“No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.”

The goal of AI applications must be to create value for society. Our policy recommendations flow from this goal, and, while this report is focused on a typical North American city in 2030, the recommendations are broadly applicable to other places over time. Strategies that enhance our ability to interpret AI systems and participate in their use may help build trust and prevent drastic failures. Care must be taken to augment and enhance human capabilities and interaction, and to avoid discrimination against segments of society. Research to encourage this direction and inform public policy debates should be emphasized.

The measure of success for AI applications is the value they create for human lives.

AI could widen existing inequalities of opportunity if access to AI technologies, along with the high-powered computation and large-scale data that fuel many of them, is unfairly distributed across society. These technologies will improve the abilities and efficiency of the people who have access to them. A person with access to accurate machine translation technology will be better able to use learning resources available in different languages. Similarly, if speech translation technology is only available in English, people who do not speak English will be at a disadvantage.

We humans are emotion-driven machines that do not understand or know ourselves. The genetically inherited needs that determine our behavior are a million-year-old product of evolution and are not suited to our current problems and survival. The only hope for long-term survival is AI: our attempt, and our opportunity, to move our consciousness to other, more stable physical media and to create a higher and more flexible intelligence. The attempt to ‘create value for human lives’ is short-sighted: we must try to save civilization from collapse and achieve long-term survival. The long-term survival of consciousness and intelligence in this part of the Universe is the most important value and task of our civilization. I.V.

 
