AI: an engineering approach

Abstract.

In this paper the author formulates definitions of basic notions and proposes the conditions necessary for achieving general artificial intelligence (AGI).

The author maintains that the last big hurdle to achieving AGI in robots is the fact that they do not create external world models (EWM).

Keywords.

Artificial intelligence, Emergence, Consciousness, External world models, Thinking, Learning.

 

  1. Current state.

The common features (random search, memories, learning, thinking, and EWM) necessary for the creation of AI are known [1], [2], but the conditions for the emergence of these features are not considered: “We believe consciousness will result as an emergent behavior if there is adequate sensor input, processing power, and learning” [3].

Human-like intelligence is referred to as strong AI. General intelligence, or strong AI, has not been achieved yet and remains a long-term goal of AI research. Conferences and hundreds of papers contain complicated discussions about consciousness and AI, but some essential notions are missing.

1.1. Intelligence and its constituents are emergent processes. In animals the simplest features are inherited genetically (random search, memories, learning), and after birth more complicated features develop: learning, thinking, consciousness, and EWM. In artificial machines these processes and their emergence conditions must be preprogrammed: it is impossible to train robots to achieve features for which evolution has spent millions of years. The first step is preprogramming the features, tendencies, and rewards; the second is to create and support the conditions for the emergence and development of more complicated features: thinking, consciousness, and EWM. This means determining and defining the emergence conditions and implementing them.

1.2. Thinking about one model is described in [4]: “For the robot to learn how to stand and twist its body, for example, it first performs a series of simulations in order to train a high-level deep-learning network how to perform the task—something the researchers compare to an ‘imaginary process.’ This provides overall guidance for the robot, while a second deep-learning network is trained to carry out the task while responding to the dynamics of the robot’s joints and the complexity of the real environment.”

It is impossible to preprogram thinking for all possible actions and processes: the structure of this one process has to be replicated and adapted for an uncountable number of other processes.
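The two-phase scheme quoted above (train in an “imaginary process” first, then refine on the real system) can be illustrated with a minimal sketch. Everything here is invented for illustration: the balancing dynamics, the noise level, and the random-search tuner are hypothetical stand-ins, not the method of [4].

```python
import random

def imagined_dynamics(angle, action):
    # The robot's internal, simplified simulation of balancing.
    return angle + 0.8 * action - 0.1 * angle

def real_dynamics(angle, action):
    # The "real" system differs slightly from the imagined one and is noisy.
    return angle + 0.7 * action - 0.15 * angle + random.gauss(0, 0.01)

def train_gain(dynamics, start_gain, trials=200, steps=20):
    """Random search for a feedback gain that keeps the angle near zero."""
    best_gain, best_cost = start_gain, float("inf")
    for _ in range(trials):
        gain = best_gain + random.gauss(0, 0.1)
        angle, cost = 0.5, 0.0  # start tilted
        for _ in range(steps):
            angle = dynamics(angle, -gain * angle)
            cost += angle * angle
        if cost < best_cost:
            best_gain, best_cost = gain, cost
    return best_gain

random.seed(0)
g_imagined = train_gain(imagined_dynamics, start_gain=0.0)  # "imaginary" phase
g_real = train_gain(real_dynamics, start_gain=g_imagined)   # refinement phase
```

The point of the sketch is the division of labor: the cheap internal model does most of the search, and the real system only fine-tunes the result.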

1.3. Contemporary robots are preprogrammed machines with fixed behaviors. When confronted with new and unknown situations, these robots lack adequate behavior. The main problem is the emergence of EWM. Some AI emergence conditions are listed in [1], [2], [3]. Promising systems have input sensors, actuators, and a body distinct from the environment, in which the creation of an unrestricted number of EWM can be induced [4].

1.4. A step in the direction of domain-independent reinforcement learning (RL) is deep learning; corresponding results have been achieved with the PR2 [5]. UC Berkeley researcher Levine says: “For all our versatility, humans are not born with a repertoire that can be deployed like a Swiss army knife, and we don’t need to be programmed. Instead we learn new skills over the course of our life from experience and from other humans.”

The reality is different: minutes after birth most animals start using genetically inherited movements (drinking mother’s milk, following the mother, running, swimming, or flying). These movements are not learned after birth but retrieved from a huge genetically inherited library and induced by the reward system. They were learned and fine-tuned by previous generations of individuals, and after birth they are recognized and used after a few tries.

Robot builders have to copy this evolutionary experience: millions of movements must first be learned slowly by contemporary robots and then used: taken from a library, optimized, and fine-tuned for real conditions. The creation and replication of EWM, and further learning, then follow. But this takes much more time [6].
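The movement-library idea described here can be sketched as follows. The class, the cost functions, and the search procedure are hypothetical illustrations of the “learn slowly once, then reuse and fine-tune” pattern, not an existing robot API.

```python
import random

class MovementLibrary:
    """Movements are stored as parameter vectors: learned slowly by random
    search once, then recalled and briefly fine-tuned for new conditions."""

    def __init__(self):
        self.movements = {}  # name -> parameter list

    def learn(self, name, cost, dims=3, trials=300):
        """Slow initial learning by random search against a cost function."""
        best = [0.0] * dims
        best_cost = cost(best)
        for _ in range(trials):
            cand = [p + random.gauss(0, 0.2) for p in best]
            c = cost(cand)
            if c < best_cost:
                best, best_cost = cand, c
        self.movements[name] = best
        return best

    def recall(self, name, cost, trials=30):
        """Fast reuse: start from the stored movement, fine-tune briefly."""
        params = list(self.movements[name])
        best_cost = cost(params)
        for _ in range(trials):
            cand = [p + random.gauss(0, 0.05) for p in params]
            c = cost(cand)
            if c < best_cost:
                params, best_cost = cand, c
        self.movements[name] = params
        return params
```

Note the asymmetry in `trials` and step size: the expensive search happens once, while each reuse only pays for a short local adjustment, which mirrors the evolutionary argument above.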

1.5. Deep learning is close to the way evolution works [9]: “Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output.”

1.6. The necessity of models, and of improving them via reinforcement learning, is mentioned in [7]:

“With a model-free approach, these works could not leverage the knowledge about the underlying system, which is essential and plentiful in software engineering, to enhance their learning. In this paper, we introduce the advantages of model-based RL. By utilizing engineering knowledge, the system maintains a model of interaction with its environment and predicts the consequences of its actions, to improve and guarantee system performance. We also discuss the engineering issues and propose a procedure to adopt model-based RL to build self-adaptive software and bring policy evolution closer to real-world applications.”
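The model-based idea in this quotation can be shown in a deliberately tiny sketch (the five-state chain, the reward, and the table model are invented for illustration): the system first builds a transition model from observed interactions, then plans entirely inside that model to predict the consequences of its actions.

```python
# Model-based RL sketch on a 5-state chain with a goal at state 4.
N_STATES, ACTIONS = 5, (-1, +1)

def step(state, action):
    # The (hypothetical) real environment: move along the chain, clipped.
    return max(0, min(N_STATES - 1, state + action))

# 1. Gather experience and build a transition model (here a lookup table).
model = {}
for s in range(N_STATES):
    for a in ACTIONS:
        model[(s, a)] = step(s, a)  # learned from observed transitions

# 2. Plan with the model only: value iteration toward the goal state.
values = [0.0] * N_STATES
for _ in range(50):
    for s in range(N_STATES):
        values[s] = max((1.0 if model[(s, a)] == N_STATES - 1 else 0.0)
                        + 0.9 * values[model[(s, a)]] for a in ACTIONS)

def policy(s):
    # Choose the action whose predicted consequence has the highest value.
    return max(ACTIONS, key=lambda a: values[model[(s, a)]])
```

Once the model is learned, no further interaction with the environment is needed for planning, which is exactly the leverage the quoted passage claims for model-based RL.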

1.7. Generalization in AI is still a problem: “AI programs … were successful at specific tasks, but generalizing the learned behavior to other domains was not attempted. How can generalized intelligence ever be realized? This paper will examine the different aspects of generalization and whether it can be performed successfully by future computer programs or robots” [8].

1.8. Jeff Hawkins has named three fundamental attributes of the neocortex necessary for intelligence to emerge: learning by rewiring, sparse distributed representations, and sensorimotor integration [13]. These three attributes provide and support the intelligence emergence conditions 3.1-3.4.

 

 

  2. Definitions of basic notions.

It is impossible to create something that has not been defined. Without definitions science is impossible. But all definitions are provisional.

Generation of information in living systems is accomplished via random search, which creates the entropy space, and selection, which eliminates the inappropriate or useless states and processes.
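This random-search-plus-selection mechanism can be demonstrated in a few lines. The bit-string “genome” and the fitness target are hypothetical; the point is only that random variation generates candidate states and selection discards the unfit ones, so information about the target accumulates.

```python
import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # an arbitrary illustrative pattern

def fitness(genome):
    # Number of positions matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

genome = [random.randint(0, 1) for _ in TARGET]  # random initial state
for _ in range(200):
    # Random search: flip each bit with 10% probability.
    mutant = [g ^ (random.random() < 0.1) for g in genome]
    # Selection: keep the mutant only if it is at least as fit.
    if fitness(mutant) >= fitness(genome):
        genome = mutant
```

After a few hundred generations the genome matches the target almost everywhere, although no step of the process ever “knew” the target as a whole.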

2.1. Intelligence is an information processing system’s (IPS) ability to achieve its goals by adapting its behavior to a changing environment, using preprogrammed (genetically inherited or obtained from the environment) information, and optimizing its behavior by creating and using models of the environment and predictions about the environment’s reactions.

2.2. Artificial intelligence is the simulation of intelligence in machines.

2.3. EWMs are preprogrammed or learned collaboration algorithms between the IPS and the environment. Activation of an EWM enables the IPS to predict EW events. When these predictions are correct, we say that the IPS understands the EW.

2.4. General artificial intelligence (AGI) is human-like intelligence in which the IPS achieves its goals by creating an unrestricted number of EWM and predicting EW events.

2.5. Simple learning is accomplished via random moves, sensory feedback, and choosing the best moves. More complex learning is accomplished via activation of existing and creation of new EWM (with corresponding behaviors, skills, values, and preferences).

2.6. Thinking is the activation of event streams from the past or an imagined future, marking them with symbols, and applying the rules of logic and the laws of nature (to the degree they are known to the system) to the EWM, without executing the corresponding actions. This allows the IPS to predict EW reactions and to plan and choose its own behavior.

2.7. Consciousness is the model of self.
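The definition of thinking in 2.6 can be sketched concretely. The grid world, the move table, and the `ewm_predict` function below are hypothetical: the point is that the IPS evaluates imagined action sequences against its EWM and selects a plan without ever moving an actuator.

```python
from itertools import product

GOAL = (2, 2)
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def ewm_predict(pos, action):
    """External world model: predict the next position for an action."""
    dx, dy = MOVES[action]
    return (pos[0] + dx, pos[1] + dy)

def think(start, depth=4):
    """Replay the EWM over all imagined action sequences of a given depth
    and return the plan predicted to end closest to the goal. No real
    action is executed during this process."""
    best_plan, best_dist = None, float("inf")
    for candidate in product(MOVES, repeat=depth):
        pos = start
        for action in candidate:
            pos = ewm_predict(pos, action)  # imagined, not executed
        dist = abs(pos[0] - GOAL[0]) + abs(pos[1] - GOAL[1])
        if dist < best_dist:
            best_plan, best_dist = candidate, dist
    return best_plan

plan = think((0, 0))
```

Only after this internal search would the chosen plan be handed to the actuators, which is the separation between thinking and acting that definition 2.6 draws.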

 

 

 

  3. Intelligence and EWM emergence conditions.

3.1. Features created by structure and programming: the ability and tendency to memorize EW event strings (“the ability to recognize and predict temporal sequences of sensory inputs”) [12]; a neuron-column, layer-like structure, which provides the ability to filter out the principal, basic lines in all input patterns (generalization [1], [2], [8]); and a huge number of connections between neurons, which provides the possibility to activate similar patterns [10].

3.2. Sensor signal processing, actuator control (the transition from the multi-coordinate world of actuator moves to the four-coordinate world of output actions), and evaluation of and learning from the system’s own actions.

3.3. Random search, selection, generalization, a reward and punishment system, and copying or multiplying existing EWM or creating new EWM and movement libraries. For complicated EWM this is achieved by thinking.

3.4. The ability to create a model of self, which receives EW signals and executes the internal program’s decisions. This is called consciousness. All sensory streams are integrated and a map of the EW is created, in which the receiving and acting subject plays the main role. Lyle N. Long names it Unity: “All sensor modalities melded into one experience” [1].

3.5. A hierarchical reward and punishment system, which creates values: the laws saying what is good and what has to be avoided.

3.6. For general AI, the ability to learn, to understand spoken and written language, and to speak is necessary. For understanding language, the words and symbols of a language must be connected with the system’s own sensory experience and EWM.

 

 

 

  4. Emergence.

All atoms and molecules of the physical world, all physical processes, chemical reactions, human-made products, inventions, and all living beings are systems with emergent properties. If we have good models of systems and processes, we can explain and predict the emergent properties from the properties of the parts and known physical laws. I will not consider here declarations about the impossibility of generating or understanding emergence. Human intelligence, consciousness, and thinking are complex emergent processes for which we can define and implement the emergence conditions.

Many emergence conditions in contemporary AI systems are embedded structurally, e.g., the big number of connections between neurons, which allows the activation of similar memories and processes; the layer-like structure of a NN, which allows generalizing and creating basic lines and abstract notions for incoming pictures; and the reward and punishment system, which directs behavior. In living entities these features are inherited genetically; in artificial systems they must be preprogrammed.

Consciousness emerges in complicated multi-level systems; therefore we cannot reduce it to the properties of neurons. There are many complexity levels between the basic elements (neurons) and the final emergent properties: sensors, neurons, neuron columns, output actuators, and processes: memory reading and writing, generalizing, thinking, reward systems, EWM generation and development. But we can formulate the emergence conditions, and when they are implemented, consciousness will emerge. The first results have already been obtained [4], [6], [9].

The teaching-programming of a robot will be like raising a human infant [1], [8]. Randomly generated actuator moves will create thousands of event streams (with sensory signals added to each action) recorded in the robot’s memory. If after many trials and actuator moves the robot stops hitting obstacles, starts grabbing and moving objects, and shows collaboration elements with the EW (definite reactions to external visual, audio, or touch signals), or, as in [4], learns to stand vertically on its own feet, or imitates learned sounds, this means that the robot has created maps and models of the EW.

How do we teach the robot to adapt to the environment and optimize its own body moves? The way all animals and humans do: connect input sensor signals to the current situation and processes, remember them, and use them next time in similar situations.
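This remember-and-reuse scheme can be sketched as a nearest-neighbor memory. The sensor vectors, action names, and the `remember`/`recall` helpers are hypothetical illustrations, not an existing robot interface.

```python
# Stored (sensor reading, action) pairs are recalled by nearest-neighbor
# lookup: the robot reuses the action taken in the most similar situation.
memory = []  # list of (sensor_vector, action) pairs

def remember(sensors, action):
    memory.append((sensors, action))

def recall(sensors):
    """Return the action from the most similar remembered situation."""
    def distance(entry):
        stored, _ = entry
        return sum((a - b) ** 2 for a, b in zip(stored, sensors))
    return min(memory, key=distance)[1]

# Hypothetical experience: obstacle on the left -> turn right, and so on.
remember((0.9, 0.1), "turn_right")
remember((0.1, 0.9), "turn_left")
remember((0.5, 0.5), "go_straight")
```

A new sensor reading never seen before, such as `(0.8, 0.2)`, still retrieves a sensible action, because similarity, not exact match, drives the lookup.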

Strong AI will arrive only when we manage to get our robots to create models of the external world by themselves. In animals and humans the everyday usage of EWM is partly unconscious. This means that the essential lines are maintained but concrete details are abandoned. For example, all animals unconsciously know and use the Earth’s gravitational force, know that most objects of the external world have hard surfaces while some are soft or liquid, some hot or cold, and adjust their behavior accordingly.

 

 

  5. Discussion and Conclusions.

Emergent behavior is the main process of intelligent systems. EWM and consciousness emerge only in complex systems. This is the price we have to pay for AI and the creation of consciousness. The complexity of a system can be measured by the number of its parts and emergent properties.

It is not possible to preprogram EWM for all life situations. If future robots do not create EWM for life situations by themselves, they will not have true intelligence:

 

“Rule-based systems and cognitive architectures require humans to program the rules, and this process is not scalable to billions of rules. The machines will need to rely on hybrid systems, learning, and emergent behavior; and they will need to be carefully taught and trained by teams of engineers and scientists. Humans will not be capable of completely specifying and programming the entire system; learning and emergent behavior will be a stringent requirement for development of the system” [11].

 

Living beings are genetically prepared to adapt to an unknown and changing environment. The first environment all living beings confront after hatching or birth is their own body. “Movement is a fundamental characteristic of living systems” [8]. In a relatively short time, through series of random moves, they recognize and start using basic genetically inherited actions. Developing and improving these series of actions via learning and thinking leads to the creation of EWM. The human body has about 500 muscles; a comparable robot can have about 50 actuators. Researchers have realized that it is impossible to solve the equations of motion in real time even for simple moves, because it takes “minutes or hours of computation for seconds of motion” [8]. This means that all moves have to be optimized via slow supervised learning and afterwards used automatically, taken from the library. Successful actions are stored, and their algorithms are copied and used for the next models.

Intelligence and its constituents are gradual features [1], [11], which can be more or less pronounced, developed, and recognizable. Conditions 3.1-3.6 facilitate the emergence of thinking, the creation of EWM, and consciousness, but the degree of necessity of each feature is not known.

The approach of creating an actuator movement library and transferring it to new robots (see 1.3) will have some problems: an actuator change is coupled with a corresponding program change, so in order to transfer the experience of previous machines, a corresponding transcoding of control commands will be necessary. In this sense the evolution of robots will be like the evolution of living beings: in order to transfer experience from previous generations, new individuals must keep the organs and actuators that ‘understand’ the old commands (e.g., our triune brain).

There are no ‘easy’ or ‘hard’ problems of consciousness [1]. In all IPS, notions, senses, and emotions (e.g., the feeling of some color or the feeling of self) are connected with personal sensory experience, which is unique to every individual. In this view, talking about “what it is like to be” is nonsensical. As long as we have not achieved the direct transfer of information (copying neuron connections) between individuals, it is impossible to know their inner experience exactly. This is an axiomatic truth of information transfer; there is nothing mystical or impossible about it.

The task of creating consciousness is challenging. Consciousness is an emergent property, and the first conscious robot will be a bit surprising: “It will be as astounding and frightening to humans as the discovery of life on other planets” [1].

The emergence conditions, tendencies, proclivities, rewards, values, and the moves learned by previous units must be preprogrammed. The further development of these features and of EWM is accomplished by the system itself.

After the IPS presents the simplest EWM for mechanical moves, it must be taught like a human or animal infant [11], [12].

 

 

 

Sources.

  1. Lyle N. Long, Review of “Consciousness and the Possibility of Conscious Robots”, Journal of Aerospace Computing, Information, and Communication, Vol. 7, February 2010, http://www.personal.psu.edu/lnl/papers/consc_JACIC_2010.pdf
  2. Lyle N. Long and Troy D. Kelley, The Requirements and Possibilities of Creating Conscious Systems, http://www.personal.psu.edu/lnl/papers/aiaa20091949.pdf
  3. Lyle N. Long, Troy D. Kelley, and Michael J. Wenger, The Prospects for Creating Conscious Machines, http://www.personal.psu.edu/lnl/papers/conscious2008.pdf
  4. Robot Toddler Learns to Stand by “Imagining” How to Do It, MIT Technology Review, http://www.technologyreview.com/news/542921/robot-toddler-learns-to-stand-by-imagining-how-to-do-it/
  5. Sarah Yang, New ‘deep learning’ technique enables robot mastery of skills via trial and error, http://news.berkeley.edu/2015/05/21/deep-learning-robot-masters-skills-via-trial-and-error/
  6. Jean-Paul Laumond, Nicolas Mansard, and Jean Bernard Lasserre, Optimization as Motion Selection Principle in Robot Action, Communications of the ACM, Vol. 58, No. 5, 2015, pp. 64-74, https://hal.archives-ouvertes.fr/hal-01376752/file/CACM-optimization-principle2.pdf
  7. Han Nguyen Ho and Eunseok Lee, Model-based Reinforcement Learning Approach for Planning in Self-Adaptive Software Systems, Proceedings of the 9th International Conference on Ubiquitous Information Management and Communication, Article No. 103, http://dl.acm.org/citation.cfm?id=2701191
  8. Troy D. Kelley and Lyle N. Long, Deep Blue Cannot Play Checkers: The Need for Generalized Intelligence for Mobile Robots, https://www.hindawi.com/journals/jr/2010/523757/
  9. Will Knight, The Dark Secret at the Heart of AI, MIT Technology Review, April 11, 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
  10. Jeff Hawkins and Subutai Ahmad, Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex, https://arxiv.org/abs/1511.00083
  11. Imants Vilks, When Will Consciousness Emerge?, Bulletin of Electrical Engineering and Informatics, Vol. 2, No. 1, March 2013
  12. Yuwei Cui, Chetan Surpur, Subutai Ahmad, and Jeff Hawkins, Continuous Online Sequence Learning with an Unsupervised Neural Network Model, arXiv:1512.05463v1
  13. Jeff Hawkins, What Intelligent Machines Need to Learn From the Neocortex, IEEE Spectrum, http://spectrum.ieee.org/computing/software/what-intelligent-machines-need-to-learn-from-the-neocortex

About the author (basicrulesoflife): Year 1935. Interests: contemporary society problems, quality of life, happiness, understanding and changing ourselves, everything based on scientific evidence. Editor, Artificial Intelligence Foundation Latvia, http://www.artificialintelligence.lv.