Lyle N. Long∗ Pennsylvania State University, University Park, Pennsylvania 16802 and Troy D. Kelley† U.S. Army Research Laboratory, Aberdeen, Maryland 20783
This paper discusses the psychological, philosophical, and neurological definitions of consciousness and the prospects for the development of conscious machines or robots in the foreseeable future. Various definitions of consciousness are introduced and discussed within the different fields mentioned. A conscious machine or robot may be within the realm of engineering possibilities if current technological developments, especially Moore’s law, continue at their current pace. Given the complexity of cognition and consciousness, a hybrid parallel architecture with significant input/output appears to offer the best solution for implementing a complex system of systems that functionally approximates a human mind. Ideally, this architecture would include traditional symbolic representations as well as distributed representations that approximate the nonlinear dynamics seen in the human brain.
I. Introduction. WHILE there have been numerous discussions of computers reaching human levels of intelligence [1–3], building intelligent or conscious machines is still an enormously complicated task. Kurzweil believes there will be systems with intelligence equal to humans by the late 2020s, and that we will see a merging of human and machine systems. Philosophers [4–6] and psychologists [7,8] have been debating consciousness for centuries, and more recently neuroscientists have begun discussing the scientific aspects of consciousness [9–13]. Discover Magazine [Nov. 1992] referred to consciousness as one of the “ten great unanswered questions of science.”
It is time for engineers and scientists to seriously discuss the architectural requirements and possibilities of building conscious systems. This paper compares and contrasts what is known about consciousness from philosophy, psychology, and neuroscience with what might be possible to build using complex systems of computers, sensors, algorithms, and software. This paper has three purposes: 1) to review the current understanding of consciousness in a form suitable for engineers, 2) to discuss the possibility of conscious robots, and 3) to give some preliminary architectural requirements for conscious robot designs.
II. Definitions: Autonomy, Intelligence, and Consciousness.
It is important to distinguish between autonomy, intelligence, and consciousness. In the field of unmanned vehicles (air-, land-, or sea-based) the terms autonomous and intelligent are often used synonymously, but they are different ideas. Many unmanned systems are simply operated remotely; they do not have any onboard intelligence. Intelligence can be defined as: “A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.” Autonomy is different from intelligence and consciousness: “Autonomy refers to systems capable of operating in the real-world environment without any form of external control for extended periods of time.” A system can be autonomous but not very intelligent (e.g., an earthworm), or it can be intelligent but not autonomous (e.g., a supercomputer with appropriate software to simulate intelligence). Autonomy does, however, require some intelligence. Clearly, it is possible to have varying levels of both autonomy and intelligence, and it is also possible to have varying levels of consciousness. Intelligence and consciousness are not the same thing either, and they will have different architectural requirements. A conscious system would have capabilities far beyond those of a merely intelligent or autonomous system, but one of the problems in the scientific study of consciousness is that people often interpret consciousness differently. In some cases, people take it to mean something far beyond self-awareness.
While not everyone agrees on a definition of consciousness, one well-accepted definition describes it as a “state of awareness,” or being self-aware, including: 1) Subjectivity: our own ideas, moods, and sensations are experienced directly, unlike those of other people. 2) Unity: all sensory modalities are melded into one experience. 3) Intentionality: experiences have meaning beyond the current moment.
These arise simply from the physical properties of the neurons and synapses in the central nervous system, not from mystical properties (as Descartes claimed) or quantum effects (as Penrose and others claim). In addition, consciousness is often closely associated with attention [10,17]. Attention brings objects into our consciousness and allows us to handle the massive amounts of data entering our brains; however, some things are attended to unconsciously. Another definition of consciousness that is often cited is: “Most psychologists define consciousness simply as the experiencing of one’s own mental events in such a manner that one can report on them to others.” These two definitions are often called self-consciousness or access consciousness. Esoteric questions, such as whether all humans perceive the color red in the same manner, what it feels like to be a cat, or what it is like to be a particular cat, will not be considered here.
Some say the big problem with consciousness is that there is no definitive test for it, so it is difficult to address scientifically; this is compounded by the fact that there are many different definitions of consciousness. If one restricts oneself to testing whether something or someone is self-aware or self-conscious, then there probably are tests. It is likely that machines can (and will) be self-aware, that we can test for it, and that it will be a remarkable moment in history. While autonomy and intelligence are uncoupled, consciousness is related to intelligence, and there are probably gradations in consciousness. Many people believe that many mammals have some level of consciousness or are at least self-aware, and there are even indications that fish may have consciousness [20–22]. One simple test for this is the “mirror test,” in which a spot of color is placed on the test subject; the subject passes if, when looking in the mirror, it recognizes that it is seeing itself (for example, by trying to touch the spot on its own body rather than on the mirror). Humans older than 18 months, great apes, bottlenose dolphins, pigeons, elephants, and magpies all pass this test and show apparent self-awareness. When consciousness is defined as above, it is not difficult to speculate that machines will be conscious in the future, but in order for them to have subjectivity, unity, and intentionality they will need powerful processing, significant multi-modal sensory input with data fusion, machine learning, and large memory systems. One model of the varying levels of intelligence (and probably consciousness) is shown in Fig. 1. At the lowest level are creatures, such as worms, that perform only stimulus-response behavior. Simple robots of this kind are fairly easy to build, for example with just a touch sensor and simple motor control. At the next level (e.g., a goldfish) there is significant perception, sensor input, and sensor processing.
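The worm-level tier described above can be sketched as a minimal sense-act loop. The `TouchSensor` and `Motor` classes below are hypothetical stand-ins for real hardware drivers, not any particular robot API; the point is only that each stimulus maps directly to a fixed response, with no perception, memory, or planning.

```python
# Minimal stimulus-response ("worm-level") control loop.
# TouchSensor and Motor are illustrative stubs, not real drivers.

class TouchSensor:
    """Stubbed bump sensor: yields a scripted sequence of contacts."""
    def __init__(self, events):
        self.events = iter(events)

    def pressed(self):
        return next(self.events, False)

class Motor:
    """Stubbed drive motor: records the commands it receives."""
    def __init__(self):
        self.log = []

    def drive(self, command):
        self.log.append(command)

def stimulus_response_loop(sensor, motor, steps):
    # Each stimulus maps directly to a fixed, innate response.
    for _ in range(steps):
        if sensor.pressed():
            motor.drive("reverse")   # reflex to contact
        else:
            motor.drive("forward")

sensor = TouchSensor([False, True, False])
motor = Motor()
stimulus_response_loop(sensor, motor, steps=3)
print(motor.log)  # ['forward', 'reverse', 'forward']
```

Even this trivial loop is "autonomous" in the limited sense discussed earlier: it operates without external control, while exhibiting no intelligence at all.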
The structure of the goldfish brain has many similarities to mammalian brains. At the next level (e.g., rats), the system is capable of generalizing from its experience, i.e., applying its current knowledge to analogous new situations. At the highest level, humans are also capable of induction and deduction. Along with levels of intelligence, there are levels of consciousness. We know that some non-human animals do exhibit self-awareness (possibly even goldfish). These levels of consciousness are related to the increasing levels of intelligent functionality (perception, generalization, induction, and deduction). Current robotic vehicles and systems have probably not achieved the intelligence of a rat or the autonomy of a worm.
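The rat-level step, generalizing from experience to analogous new situations, can be illustrated with a toy nearest-neighbor scheme. The feature encoding (size, brightness) and the stored experiences below are invented purely for illustration; the sketch shows only the principle of reusing a learned response for the most similar past stimulus.

```python
# Toy generalization: respond to a novel stimulus by reusing the
# response learned for the most similar past experience.
# Feature vectors and labels are illustrative only.

def nearest_response(experiences, stimulus):
    """experiences: list of (feature_vector, learned_response) pairs."""
    def distance(a, b):
        # Squared Euclidean distance between feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, response = min(experiences, key=lambda e: distance(e[0], stimulus))
    return response

# Past experiences encoded as (size, brightness) -> response.
experiences = [
    ((0.9, 0.2), "flee"),      # large dark shape: predator
    ((0.1, 0.8), "approach"),  # small bright shape: food
]

# A novel stimulus is handled by analogy to the closest memory.
print(nearest_response(experiences, (0.8, 0.3)))  # flee
```

Unlike the fixed stimulus-response loop, this system's behavior depends on what it has experienced, which is the essential difference between the two tiers.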
One of the best articles in the field. It discards the widespread mysticism: “not some mystical properties (as Descartes claimed) or quantum effects (as Penrose and others claim).” The writings of many philosophers are set aside with a smile: “Esoteric questions such as do humans all perceive the color red in the same manner, or what does it feel like to be a cat, or what it is like to be a particular cat will not be considered here.”
Just as easily refuted is the claim that intelligence cannot be measured with the Turing test:
“If one restricts ourselves to testing if something or someone is self-aware or self-conscious, then there probably are tests. It is likely that machines can (and will) be self-aware, we can test for it, and it will be a remarkable moment in history. While autonomy and intelligence are uncoupled, consciousness is related to intelligence and there are probably gradations in consciousness.”
It is elementary: the intelligence of any device or machine must be measured by making it act, express itself in some way, show itself. The machine’s ‘behavior’ is the only criterion that allows anything to be said about it. If there are no observations, there is nothing to talk about. I.V.