Progress in the design and creation of intelligent machines has been steady for the last four decades, at times exhibiting sharp peaks in both advances and applications. This progress has gone relatively unnoticed, or has been trivialized by the very individuals responsible for it. The field of artificial intelligence has been peculiar in that regard: every advance is hailed as major at its inception, but after a very short time it is relegated to the archives as “trivial” or “not truly intelligent.” Why this pattern recurs is unknown, but it may be due to the willingness of researchers to engage in philosophical debate on the nature of mind and the possibility, or impossibility, of thinking machines. By indulging in such debates, researchers waste precious time better spent actually building these machines, or developing the algorithms and reasoning patterns by which they can solve problems of both theoretical and practical interest. Moreover, philosophical musings on artificial intelligence, given the huge conceptual spaces in which they wander aimlessly, are usually of no help in pointing researchers in the right direction. What researchers need is a “director,” or set of directors, who are familiar with the subject matter, have both applied and theoretical experience in the field of artificial intelligence, and who eschew armchair philosophical speculation in favor of realistic dialogue about the nature and functioning of intelligent machines.
The author of this book has been one of these “directors” throughout his professional career, and even though some of his writings have a speculative air about them, many others have been genuinely useful as guidance to those working in the trenches of artificial intelligence. One can point to the author’s writings as both inspiration and a source of perspiration, the latter arising from the difficulty of bringing some of his ideas to fruition. It would be incorrect to state that the author’s ideas have played a predominant role in the field of artificial intelligence, but his influence has been real, even if sometimes negative, as in his commentary on the role of perceptrons.
There are intelligent machines today, with wide application in business and finance, but their intelligence is restricted to certain domains of applicability (within which it is highly effective). There are machines, for example, that can play superb chess and backgammon, competitive with the best human players, but these machines, and the reasoning patterns they use in chess and backgammon, cannot without major modification turn to financial prediction or to proving difficult theorems in mathematics. Building intelligent machines that can think in multiple domains is at present one of the most difficult outstanding problems in artificial intelligence. Some progress is being made, but it has again been stymied by overindulgence in philosophical speculation and rancorous debates on the nature of mind and whether machines can have true emotions.
Humans can of course think in multiple domains. Indeed, a good human chess player can also be a good mathematician or a good chef. The ability to think in multiple domains has been christened “commonsense” by many psychologists, professional educators, and skeptics of the possibility of machine intelligence. Many hold that for a machine to be considered truly intelligent, or indeed to possess any intelligence at all, it must possess “commonsense,” in spite of the vague manner in which this concept is frequently presented in both the popular and scientific literature.
The nature of “commonsense” is explored in an atypical manner in this book, and here the author again shows his ability to think outside the box and phrase issues in a new light. This is not to say that the book includes advice on how to implement these ideas in real machines; it does not. But the ideas do seem plausible as well as practical, particularly the concept of a “panalogy,” the author’s contraction of “parallel analogy.” A panalogy allows a machine (human or otherwise) to give multiple meanings to an object, event, or situation, and thus to discern whether a particular interpretation of an event is inappropriate. A machine good at the game of chess could then give multiple interpretations to its moves, some of which may happen to resemble the interpretations given to a musical composition, for example. The machine could thus use its expertise in chess to write musical compositions, and thereby think in multiple domains. On the other hand, the machine may realize that no such analogies exist between chess and musical composition, and so refrain from attempting to gain expertise in the latter. Another role for panalogies, and perhaps a fruitful one, is to measure the degree to which interpretations are “entangled” with each other. Interpretations, which are the results of thinking, algorithmic processing, or reasoning patterns as it were, could be entangled in the sense that they always refer to objects, events, or situations in multiple domains. A panalogy, being a collection of interpretations in one domain, could be entangled with another in a different domain. The machine could then switch between them with great ease, and so be effective in both domains. It remains, of course, to construct explicit examples of panalogies that can be implemented in a real machine. The author, unfortunately, does not direct the reader on how to do this.
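To make the idea concrete, here is a minimal sketch of a panalogy as a data structure: one subject carrying several domain-tagged interpretations, with a crude overlap-based "entanglement" score between two panalogies. All class names, methods, and example interpretations below are illustrative assumptions, not anything specified by Minsky.

```python
# A sketch of a "panalogy" (parallel analogy): one object or event carries
# several interpretations at once, each tied to a domain, and two panalogies
# are "entangled" to the degree that their domains overlap.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Panalogy:
    """Multiple domain-specific interpretations of a single subject."""
    subject: str
    interpretations: dict[str, str] = field(default_factory=dict)  # domain -> meaning

    def interpret(self, domain: str) -> str | None:
        """Return the meaning of the subject in the given domain, if any."""
        return self.interpretations.get(domain)

    def entanglement(self, other: Panalogy) -> float:
        """Crude entanglement measure: the fraction of domains (Jaccard
        overlap) in which both subjects carry an interpretation."""
        shared = set(self.interpretations) & set(other.interpretations)
        union = set(self.interpretations) | set(other.interpretations)
        return len(shared) / len(union) if union else 0.0

# A chess move interpreted in two domains at once:
move = Panalogy("knight sacrifice", {
    "chess": "give up material to open the opposing king's position",
    "music": "a dissonance introduced to build tension before resolution",
})
melody = Panalogy("deceptive cadence", {
    "music": "an expected resolution replaced by a surprising chord",
})

print(move.interpret("chess"))    # the chess-domain reading
print(move.entanglement(melody))  # 0.5: they share the "music" domain
```

With a structure like this, "switching domains" is just querying a different key of the same object; a zero entanglement score would signal that no analogy between the two domains is available.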
The author also discusses a few other topics that have been hotly debated throughout artificial intelligence’s five-decade history, namely the possibility of a conscious machine, or one that displays (and feels!) genuine emotions. The nature of consciousness, even in the human case, is poorly understood, so any discussion of its implementation in machines must await further clarification and elucidation. Contemporary research in neuroscience is lending assistance in this regard. The author, though, takes another view of consciousness, one that departs from the “folk psychology” in which this concept is typically embedded. His view is more process-oriented: consciousness is the result of more than twenty processes going on in the human brain. An entire chapter elaborates on this view, which is highly interesting to read but of course needs to be connected with what is known in cognitive neuroscience.
“The nature of consciousness, even in the human case, is poorly understood.” Not so anymore; see the definition in a previous article. I.V.
It remains to be seen whether the ideas in this book can be implemented in real machines. If the author’s views on emotions, commonsense, and consciousness are correct, as detailed throughout the book, it seems plausible that machines with these characteristics will arise in the next few years. If not, then perhaps machine intelligence should be viewed as something completely different from the human case. The fact that hundreds of tasks once thought the sole province of humans are now being done by machines says a lot about how far machine intelligence has progressed. Whenever the first machines are constructed to operate and reason in many different domains, it seems likely that they will have their own ideas about how to direct further progress. Their understanding of ideas and issues may be very different from that of humans, and they may in fact serve as directors for further human advancement in different fields and contexts, much as the author has done throughout a major portion of his life.
On the positive side I am in general agreement with Minsky that thought can be decomposed into subroutines like:
Max Hodges, 10 years ago:
Dennett: The research world is going to be impatient with Marvin because they are eager for computational models that really work. Marvin is saying, “Wait a minute, let’s work out some of the high-level architectural details in a way that’s still very loose, very impressionistic. It’s too early to build the big model.”
Minsky: Actually, I could quarrel with that. I think the architecture described in The Emotion Machine is programmable. If I could afford to get three or four first-rate systems programmers, we could do it. You can get millions of dollars to drive a car through a desert, but you can’t get money to try to do something that’s more human.
Thinking is the activation of event streams from the past or an imagined future, marking them with symbols, and applying the rules of logic and the laws of nature (to the degree they are known to the system) to the external world model (EWM), without executing the corresponding actions.
We are emotion machines at least 90% of the time. This means that our decisions, and the external world models (EWM) they use, often do not correspond to reality; they are false, because the values used in the EWM are oriented toward positive emotions, not toward reality. This is the source of the commonly known conflict between ‘emotions and mind’, and the source of our so-called ‘global problems’. If contemporary humans want to survive, they will have to create a new balance between emotions and rational analysis: today our genetic heritage makes an optimal balance hard to achieve. I.V.
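The commenter's claim, that decisions are scored by emotional appeal rather than accuracy, and that survival requires rebalancing the weights, can be caricatured in a few lines. The weighting scheme, field names, and numbers below are illustrative assumptions, not anything the comment specifies.

```python
# Toy sketch of an emotion-biased EWM: candidate beliefs are scored by a
# blend of emotional appeal and accuracy. With the emotion weight near 1
# ("we are emotion machines at least 90% of the time"), a pleasant
# falsehood can outrank an unpleasant truth; rebalancing the weights
# flips the choice.

def choose_belief(candidates, emotion_weight=0.9, accuracy_weight=0.1):
    """Pick the candidate maximizing a weighted blend of appeal and accuracy."""
    return max(
        candidates,
        key=lambda c: emotion_weight * c["appeal"] + accuracy_weight * c["accuracy"],
    )

beliefs = [
    {"name": "comforting illusion", "appeal": 0.9, "accuracy": 0.20},
    {"name": "hard truth",          "appeal": 0.1, "accuracy": 0.95},
]

print(choose_belief(beliefs)["name"])            # emotion-dominated: "comforting illusion"
print(choose_belief(beliefs, 0.2, 0.8)["name"])  # rebalanced toward reality: "hard truth"
```

The point of the sketch is only that the "conflict between emotions and mind" falls out of a single scoring function once the two weights disagree about which term should dominate.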
I agree with the reviewer who noted how odd it was that a book titled “The Emotion Machine” does not discuss Joseph LeDoux, even if only to refute him. But I think that the problem is with the title, not the book. I found many of Minsky’s insights very helpful – it is a very good book about how machines think. And if you are not a dualist, then those insights apply to people too. The book is very well organized and clearly written, and helps you think about thinking. I especially enjoyed his discussion of qualia (although he does not use the term), and why he thinks it is not quite the problem that so many philosophers want to make it.
Minsky’s main take on emotions is that emotional states are not fundamentally different from other types of thinking, and that the entire dichotomy of rationality vs. emotion is misleading. He prefers to view them all as different ways of thinking, of utilizing various mental resources at one’s disposal, some conscious and some not. He organizes his discussion of difficult material very well, but I wish there were more grounding in the underlying neural anatomy of human emotion.
Emotional states are fundamentally different from other types of thinking, not only because they are created in a different region of the brain (the limbic system), but because they use a completely different EWM, with values oriented toward positive emotions. I.V.
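Minsky's framing as the earlier commenter summarizes it, an emotion as a particular selection of mental resources rather than a separate faculty, can be sketched as a simple lookup. The resource names and the selection table are illustrative assumptions, not Minsky's.

```python
# Sketch of "emotions as ways of thinking": each named state is nothing more
# than a choice of which mental resources to activate. Switching "emotions"
# is then just switching the active resource set, with no separate
# emotion/reason machinery.

WAYS_OF_THINKING = {
    "anger":     {"fast reflexes", "threat focus"},
    "curiosity": {"exploration", "question asking", "analogy search"},
    "caution":   {"risk checking", "slow deliberation"},
}

def activate(way_of_thinking):
    """Return the set of mental resources a given 'way of thinking'
    (what folk psychology calls an emotion) switches on."""
    return WAYS_OF_THINKING.get(way_of_thinking, set())

print(sorted(activate("curiosity")))  # resources engaged while "curious"
print(sorted(activate("caution")))    # a different selection, same machinery
```

On this view the rationality-vs-emotion dichotomy dissolves because both sides are entries in the same table, a point the reply above disputes on neuroanatomical grounds.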