How Will Capitalism End?

After years of ill health, capitalism is now in a critical condition. Growth has given way to stagnation; inequality is leading to instability; and confidence in the money economy has all but evaporated.

In How Will Capitalism End?, the acclaimed analyst of contemporary politics and economics Wolfgang Streeck argues that the world is about to change. The marriage between democracy and capitalism, ill-suited partners brought together in the shadow of World War Two, is coming to an end. The regulatory institutions that once restrained the financial sector’s excesses have collapsed and, after the final victory of capitalism at the end of the Cold War, there is no political agency capable of rolling back the liberalization of the markets.

Ours has become a world defined by declining growth, oligarchic rule, a shrinking public sphere, institutional corruption and international anarchy, and no cure to these ills is at hand.

As always in evolution, people will change only when their survival is threatened. A cure is at hand: human nature, as E. O. Wilson has described it. There is only one way: a system compatible with human nature and based on it. This means private property, but restricted; the possibility to affirm and witness oneself, but restricted; attachment and love, meaning and sanctity, but restricted; a longer life, but restricted. And, at last, the possibility to receive forgiveness, but restricted.

Democracy in its simplest form (voting for people or decisions) is not viable. For the long-term survival of humanity, only one democratic step is necessary: voting for survival. The subsequent steps and decisions are to be made by an AI program specially designed for this purpose: a program, an intelligence, far above current average human intelligence, one that can avoid the current shortcomings of human nature.

Contemporary societies are far from such a system; it still awaits its realization. This means the death penalty is a must, as it is applied not only by evolution but also by humans themselves (in their wars). The reality is camouflaged by slogans about human rights. Also needed is a system of ‘regulatory institutions’ that applies penalties in proportion to the trespasser’s damage to society. Such a system will rise; it will be gradually created on the ruins of our society. I.V.

Posted in Are We doomed?, Human Evolution

Garry Kasparov: Don’t fear intelligent machines. Work with them

Everything here is nice and correct, but the machines will dream! I.V.

Posted in Artificial Intelligence

The Emotion Machine by Marvin Minsky

By Shansay
This review is from: The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind
Marvin Minsky, along with a small group of scientists, coined the term, “artificial intelligence.” He also wrote Society of Mind, a tour de force that paved the way for thinking about getting machines to think. The Emotion Machine, his more recent work, presents ideas that shake up the field of psychology. His suggestions about the ways in which emotions “cascade” as a series of logical events in response to life circumstances offer a picture that is both physiologically and cognitively rational. Beyond suggesting what machines can accomplish, Minsky’s suggestions are highly innovative regarding the field of psychology. These topics, in addition to many others spanning varied fields, make the book amazing.
HALL OF FAME, on December 17, 2006

Progress in the design and creation of intelligent machines has been steady for the last four decades and at times has exhibited sharp peaks in both advances and applications. This progress has gone relatively unnoticed, or has been trivialized by the very individuals who have been responsible for it. The field of artificial intelligence has been peculiar in that regard: every advance is hailed as major at the time of its inception, but after a very short time it is relegated to the archives as being “trivial” or “not truly intelligent.” It is unknown why this pattern always occurs, but it might be due to the willingness of researchers to engage in philosophical debate on the nature of mind and the possibility, or impossibility, of thinking machines. By indulging in such debates, researchers waste precious time that is better used dealing with the actual building of these machines or the development of algorithms or reasoning patterns by which these machines can solve problems of both theoretical and practical interest. Also, philosophical musings on artificial intelligence, due to the huge conceptual spaces in which they wander aimlessly, are usually of no help in pointing to the right direction for researchers to follow. What researchers need is a “director” or “set of directors” who are familiar with the subject matter, have both applied and theoretical experience in the field of artificial intelligence, and who eschew philosophical armchair speculation in favor of realistic dialog about the nature and functioning of intelligent machines.

The author of this book has been one of these “directors” throughout his professional career, and even though some of his writings have a speculative air about them, many others have been very useful as guidance to those working in the trenches of artificial intelligence. One can point to the author’s writings as both inspiration and as a source of perspiration, the latter arising because of the difficulty in bringing some of his ideas to fruition. It would be incorrect to state that the author’s ideas have played a predominant role in the field of artificial intelligence, but his influence has been real, if sometimes even in the negative, such as his commentary on the role of perceptrons.

There are intelligent machines today, and they have wide application in business and finance, but their intelligence is restricted (but highly effective) to certain domains of applicability. There are machines for example that can play superb chess and backgammon, being competitive with the best human players in this regard, but these machines, and the reasoning patterns they use in chess and backgammon cannot without major modification indulge themselves in performing financial prediction or proving difficult theorems in mathematics. The building of intelligent machines that can think in multiple domains is at present one of the most difficult outstanding problems in artificial intelligence. Some progress is being made, but it has been stymied again by overindulgence in philosophical speculation and rancorous debates on the nature of mind and whether or not machines can have true emotions.

Humans can of course think in multiple domains. Indeed, a good human chess player can also be a good mathematician or a good chef. The ability to think in multiple domains has been christened as “commonsense” by many psychologists and professional educators, and those skeptical of the possibility of machine intelligence. It is thought by many that in order for a machine to be considered as truly intelligent, or even indeed to possess any intelligence at all, it must possess “commonsense”, in spite of the vague manner in which this concept is frequently presented in both the popular and scientific literature.

The nature of “commonsense” is explored in an atypical manner in this book, and in this regard the author again shows his ability to think outside of the box and phrase issues in a new light. This is not to say that advice on how to implement these ideas in real machines is included in the book, as it is not. But the ideas do seem plausible as well as practical, particularly the concept of a “panalogy”, which is the author’s contraction of the two words “parallel analogy”. A panalogy allows a machine (human or otherwise) to give multiple meanings to an object, event, or situation, and thus be able to discern whether a particular interpretation of an event is inappropriate. A machine good in the game of chess could possibly then give multiple interpretations to its moves, some of which may happen to be similar to the interpretations given to a musical composition for example. The machine could thus use its expertise in chess to write musical compositions, and therefore be able to think in multiple domains. On the other hand, the machine may realize that there are no such analogies between chess and musical composition, and thus refrain from attempting to gain expertise in the latter. Another role for panalogies, which may be a fruitful one, is that they can be used to measure to what degree interpretations are “entangled” with each other. Interpretations, which are the results of thinking, algorithmic processing, or reasoning patterns as it were, could be entangled in the sense that they always refer to objects, events, or situations in multiple domains. A panalogy, being a collection of interpretations in one domain, could be entangled with another in a different domain. The machine could thus switch between these with great ease, and thus be effective in both domains. It remains of course to construct explicit examples of panalogies that can be implemented in a real machine. The author does not direct the reader on how to do this, unfortunately.
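As a thought experiment, the reviewer’s “panalogy” could be sketched as a table of parallel interpretations keyed by domain, with “entanglement” measured as domain overlap. The class, domain names, and data below are invented for illustration; nothing here is taken from Minsky’s book.

```python
# Sketch of a "panalogy": one entity carries parallel interpretations in
# several domains, and the machine can switch domains or detect that no
# mapping exists. All names and data are invented for illustration.

class Panalogy:
    def __init__(self, name):
        self.name = name
        self.interpretations = {}          # domain -> meaning in that domain

    def add(self, domain, meaning):
        self.interpretations[domain] = meaning

    def interpret(self, domain):
        # None means: no analogy exists, refrain from acting in this domain.
        return self.interpretations.get(domain)

    def entangled_with(self, other):
        # Degree of "entanglement": fraction of domains the two share.
        shared = self.interpretations.keys() & other.interpretations.keys()
        total = self.interpretations.keys() | other.interpretations.keys()
        return len(shared) / len(total) if total else 0.0

move = Panalogy("knight sacrifice")
move.add("chess", "give up material for initiative")
move.add("music", "dissonance resolved later")

theme = Panalogy("delayed resolution")
theme.add("music", "suspension held across a bar")

print(move.interpret("chess"))
print(move.interpret("cooking"))      # None: no analogy in this domain
print(move.entangled_with(theme))     # one shared domain out of two
```

The point of the sketch is only structural: multiple meanings attached to one thing, plus a cheap test for when switching domains is justified.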

The author also discusses a few other topics that have been hotly debated in artificial intelligence, throughout its five-decade long history, namely the possibility of a conscious machine or one that displays (and feels!) genuine emotions. The nature of consciousness, even in the human case, is poorly understood, so any discussion of its implementation in machines must wait further clarification and elucidation. Contemporary research in neuroscience is giving assistance in this regard. The author though takes another view of consciousness, which departs from the “folk psychology” that this concept is typically embedded in. His view of consciousness is more process-oriented, in that consciousness is the result of more than twenty processes going on in the human brain. An entire chapter is spent elaborating on this view, which is highly interesting to read but of course needs to be connected with what is known in cognitive neuroscience.

“The nature of consciousness, even in the human case, is poorly understood.” Not so anymore. See the definition in a previous article. I.V.

It remains to be seen whether the ideas in this book can be implemented in real machines. If the author’s views on emotions, commonsense, and consciousness are correct, as detailed throughout the book, it seems more plausible that machines will arise in the next few years that have these characteristics. If not, then perhaps machine intelligence should be viewed as something that is completely different from the human case. The fact that hundreds of tasks are now being done by machines that used to be thought of as the sole province of humans says a lot about the degree to which machine intelligence has progressed. Whenever the first machines are constructed to operate and reason in many different domains, it seems likely that they will have their own ideas about how to direct further progress. Their understanding of ideas and issues may be very different from that of humans, and they may in fact serve as directors for further human advancement in different fields and contexts, much like the author has done throughout a major portion of his life.


on November 5, 2006
Anyone working on cognitive systems will want this book in their library. In reviewing THE EMOTION MACHINE there are two lines of criticism that seem important. Firstly, with the behaviorists I would argue that introspection is both frequently inaccurate and unscientific. Secondly, and more significantly, most of Minsky’s theories have not been developed to the level of detail needed in order to formulate actual algorithms. (To be fair there is Riecken’s “M system” (in SOFTWARE AGENTS, J. M. Bradshaw, Ed., MIT Press, 1997) and Singh’s thesis (EM-ONE, PhD thesis, MIT, June 2005) which are at least a start in that direction.)
On the positive side I am in general agreement with Minsky that thought can be decomposed into subroutines like: 
remembering (search), generalization, comparison, explanation, deduction,
organization, induction, classification, concept formation, image manipulation, feature detection, analogy, compression, simulation, value assessment. 
My list appears in Asa H: A hierarchical architecture for software agents (Transactions of the Kansas Academy of Science, vol. 109, No. 3/4, 2006). Minsky calls these “ways to think” and a partial list appears on pages 226-228 of THE EMOTION MACHINE. My own Asa H software uses exactly these mechanisms but my architecture is not nearly as complex as what Minsky is looking for.
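As a sketch only, such a list of subroutines might be organized as interchangeable “ways to think” that a simple controller dispatches among. The two subroutines and the toy memory below are invented assumptions; this is not the Asa H architecture or Minsky’s, just the dispatch pattern the reviewer describes.

```python
# Sketch: "ways to think" as interchangeable subroutines that a controller
# tries in turn, in the spirit of the lists above (names are illustrative).

def remembering(problem, memory):
    # Search memory for a case that matches the problem exactly.
    return [case for case in memory if case["problem"] == problem]

def analogy(problem, memory):
    # Fall back to cases that merely share some feature with the problem.
    return [case for case in memory if set(case["problem"]) & set(problem)]

WAYS_TO_THINK = [remembering, analogy]

def solve(problem, memory):
    """Try each way to think until one yields a candidate solution."""
    for way in WAYS_TO_THINK:
        candidates = way(problem, memory)
        if candidates:
            return candidates[0]["solution"]
    return None

memory = [{"problem": "ab", "solution": 1}, {"problem": "bc", "solution": 2}]
print(solve("ab", memory))   # exact recall succeeds
print(solve("cd", memory))   # recall fails, analogy (shared "c") succeeds
```

The design point is that each subroutine has the same interface, so the controller can reorder, add, or drop ways to think without rewriting the rest.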

Max Hodges, 10 years ago

All the bizarre reviews posted here are testimony to the kinds of bad assumptions, misconceptions and retarded psychology that Minsky is up against. Minsky’s book is full of deep, penetrating insight. But most of the reviews here seem to be full of the reviewer’s own (misguided) ideas and opinions and have little to do with the book Minsky wrote.

Dennett: The research world is going to be impatient with Marvin because they are eager for computational models that really work. Marvin is saying, “Wait a minute, let’s work out some of the high-level architectural details in a way that’s still very loose, very impressionistic. It’s too early to build the big model.”
Minsky: Actually, I could quarrel with that. I think the architecture described in The Emotion Machine is programmable. If I could afford to get three or four first-rate systems programmers, we could do it. You can get millions of dollars to drive a car through a desert, but you can’t get money to try to do something that’s more human.

Marvin Minsky has written a book about emotions but has not written much about how emotions influence human decisions and behavior. Yet emotions are very important: they significantly change the results of our thinking and our decisions. Retaining the subroutines Minsky has formulated, we can define thinking a bit more broadly and more exactly (see the previous article): 

Thinking is the activation of event streams from the past or an imagined future, marking them with symbols, and applying the rules of logic and the laws of nature (to the degree they are known to the system) to the EWM, without executing the corresponding actions.

We are at least 90% emotion machines; this means that our decisions, and the external world models (EWM) they use, often do not correspond to reality and are false, because the values used in the EWM are oriented toward positive emotions, not toward reality. This is the source of the commonly known conflict between ‘emotions and mind’, and the source of our so-called ‘global problems’. If contemporary humans want to survive, they have to create a new balance between emotions and rational analysis: today our genetic heritage makes an optimal balance hard to achieve. I.V.

on June 9, 2007

I agree with the reviewer who noted how odd it was that a book titled “The Emotion Machine” does not discuss Joseph LeDoux, even if only to refute him. But I think that the problem is with the title, not the book. I found many of Minsky’s insights very helpful – it is a very good book about how machines think. And if you are not a dualist, then those insights apply to people too. The book is very well organized and clearly written, and helps you think about thinking. I especially enjoyed his discussion of qualia (although he does not use the term), and why he thinks it is not quite the problem that so many philosophers want to make it.

Minsky’s main take on emotions is that emotional states are not fundamentally different from other types of thinking, and that the entire dichotomy of rationality v. emotion is misleading. He prefers to view them all as different ways of thinking: ways of utilizing the various mental resources at one’s disposal, some conscious and some not. He organizes his discussion of difficult material very well, but I wish there were more grounding in the underlying neural anatomy of human emotion.

Emotional states are fundamentally different from other types of thinking, not only because they are created in a different region of the brain (the limbic brain), but because they use a completely different EWM, one whose values are oriented toward positive emotions. I.V.


Posted in Artificial Intelligence

AI, engineering approach


In this paper the author formulates definitions of the basic notions and proposes conditions necessary for achieving general artificial intelligence (AGI).

The author maintains that the last big hurdle to achieving AGI in robots is the fact that they do not create external world models (EWM).


Keywords: artificial intelligence, emergence, consciousness, external world models, thinking, learning.


  1. Current state.

The common features (random search, memories, learning, thinking, and EWM) necessary for the creation of AI are known [1], [2], but the conditions for the emergence of these features are not considered: “We believe consciousness will result as an emergent behavior if there is adequate sensor input, processing power, and learning” [3].

Human-like intelligence is referred to as strong AI. General intelligence, or strong AI, has not been achieved yet and is a long-term goal of AI research. Conferences and hundreds of papers contain complicated discussions about consciousness and AI, but some essential notions are missing.

1.1. Intelligence and its constituents are emergent processes. In animals the simplest features are inherited genetically (random search, memories, learning), and after birth more complicated features develop: learning, thinking, consciousness, and EWM. In artificial machines these processes and their emergence conditions must be preprogrammed: it is impossible to train robots to achieve features for which evolution has spent millions of years. The first task is to preprogram the features, tendencies, and rewards; the second is to support and create the conditions for the emergence and development of the more complicated features: thinking, consciousness, and EWM. This means determining and defining the emergence conditions and implementing them.

1.2. Thinking about one model is described in [4]: “For the robot to learn how to stand and twist its body, for example, it first performs a series of simulations in order to train a high-level deep-learning network how to perform the task—something the researchers compare to an ‘imaginary process.’ This provides overall guidance for the robot, while a second deep-learning network is trained to carry out the task while responding to the dynamics of the robot’s joints and the complexity of the real environment.” 

It is impossible to preprogram thinking for all possible actions and processes: the structure of this one process has to be replicated and adapted for an uncountable number of other processes.
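How one process structure could be replicated for other processes can be sketched as follows. The two-stage “simulate, then refine” template and all task parameters below are invented placeholders, not the cited researchers’ method; the sketch only shows one structure being re-instantiated per skill.

```python
# Sketch: instead of preprogramming thinking for every action, one learned
# process structure (simulate a coarse plan, then refine it) is captured
# once and re-instantiated per task. All numbers here are toy values.

def two_stage_process(task, simulate, refine, rounds=3):
    """Generic structure: an 'imaginary' coarse plan, then refinement."""
    plan = simulate(task)            # first stage: imagined plan
    for _ in range(rounds):
        plan = refine(task, plan)    # second stage: adapt to (stub) feedback
    return plan

def make_skill(offset):
    # Each new skill replicates the same structure with new parameters.
    simulate = lambda task: task * 2
    refine = lambda task, plan: plan + offset
    return lambda task: two_stage_process(task, simulate, refine)

stand = make_skill(offset=1)     # hypothetical "stand" skill
twist = make_skill(offset=-1)    # hypothetical "twist" skill
print(stand(10))
print(twist(10))
```

The structure is written once; each skill only supplies its own `simulate` and `refine` parameters.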

1.3. Contemporary robots are preprogrammed machines with distinct behaviors. When confronted with new and unknown situations, these robots do not behave adequately. The main issue and problem is the emergence of EWM. Some AI emergence conditions are listed in [1], [2], [3]. The most promising systems are those with input sensors, actuators, and a body distinct from the environment, in which the creation of an unrestricted number of EWM can be induced [4].

1.4. A step in the direction of domain-independent reinforcement learning (RL) is deep learning; corresponding results have been achieved with the PR2 [5]. UC Berkeley researcher Levine says: “For all our versatility, humans are not born with a repertoire that can be deployed like a Swiss army knife, and we don’t need to be programmed. Instead, we learn new skills over the course of our life from experience and from other humans.”

The reality is different: minutes after birth, most animals start using genetically inherited movements (drinking the mother’s milk, following the mother, running, swimming, or flying). These movements are not learned after birth but sought out from a huge genetically inherited library and induced by the reward system. They were learned and fine-tuned by previous generations of individuals, and after birth they are recognized and used after a few tries.

Robot builders have to copy this evolutionary experience: millions of movements of contemporary robots first have to be learned slowly and then used, i.e., taken from a library, optimized, and fine-tuned for real conditions. Creation and replication of EWM, and further learning, then follow, but this takes much more time [6].
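The library idea might be sketched like this: candidate movements come from a pre-built library, each is tried a few times, and the best-scoring one is then fine-tuned. The movement primitives, the scoring stub, and the tuning procedure are all invented for illustration, not a real robot controller.

```python
import random

# Sketch: movements are selected from an "innate" library after a few
# trials, then fine-tuned for the current conditions. All primitives,
# scores, and noise levels below are invented toy values.

LIBRARY = {"crawl": 0.4, "walk": 0.7, "hop": 0.5}   # innate movement primitives

def score(movement, tuning, env_noise):
    # Stand-in for executing a movement in the real environment.
    return LIBRARY[movement] + tuning + env_noise

random.seed(1)

# 1) Recognition: a few noisy tries per primitive, keep the best result.
trials = {m: max(score(m, 0.0, random.gauss(0, 0.02)) for _ in range(3))
          for m in LIBRARY}
best = max(trials, key=trials.get)

# 2) Fine-tuning: hill-climb a tuning parameter for the chosen primitive.
tuning = 0.0
for _ in range(20):
    candidate = tuning + random.gauss(0, 0.02)
    if score(best, candidate, 0.0) > score(best, tuning, 0.0):
        tuning = candidate

print(best)
```

The two phases mirror the text: a cheap search over the inherited library, then slow optimization of only the winning movement.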

1.5. Deep learning is close to the way evolution happens [9]: “Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output.”

1.6. The necessity of models, and of improving them via reinforcement learning, is mentioned in [7]:

With model-free approach, these works could not leverage the knowledge about underlying system, which is essential and plentiful in software engineering, to enhance their learning. In this paper, we introduce the advantages of model-based RL. By utilizing engineering knowledge, system maintains a model of interaction with its environment and predicts the consequence of its action, to improve and guarantee system performance. We also discuss the engineering issues and propose a procedure to adopt model-based RL to build self-adaptive software and bring policy evolution closer to real-world applications.
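A minimal toy version of such a model-based loop, under the assumption of a two-action environment with invented reward values, might look like this. It illustrates the general idea only (maintain a model of the environment’s response and act against the model), not the cited paper’s algorithm.

```python
import random

# Toy sketch of model-based learning: the agent maintains a model of the
# environment's response to each action and plans against that model.
# The environment, rewards, and constants are invented for illustration.

TRUE_REWARD = {"left": 0.2, "right": 1.0}      # hidden from the agent

model = {a: 0.0 for a in TRUE_REWARD}          # learned model of consequences
counts = {a: 0 for a in TRUE_REWARD}

random.seed(0)
for _ in range(200):
    # Explore occasionally; otherwise pick the action the model predicts best.
    if random.random() < 0.1:
        action = random.choice(list(model))
    else:
        action = max(model, key=model.get)
    reward = TRUE_REWARD[action] + random.gauss(0, 0.05)
    # Update the model with a running mean of observed consequences.
    counts[action] += 1
    model[action] += (reward - model[action]) / counts[action]

best = max(model, key=model.get)
print(best)   # with enough exploration this should be "right"
```

The key contrast with a model-free approach is that decisions are made by consulting `model`, the agent’s internal prediction of consequences, rather than by reacting to raw rewards alone.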

1.7. Generalization in AI is still a problem: “AI programs … were successful at specific tasks, but generalizing the learned behavior to other domains was not attempted. How can generalized intelligence ever be realized? This paper will examine the different aspects of generalization and whether it can be performed successfully by future computer programs or robots” [8].

1.8. Jeff Hawkins has named three fundamental attributes of the neocortex necessary for intelligence to emerge: learning by rewiring, sparse distributed representations, and sensorimotor integration [13]. These three attributes provide and support the intelligence emergence conditions 3.1-3.4.



  2. Definitions of basic notions.

It is impossible to create something that has not been defined. Without definitions science is impossible. But all definitions are provisional.

Generation of information in living systems is accomplished via random search, which creates the entropy space, and selection, which eliminates the inappropriate or useless states and processes.

2.1. Intelligence is an information processing system’s (IPS) ability to achieve its goals by adapting its behavior to a changing environment, using preprogrammed information (genetically inherited or obtained from the environment), and optimizing its behavior by creating and using models of the environment and predictions of the environment’s reactions.

2.2. Artificial intelligence is the simulation of intelligence in machines.

2.3. EWMs are preprogrammed or learned algorithms of collaboration between the IPS and the environment. Activation of an EWM enables the IPS to predict EW events. When these predictions are correct, we say that the IPS understands the EW.
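Definition 2.3 can be illustrated with a minimal sketch in which the EWM is a learned transition table and “understanding” is measured as the fraction of events predicted correctly. The toy world and episodes are assumptions for illustration only.

```python
# Sketch of definition 2.3: an EWM maps (state, action) to a predicted
# next state; the system "understands" the EW to the degree that its
# predictions come true. World and episodes below are invented.

class EWM:
    def __init__(self):
        self.transitions = {}                  # (state, action) -> next state

    def learn(self, state, action, nxt):
        self.transitions[(state, action)] = nxt

    def predict(self, state, action):
        return self.transitions.get((state, action))

def understanding(ewm, episodes):
    """Fraction of observed events the model predicted correctly."""
    hits = sum(1 for s, a, nxt in episodes if ewm.predict(s, a) == nxt)
    return hits / len(episodes)

world = {("door closed", "push"): "door open",
         ("door open", "push"): "door open"}

ewm = EWM()
for (s, a), nxt in world.items():
    ewm.learn(s, a, nxt)

episodes = [("door closed", "push", "door open"),
            ("door open", "push", "door open"),
            ("door open", "pull", "door closed")]   # never learned
print(understanding(ewm, episodes))                 # 2 of 3 predicted
```

“Understanding” here is deliberately operational: it is nothing more than prediction accuracy over experienced events, matching the definition’s wording.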

2.4. General artificial intelligence (AGI) is human-like intelligence in which the IPS achieves its goals by creating an unrestricted number of EWMs and predicting EW events.

2.5. Simple learning is accomplished via random moves, sensory feedback, and selection of the best moves. More complex learning is accomplished via activation of existing EWMs and creation of new ones (with corresponding behaviors, skills, values, and preferences).

2.6. Thinking is the activation of event streams from the past or an imagined future, marking them with symbols, and applying the rules of logic and the laws of nature (to the degree they are known to the system) to the EWM, without executing the corresponding actions. This allows the IPS to predict EW reactions and to plan and choose its own behavior.
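Definition 2.6 can be illustrated with a minimal sketch: imagined action streams are run through a toy EWM and evaluated without any action being executed. The world model, plans, and goal below are invented for illustration.

```python
# Sketch of definition 2.6: "thinking" replays imagined event streams
# through the EWM and evaluates outcomes; no action is ever executed.
# The toy world model and goal are assumptions for illustration.

EWM = {("home", "walk"): "street",
       ("street", "walk"): "shop",
       ("home", "sleep"): "home"}

def imagine(state, actions):
    """Run an action stream through the model only; nothing is executed."""
    stream = [state]
    for a in actions:
        state = EWM.get((state, a), state)   # unknown moves: state unchanged
        stream.append(state)
    return stream

def think(state, plans, goal):
    """Pick the plan whose imagined outcome reaches the goal."""
    for plan in plans:
        if imagine(state, plan)[-1] == goal:
            return plan
    return None

plans = [["sleep", "sleep"], ["walk", "walk"]]
print(think("home", plans, "shop"))   # only the second plan reaches the goal
```

The essential property from the definition is preserved: the event streams are activated and evaluated against the model, and only the chosen plan would ever be handed to the actuators.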

2.7. Consciousness is the model of self.




  3. Intelligence and EWM emergence conditions.

3.1. Features created by structure and programming: the ability and tendency to memorize EW event strings (“the ability to recognize and predict temporal sequences of sensory inputs”) [12]; a layered, neuron-column-like structure, which provides the ability to filter out the principal, basic features in all input patterns (generalization [1], [2], [8]); and a huge number of connections between neurons, which provides the possibility of activating similar patterns [10].

3.2. Sensor signal processing, actuator control (the transition from the multi-coordinate world of actuator moves to the four-coordinate world of output actions), and evaluation of, and learning from, the system’s own actions.

3.3. Random search, selection, generalization, a reward and punishment system, and copying or multiplying existing EWMs or creating new EWMs and movement libraries. For complicated EWMs this is achieved by thinking.

3.4. The ability to create a model of self, which receives EW signals and executes the internal program’s decisions; this is called consciousness. All sensory streams are integrated and a map of the EW is created, in which the receiving and acting subject plays the main role. Lyle N. Long names it Unity: “all sensor modalities melded into one experience” [1].

3.5. A hierarchical reward and punishment system, which creates values: the laws saying what is good and what has to be avoided.

3.6. For general AI, the abilities to learn, to understand spoken and written language, and to speak are necessary. For understanding language, the words and symbols of the language must be connected with the system’s own sensory experience and EWMs.




  4. Emergence.

All atoms and molecules of the physical world, all physical processes and chemical reactions, all human-made products and inventions, and all living beings are systems with emergent properties. If we have good models of systems and processes, we can explain and predict the emergent properties from the properties of the parts and the known physical laws. I will not consider here the declarations about the impossibility of generating or understanding emergence. Human intelligence, consciousness, and thinking are complex emergent processes for which we can define and implement the emergence conditions.

Many emergence conditions in contemporary AI systems are embedded structurally: a large number of connections between neurons, which allows the activation of similar memories and processes; the layered structure of neural networks, which allows generalizing and creating basic features and abstract notions from incoming pictures; and a reward and punishment system, which directs behavior. In living entities these features are inherited genetically; in artificial systems they must be preprogrammed.

Consciousness emerges in complicated multi-level systems; therefore we cannot reduce it to the properties of neurons. There are many levels of complexity between the basic elements and the final emergent properties, both structures (sensors, neurons, neuron columns, output actuators) and processes (memory reading and writing, generalizing, thinking, reward systems, EWM generation and development). But we can formulate the emergence conditions, and when they are implemented, consciousness will emerge. The first results have already been obtained [4], [6], [9].

The teaching and programming of a robot will be like raising a human infant [1], [8]. Randomly generated actuator moves will create thousands of event streams (with sensory signals attached to each action) recorded in the robot’s memory. If after many trials and actuator moves the robot stops hitting obstacles, starts grabbing and moving objects, shows elements of collaboration with the EW (definite reactions to external visual, audio, or touch signals), or, as in [4], learns to stand upright on its own feet or imitates learned sounds, this means that the robot has created maps and models of the EW.

How do we teach a robot to adapt to the environment and to optimize its own body moves? In the way all animals and humans do: connect input sensor signals to the current situation and processes, remember them, and use them the next time a similar situation occurs.
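This recipe can be sketched as a nearest-neighbor memory: store situations together with the actions that worked, and in a new situation reuse the action from the most similar remembered one. The feature vectors and action names below are invented for illustration.

```python
# Sketch of the recipe above: remember (situation, action) pairs that ended
# well, and in a new situation reuse the action of the most similar past
# situation. Situations are toy feature vectors; actions are invented.

def similarity(a, b):
    # Simple similarity for the sketch: negative squared distance.
    return -sum((x - y) ** 2 for x, y in zip(a, b))

memory = []   # (situation, action) pairs that led to success

def remember(situation, action):
    memory.append((situation, action))

def act(situation):
    """Reuse the action remembered for the most similar past situation."""
    best, _ = max(((a, similarity(s, situation)) for s, a in memory),
                  key=lambda pair: pair[1])
    return best

remember((0.0, 0.1), "step over")     # e.g., low obstacle ahead
remember((0.9, 0.8), "go around")     # e.g., large obstacle ahead

print(act((0.1, 0.2)))   # close to the first memory
print(act((1.0, 0.7)))   # close to the second memory
```

Nothing here generalizes beyond lookup; the point is only that remembered sensor-action associations already yield adequate behavior in situations that resemble past ones.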

Strong AI will arrive only when we manage to get our robots to create models of the external world by themselves. In animals and humans, everyday usage of EWMs is partly unconscious. This means that the essential features are maintained while concrete details are discarded. For example, all animals unconsciously know and use the Earth’s gravitational force, know that most objects of the external world have hard surfaces while some are soft or liquid and some are hot or cold, and adjust their behavior accordingly.



  5. Discussion and Conclusions.

Emergent behavior is the main process of intelligent systems. EWMs and consciousness emerge only in complex systems; this is the price we have to pay for AI and the creation of consciousness. The complexity of a system can be measured by the number of its parts and emergent properties.

It is not possible to preprogram EWMs for all life situations. If future robots do not create EWMs for all life situations by themselves, they will not have true intelligence:


Rule-based systems and cognitive architectures require humans to program the rules, and this process is not scalable to billions of rules. The machines will need to rely on hybrid systems, learning, and emergent behavior; and they will need to be carefully taught and trained by teams of engineers and scientists. Humans will not be capable of completely specifying and programming the entire system; learning and emergent behavior will be a stringent requirement for development of the system [11].


Living beings are genetically prepared to adapt to an unknown and changing environment. The first environment all living beings are confronted with after hatching or birth is their own body. Movement is a fundamental characteristic of living systems [8]. In a relatively short time, through series of random moves, they recognize and start using basic genetically inherited actions. Developing and improving these series of actions via learning and thinking leads to the creation of EWMs. The human body has about 500 muscles; a comparable robot can have about 50 actuators. Researchers have realized that it is impossible to solve the mathematical equations in a real-time environment even for simple moves, because it takes “minutes or hours of computation for seconds of motion” [8]. This means that all moves have to be optimized via slow supervised learning and afterwards used automatically, taken from the library. Successful actions are stored, and their algorithms are copied and used for the next models.

Intelligence and its constituents are gradual features [1], [11], which can be more or less pronounced, developed, and recognizable. Conditions 3.1-3.6 facilitate the emergence of thinking, the creation of EWMs, and consciousness, but the degree of necessity of each feature is not known.

The approach of creating an actuator movement library and transferring it to new robots (see 1.3) will have some problems: a change of actuators is coupled with a corresponding program change, so in order to transfer the experience of previous machines, a corresponding transcoding of control commands will be necessary. In this sense the evolution of robots will be like the evolution of living beings: in order to transfer the experience of previous generations, new individuals must keep the organs and actuators that ‘understand’ the old commands (e.g., our triune brain).

There are no ‘easy’ or ‘hard’ problems of consciousness [1]. In every IPS, notions, senses, and emotions (e.g., the feeling of some color or the feeling of self) are connected with personal sensory experience, which is unique to every individual. In this view, talking about “what it is like to be” is nonsensical. As long as we have not achieved the direct transfer of information (copying of neuron connections) between individuals, it is impossible to know their inner experience exactly. This is an axiomatic truth of information transfer; there is nothing mystical or impossible about it.

The task of creating consciousness is challenging. Consciousness is an emergent property, and the first conscious robot will come as a surprise: “It will be as astounding and frightening to humans as the discovery of life on other planets” [1].

The emergence conditions, tendencies, proclivities, rewards, values, and the moves learned by previous units must be pre-programmed. The further development of these features and of the EWM is accomplished by the system itself.
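The division of labor just described — innate values fixed at "birth," their further development driven by the system's own experience — can be sketched as follows. The reward names, the update rule, and all numbers are illustrative assumptions.

```python
# Sketch of the split described in the text: rewards and values are
# pre-programmed (innate), while their weights develop afterwards
# through the system's own experience. Names and numbers are illustrative.

INNATE_REWARDS = {"curiosity": 1.0, "energy_conservation": 0.5}

class DevelopingAgent:
    def __init__(self):
        # Start from the pre-programmed (innate) values.
        self.values = dict(INNATE_REWARDS)

    def experience(self, reward_name, outcome):
        """System-driven development: nudge an innate value by experience."""
        self.values[reward_name] += 0.1 * outcome

agent = DevelopingAgent()
agent.experience("curiosity", outcome=2.0)  # curiosity is reinforced
```

Only the starting dictionary is fixed by the designer; everything after the constructor is the system's own development, as the paragraph requires.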

After the IPS exhibits the simplest EWM for mechanical moves, it must be taught like a human or animal infant [11], [12].





  1. Lyle N. Long, Review of Consciousness and the Possibility of Conscious Robots, Journal of Aerospace Computing, Information, and Communication, Vol. 7, February 2010.
  2. Lyle N. Long and Troy D. Kelley, The Requirements and Possibilities of Creating Conscious Systems.
  3. Lyle N. Long, Troy D. Kelley, and Michael J. Wenger, The Prospects for Creating Conscious Machines.
  4. Robot Toddler Learns to Stand by “Imagining” How to Do It.
  5. Sarah Yang, New ‘deep learning’ technique enables robot mastery of skills via trial and error.
  6. Jean-Paul Laumond, Nicolas Mansard, and Jean Bernard Lasserre, Optimization as Motion Selection Principle in Robot Action, Communications of the ACM, Vol. 58, No. 5, 2015, pp. 64-74.
  7. Han Nguyen Ho and Eunseok Lee, Model-based Reinforcement Learning Approach for Planning in Self-Adaptive Software System, Proceedings of the 9th International Conference on Ubiquitous Information Management and Communication, Article No. 103.
  8. Troy D. Kelley and Lyle N. Long, Deep Blue Cannot Play Checkers: The Need for Generalized Intelligence for Mobile Robots.
  9. Will Knight, The Dark Secret at the Heart of AI, Intelligent Machines, April 11, 2017.
  10. Jeff Hawkins and Subutai Ahmad, Why Neurons Have Thousands of Synapses: A Theory of Sequence Memory in Neocortex.
  11. Imants Vilks, When Will Consciousness Emerge?, Bulletin of Electrical Engineering and Informatics, Vol. 2, No. 1, March 2013.
  12. Yuwei Cui, Chetan Surpur, Subutai Ahmad, and Jeff Hawkins, Continuous online sequence learning with an unsupervised neural network model, arXiv:1512.05463v1.
  13. Jeff Hawkins, What Intelligent Machines Need to Learn From the Neocortex.

Posted in Artificial Intelligence

Scientific Naturalism: A Manifesto for Enlightenment Humanism

The success of the Scientific Revolution led to the development of the worldview of scientific naturalism, or the belief that the world is governed by natural laws and forces that can be understood, and that all phenomena are part of nature and can be explained by natural causes, including human cognitive, moral and social phenomena. The application of scientific naturalism in the human realm led to the widespread adoption of Enlightenment humanism, a cosmopolitan worldview that places supreme value on science and reason, eschews the supernatural entirely and relies exclusively on nature and nature’s laws, including human nature.

From scientific naturalism to Enlightenment humanism
Scientific naturalism is the principle that the world is governed by natural laws and forces that can be understood, and that all phenomena are part of nature and can be explained by natural causes, including human cognitive, moral and social phenomena. According to a Google Ngram Viewer search, the term “scientific naturalism” first came into use in the 1820s, picked up momentum from the 1860s through the 1920s, then hit three peaks in the 1930s, 1950s and early 2000s, where it is now established as a core component of modern science. It incorporates methodological naturalism, the principle that the methods of science operate under the presumption that the world and everything in it is the result of natural processes in a system of material causes and effects that does not allow, or need, the introduction of supernatural forces. “Methodological naturalism” spiked dramatically in use in the mid-1990s and continues climbing into the 2000s, most likely the result of the rise in popularity (and polarization) of “scientific creationism” and Intelligent Design Theory, the proponents of which complained that methodological naturalism unfairly excludes their belief in what I have called methodological supernaturalism, or the principle that supernatural intervention in the natural world may be invoked to explain any allegedly unexplained phenomena, such as the Big Bang, the fine-tuned cosmos, consciousness, morality, the eye, DNA and, notoriously, bacterial flagella.



Posted in Understand and Manage Ourselves

The Strange Death of Europe: Immigration, Identity, Islam

The Strange Death of Europe is a highly personal account of a continent and culture caught in the act of suicide. Declining birth rates, mass immigration, and cultivated self-distrust and self-hatred have come together to make Europeans unable to argue for themselves and incapable of resisting their own comprehensive alteration as a society, and their eventual end.

This is not just an analysis of demographic and political realities, it is also an eyewitness account of a continent in self-destruct mode. It includes accounts based on travels across the entire continent, from the places where migrants land to the places they end up, from the people who pretend they want them to the places which cannot accept them.

Murray takes a step back at each stage and looks at the bigger and deeper issues which lie behind a continent’s possible demise, from an atmosphere of mass terror attacks to the steady erosion of our freedoms. The book addresses the disappointing failure of multiculturalism, Angela Merkel’s U-turn on migration, the lack of repatriation, and the Western fixation on guilt. Murray travels to Berlin, Paris, Scandinavia, Lampedusa, and Greece to uncover the malaise at the very heart of the European culture, and to hear the stories of those who have arrived in Europe from far away.

This sharp and incisive book ends with two visions for a new Europe–one hopeful, one pessimistic–which paint a picture of Europe in crisis and offer a choice as to what, if anything, we can do next. But perhaps Spengler was right: “civilizations like humans are born, briefly flourish, decay, and die.”

Later chapters, the bulk of the book, go into extensive detail about the Islamic immigrants. They do not want to integrate. They have no respect for the host cultures. They are given to crime, especially rape. Their parts of the major cities – Paris, Stockholm, Berlin – become no-go zones for police, firemen and ambulances. They institute Sharia law among themselves and reject the host countries. Many other authors have described what Murray saw in France, Germany, Holland and Sweden.

It is the politicians who are especially cowardly. The people by and large, and in increasing numbers, don’t want widespread Muslim immigration. Yet the politicians keep the doors open and keep telling saccharine stories about how wonderful it all is. The common man is able to contrast the stories with everyday reality and conclude that they are lying.

Genetics is another topic that deserves more attention. Murray attributes the differences between the immigrants and the host populations purely to culture. Liberals believe the same, and fervently hope that in a few generations the immigrants will become indistinguishable from the host populations. Findings by scientists in genetics, evolution and intelligence give the lie to these happy dreams. The populations are genetically different. They took thousands of years to evolve traits that enable them to optimally fill the niches they do. To survive in a harsh climate, bands of Northern Europeans developed altruism, tolerance and high intelligence. That same altruism leads them to project these traits onto others and welcome them into their society.



Posted in Human Evolution

Could a Robot Be President?

Yes, it sounds nuts. But some techno-optimists really believe a computer could make better decisions for the country—without the drama and shortsightedness we accept from our human leaders.

July 08, 2017

President Donald Trump reportedly spends his nights alone in the White House, watching TV news and yelling at the screen. He wakes up early each morning to watch more television and tweet his anger to the world … or Mika Brzezinski … or CNN. He takes time out of meetings with foreign leaders to brag about his Electoral College win.

That all sounds, at the very least, distracting for a person with the weight of the free world on his shoulders. But if his fury at the Russia scandal and insecurity about his election are stealing time from the important decisions of the presidency, Trump is by no means the first commander in chief whose emotions or personality have gotten in the way of the job. From Warren Harding’s buddies enriching themselves in Teapot Dome to Richard Nixon’s Watergate hubris to Bill Clinton nearly getting kicked out of office because he couldn’t control his base urges, it’s human weakness—jealousy, greed, lust, nepotism—that most often upends presidencies.

If you’re imagining a Terminator-style machine sitting behind the Resolute desk in the Oval Office, think again. The president would more likely be a computer in a closet somewhere, chugging away at solving our country’s toughest problems. Unlike a human, a robot could take into account vast amounts of data about the possible outcomes of a particular policy. It could foresee pitfalls that would escape a human mind and weigh the options more reliably than any person could—without individual impulses or biases coming into play. We could wind up with an executive branch that works harder, is more efficient and responds better to our needs than any we’ve ever seen.

There’s not yet a well-defined or cohesive group pushing for a robot in the Oval Office—just a ragtag bunch of experts and theorists who think that futuristic technology will make for better leadership, and ultimately a better country. Mark Waser, for instance, a longtime artificial intelligence researcher who works for a think tank called the Digital Wisdom Institute, says that once we fix some key kinks in artificial intelligence, robots will make much better decisions than humans can. Natasha Vita-More, chairwoman of Humanity+, a nonprofit that “advocates the ethical use of technology to expand human capacities,” expects we’ll have a “posthuman” president someday—a leader who does not have a human body but exists in some other way, such as a human mind uploaded to a computer. Zoltan Istvan, who made a quixotic bid for the presidency last year as a “transhumanist,” with a platform based on a quest for human immortality, is another proponent of the robot presidency—and he really thinks it will happen.

“An A.I. president cannot be bought off by lobbyists,” he says. “It won’t be influenced by money or personal incentives or family incentives. It won’t be able to have the nepotism that we have right now in the White House. These are things that a machine wouldn’t do.”

The idea of a robot ruler has been floating around in science fiction for decades. In 1950, Isaac Asimov’s short story collection I, Robot envisioned a world in which machines appeared to have consciousness and human-level intelligence. They were controlled by the “Three Laws of Robotics.” (First: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”) Super-advanced A.I. machines in Iain Banks’ Culture series act as the government, figuring out how best to organize society and distribute resources. Pop culture—like, more recently, the movie Her—has been hoping for human-like machines for a long time.

But so far, anything close to a robot president has been limited to those kinds of stories. Maybe not for much longer. In fact, true believers like Istvan say our computer leader could be here in less than 30 years.


Of course, replacing a human with a robot in the White House would not be simple, and even those pushing the idea admit there are serious obstacles.

For starters, how a machine leader would fit in with our democratic republic is anybody’s guess. Istvan, for one, envisions regular national elections, in which voters would decide on the robot’s priorities and how it should come out on moral issues like abortion; the voters would then have a chance in the next election to change those choices. The initial programming of the system would no doubt be controversial, and the programmers would probably need to be elected, too. All of this would require amending the Constitution, Istvan acknowledges.

From a technical point of view, artificial intelligence is not yet smart enough to run the country. The list of what robots can currently accomplish is long—from diagnosing diseases and driving cars to winning “Jeopardy!” and answering questions on your smartphone—and it’s rapidly expanding. But as they exist now, all of our A.I. systems use “narrow” intelligence, meaning they need to be programmed specifically to perform any given task.

A president, of course, does more than one narrow thing.

“If you’re president of the United States, what bubbles up to your level are the problems that nobody else in the hierarchy was able to solve. You get stuck with the hardest nuts to crack,” says Illah Nourbakhsh, a robotics professor at Carnegie Mellon who previously worked on robots for NASA. “And the hardest nuts to crack are the most meta-cognitive, the ones with the fewest examples to go by, and the ones where you have to use the most creative thinking.”

To accomplish all that, a robot president would need what scientists call artificial general intelligence, also known as “strong A.I.”—intelligence as broad, creative and flexible as a human’s. That’s the kind of A.I. that Istvan and others are referring to when they talk about robot presidents. Strong A.I. isn’t here yet, but some experts think it’s coming soon.

“I am one of those people who believe that you’re going to get human-level intelligence much, much, much sooner than most people think,” Waser says. “Around 2008, I said that it would occur close to 2025. Ten years later, I don’t see any reason why I would modify that estimate.” Vita-More agrees, predicting we could have an early version of strong A.I. within 10 or 15 years.

But that optimism requires a key assumption: that we will soon reach a time when computers can solve their own problems—what scientists call the “technological singularity.” At that point, computers would become smarter than humans and could design new computers that are even smarter, which would then design computers that are smarter still. Nourbakhsh says, however, that he doesn’t think all the technical problems involved in building better and better computers can be solved by machines. Some require new discoveries in chemistry or the invention of new types of material to use in building these supersmart computers.

Another big technical problem to solve before computers could run the country: Robots don’t know how to explain themselves. Information goes in, a decision comes out, but no one knows why the machine made the choice it did—a huge hurdle for a job that constantly demands decisions with unpredictable inputs and grave consequences. Say what you will about Donald Trump or Bill Clinton, but at least they’re able to think about their thought processes and, in turn, explain their actions to the public, lobby for them in Congress, and spin them on TV or Twitter. A computer, at least for now, can’t do that.

“Machines have to be able to cooperate with other machines to be effective,” Waser says. “They have to cooperate with humans to be safe.” And cooperation is hard if you can’t explain your thought process to others.

This shortcoming is partly because of the way A.I. systems work. In an approach called machine learning, the computer analyzes mountains of data and searches for patterns—patterns that might make sense to the computer but not to humans. In a variant approach called deep learning, a computer uses multiple layers of processors: One layer produces a rough output, which is then refined by the next layer, and that output, in turn, is refined by the next layer. The outputs of those middle layers are opaque to any outside human observers—the computer spits out only the final result.

“You can take your kid to the movie Inside Out, and then you can have this really interesting and deep conversation with your kid about [their] emotions,” Nourbakhsh says. “A.I. can’t do that because A.I. doesn’t understand the idea of going from a topic to a metatopic, to talking about a topic.”


Even if we can fix all those problems, robots still might not be the great decision-makers we imagine them to be. One of the main selling points of a robot president is its ability to crunch data and come to decisions without all the biases that plague humans. But even that advantage might not be as clear as it seems—researchers have found it terribly difficult to teach A.I. systems to avoid prejudices.

A Google photo app released in 2015, for instance, used A.I. to identify the contents of photos and then categorize the pictures. The app did well, except for one glaring mistake: It labeled several photos of black people as photos of “gorillas.” Not only was the system wrong, but it didn’t know how to recognize the appalling historical and social context of its labeling. The company apologized and said it would investigate the problem.

Other examples carry life-altering consequences. An A.I. system used by courts across the country to determine defendants’ risk of reoffending—which then guided judges’ bail and sentencing decisions—seemed like the perfect use for autonomous technology. It could crunch large amounts of data, find patterns that people might miss and avoid biases that plague human judges and prosecutors. But a ProPublica investigation found otherwise. Black defendants were 77 percent more likely than otherwise-identical white defendants to be pegged as at risk of committing a future violent crime, the report found. (The for-profit company that created the system disputed ProPublica’s findings.) The A.I. system did not explicitly consider a defendant’s race, but a number of the factors it weighed—like poverty and joblessness—are correlated with race. So the system reached its biased result based on data that, while neutral on its face, carried the baked-in results of centuries of inequality.

This is a problem for all computers: Their output is only as good as their input. An A.I. system that is fed information inflected by race is at risk of putting out racist results.

“Technological systems are not free from bias. They’re not automatically fair just because they’re numbers,” says Madeleine Clare Elish, a cultural anthropologist studying at Columbia University. “My biggest fear is people won’t come to terms with how A.I. technologies will encode the biases and flaws and prejudices of their creators.”

A report on A.I. published by the Obama administration in October raised the same concern: “Unbiased developers with the best intentions can inadvertently produce systems with biased results, because even the developers of an A.I. system may not understand it well enough to prevent unintended outcomes,” it said.

Once we develop supersmart A.I., some experts think concerns about bias will evaporate. Such a system “would detect bias,” says Vita-More, the Humanity+ chairwoman. “It would have a psychological meter that would detect ‘where is that information coming from?’ ‘what do those people need?’” and account for the flaws in the data.

Hacking is another A.I. risk that could possibly be solved with stronger A.I. What if the Russians or North Koreans or Chinese broke into our robot president, gaining access to the whole of American government? And how would we even know if the decisions a robot president made were being manipulated? The solution, supporters say, is a machine that’s smart enough to not only solve our country’s biggest problems, but also to block anyone who would try to sabotage that effort.

Nourbakhsh, for one, says that relying on strong A.I. to solve existing problems with A.I. is mostly a rhetorical flourish. “If you name a problem, somebody can say, ‘These computers are superhuman in their intelligence abilities, and therefore they will find a solution to that problem,’” he says. Ultimately, he thinks, there are problems humans will have to solve on their own.


If these obstacles sound discouraging for the pro-robot caucus, there might be a middle ground that suffices for now: a computer that can chug through all the decisions a president has to make—not to make the final choices itself, but to help guide the human commander in chief. Think of it as a human-computer partnership that produces better results than either could alone.

Jonathan Zittrain, an internet law professor at Harvard Law School, thinks that even with A.I.’s flaws, computers could serve as checks against human biases. “A.I., properly trained, offers the prospect of more systematically identifying bias in particular and unfairness in general,” he wrote in a recent blog post.

Maybe a computer, working alongside a human president, could still rein in some of the president’s flaws.

“The place that A.I. can come into play is in understanding ramifications,” Nourbakhsh says. He points to Trump’s travel ban as an example of a presidential decision that turned out badly because its legal and constitutional implications weren’t fully grasped or thought through. A computer could have analyzed the likely legal responses by opponents and courts.

Already, several studies have shown that “a human-machine team can be more effective than either one alone,” as the Obama administration’s A.I. report put it. “In one recent study, given images of lymph node cells and asked to determine whether or not the cells contained cancer, an A.I.-based approach had a 7.5 percent error rate, where a human pathologist had a 3.5 percent error rate; a combined approach, using both A.I. and human input, lowered the error rate to 0.5 percent.” A venture capital firm in Hong Kong is putting that kind of partnership into practice. It announced in 2014 that it was adding an A.I. system to its board of directors to crunch numbers and advise humans on the board about what investment decisions to make.

Keeping a person as president, but with a computer sidekick, would also let us keep the many nebulous benefits that a human president provides. The leader of the country, after all, isn’t just a “decider.” The president can also be a hero or a villain, a figure to emulate or lampoon—not to mention a unifier, or divider, relying on human rhetoric and emotion.

“The president is a national symbol,” notes Lori Cox Han, a political science professor at Chapman University. “When something goes well or something goes really badly, we look to the president.” And in a crisis, in all those times we expect the president to do more than just make a decision, we might still want a human in charge.

Posted in Artificial Intelligence, Human Evolution

When Will The Planet Be Too Hot For Humans? Much, Much Sooner Than You Imagine.

Peering beyond scientific reticence.

It is, I promise, worse than you think. If your anxiety about global warming is dominated by fears of sea-level rise, you are barely scratching the surface of what terrors are possible, even within the lifetime of a teenager today. And yet the swelling seas — and the cities they will drown — have so dominated the picture of global warming, and so overwhelmed our capacity for climate panic, that they have occluded our perception of other threats, many much closer at hand. Rising oceans are bad, in fact very bad; but fleeing the coastline will not be enough.

Indeed, absent a significant adjustment to how billions of humans conduct their lives, parts of the Earth will likely become close to uninhabitable, and other parts horrifically inhospitable, as soon as the end of this century.

Even when we train our eyes on climate change, we are unable to comprehend its scope. This past winter, a string of days 60 and 70 degrees warmer than normal baked the North Pole, melting the permafrost that encased Norway’s Svalbard seed vault — a global food bank nicknamed “Doomsday,” designed to ensure that our agriculture survives any catastrophe, and which appeared to have been flooded by climate change less than ten years after being built.

The Doomsday vault is fine, for now: The structure has been secured and the seeds are safe. But treating the episode as a parable of impending flooding missed the more important news. Until recently, permafrost was not a major concern of climate scientists, because, as the name suggests, it was soil that stayed permanently frozen. But Arctic permafrost contains 1.8 trillion tons of carbon, more than twice as much as is currently suspended in the Earth’s atmosphere. When it thaws and is released, that carbon may evaporate as methane, which is 34 times as powerful a greenhouse-gas warming blanket as carbon dioxide when judged on the timescale of a century; when judged on the timescale of two decades, it is 86 times as powerful. In other words, we have, trapped in Arctic permafrost, twice as much carbon as is currently wrecking the atmosphere of the planet, all of it scheduled to be released at a date that keeps getting moved up, partially in the form of a gas that multiplies its warming power 86 times over.

Maybe you know that already — there are alarming stories every day, like last month’s satellite data showing the globe warming, since 1998, more than twice as fast as scientists had thought. Or the news from Antarctica this past May, when a crack in an ice shelf grew 11 miles in six days, then kept going; the break now has just three miles to go — by the time you read this, it may already have met the open water, where it will drop into the sea one of the biggest icebergs ever, a process known poetically as “calving.”

But no matter how well-informed you are, you are surely not alarmed enough. Over the past decades, our culture has gone apocalyptic with zombie movies and Mad Max dystopias, perhaps the collective result of displaced climate anxiety, and yet when it comes to contemplating real-world warming dangers, we suffer from an incredible failure of imagination. The reasons for that are many: the timid language of scientific probabilities, which the climatologist James Hansen once called “scientific reticence” in a paper chastising scientists for editing their own observations so conscientiously that they failed to communicate how dire the threat really was; the fact that the country is dominated by a group of technocrats who believe any problem can be solved and an opposing culture that doesn’t even see warming as a problem worth addressing; the way that climate denialism has made scientists even more cautious in offering speculative warnings; the simple speed of change and, also, its slowness, such that we are only seeing effects now of warming from decades past; our uncertainty about uncertainty, which the climate writer Naomi Oreskes in particular has suggested stops us from preparing as though anything worse than a median outcome were even possible; the way we assume climate change will hit hardest elsewhere, not everywhere; the smallness (two degrees) and largeness (1.8 trillion tons) and abstractness (400 parts per million) of the numbers; the discomfort of considering a problem that is very difficult, if not impossible, to solve; the altogether incomprehensible scale of that problem, which amounts to the prospect of our own annihilation; simple fear. But aversion arising from fear is a form of denial, too.

In between scientific reticence and science fiction is science itself. This article is the result of dozens of interviews and exchanges with climatologists and researchers in related fields and reflects hundreds of scientific papers on the subject of climate change. What follows is not a series of predictions of what will happen — that will be determined in large part by the much-less-certain science of human response. Instead, it is a portrait of our best understanding of where the planet is heading absent aggressive action. It is unlikely that all of these warming scenarios will be fully realized, largely because the devastation along the way will shake our complacency. But those scenarios, and not the present climate, are the baseline. In fact, they are our schedule.

The present tense of climate change — the destruction we’ve already baked into our future — is horrifying enough. Most people talk as if Miami and Bangladesh still have a chance of surviving; most of the scientists I spoke with assume we’ll lose them within the century, even if we stop burning fossil fuel in the next decade. Two degrees of warming used to be considered the threshold of catastrophe: hundreds of millions of climate refugees unleashed upon an unprepared world. Now two degrees is our goal, per the Paris climate accords, and experts give us only slim odds of hitting it. The U.N. Intergovernmental Panel on Climate Change issues serial reports, often called the “gold standard” of climate research; the most recent one projects us to hit four degrees of warming by the beginning of the next century, should we stay the present course. But that’s just a median projection. The upper end of the probability curve runs as high as eight degrees — and the authors still haven’t figured out how to deal with that permafrost melt. The IPCC reports also don’t fully account for the albedo effect (less ice means less reflected and more absorbed sunlight, hence more warming); more cloud cover (which traps heat); or the dieback of forests and other flora (which extract carbon from the atmosphere). Each of these promises to accelerate warming, and the geological record shows that temperature can shift as much as ten degrees or more in a single decade. The last time the planet was even four degrees warmer, Peter Brannen points out in The Ends of the World, his new history of the planet’s major extinction events, the oceans were 260 feet higher, and the warming wiped out all but one species of European primates.

The Earth has experienced five mass extinctions before the one we are living through now, each so complete a slate-wiping of the evolutionary record it functioned as a resetting of the planetary clock, and many climate scientists will tell you they are the best analog for the ecological future we are diving headlong into. Unless you are a teenager, you probably read in your high-school textbooks that these extinctions were the result of asteroids. In fact, all but the one that killed the dinosaurs were caused by climate change produced by greenhouse gas. The most notorious was 252 million years ago; it began when carbon warmed the planet by five degrees, accelerated when that warming triggered the release of methane in the Arctic, and ended with 97 percent of all life on Earth dead. We are currently adding carbon to the atmosphere at a considerably faster rate; by most estimates, at least ten times faster. The rate is accelerating. This is what Stephen Hawking had in mind when he said, this spring, that the species needs to colonize other planets in the next century to survive, and what drove Elon Musk, last month, to unveil his plans to build a Mars habitat in 40 to 100 years. These are nonspecialists, of course, and probably as inclined to irrational panic as you or I. But the many sober-minded scientists I interviewed over the past several months — the most credentialed and tenured in the field, few of them inclined to alarmism and many advisers to the IPCC who nevertheless criticize its conservatism — have quietly reached an apocalyptic conclusion, too: No plausible program of emissions reductions alone can prevent climate disaster.

Over the past few decades, the term “Anthropocene” has climbed out of academic discourse and into the popular imagination — a name given to the geologic era we live in now, and a way to signal that it is a new era, defined on the wall chart of deep history by human intervention. One problem with the term is that it implies a conquest of nature (and even echoes the biblical “dominion”). And however sanguine you might be about the proposition that we have already ravaged the natural world, which we surely have, it is another thing entirely to consider the possibility that we have only provoked it, engineering first in ignorance and then in denial a climate system that will now go to war with us for many centuries, perhaps until it destroys us. That is what Wallace Smith Broecker, the avuncular oceanographer who coined the term “global warming,” means when he calls the planet an “angry beast.” You could also go with “war machine.” Each day we arm it more.


Posted in Are We doomed?

5 things that are better in Russia than in England

Is Russia better than England?

Posted in Common, Happiness and Quality of Life

Trump is full of lies and Putin is full of tricks. Who to believe?

At the G-20 meeting in Hamburg, Germany, Donald Trump met with Russian President Vladimir Putin, the man whose thumb was all over the scale that delivered Trump’s victory. It was like a father meeting his offspring.

Tillerson said Trump and Putin focused on “how do we move forward” because “it’s not clear to me that we will ever come to some agreed-upon resolution of that question between the two nations.”

And this whole business of setting up a cybersecurity working group with the Russians is like inviting the burglar to help you design your alarm system.


Posted in Economics and Politics

Ilya Glazunov Has Died

Ilya Glazunov, “Eternal Russia” («Вечная Россия»)

Posted in Values and Sense of Life

The UN calls for an end to the War on Drugs, with “prevention and treatment” as a replacement

The World Health Organization and the United Nations have called for drugs to be decriminalized, for the war on drugs to be brought to an end, and for a shift to “prevention and treatment” as the way of addressing the problem.


Pro-cannabis and anti-heroin mural in Christiania, Denmark.

So we’ve known that the war on drugs flat out doesn’t work. And it’s pretty easy to sum up why. First: People. Like. Drugs. If you take the drugs away, people won’t stop using; they’ll just turn around and pay shady dudes in shady alleys to get them. Drugs are also closely associated with crime in the public mind, but that’s because of the war on drugs, not despite it — if there is no legitimate way to supply demand, a black market will pop up to fill it. Lastly, use of illegal drugs often leads to medical complications and deaths, but again, that’s mostly because of the war on drugs — shady dealers don’t have to worry about health standards, so they can mix anything in, and users are reluctant to go to the ER when something goes south, since they fear legal repercussions.

It goes on like this. I’m not saying drugs aren’t a problem in and of themselves — but many of the issues they’re blamed for are caused by our reaction to the drugs. For a long time, and despite scientists pointing out that prohibition flat out doesn’t work, it seemed that politics was too deeply entrenched in the war on drugs for things to change.

But last month, on the International Day Against Drug Abuse, UN Secretary General António Guterres called for tackling the problem through “prevention and treatment” and by adhering to human rights. As part of a joint release describing how the two bodies say member states should go about ending healthcare discrimination, they called for “reviewing and repealing punitive laws that have been proven to have negative health outcomes,” including those against “drug use or possession of drugs for personal use”.


Posted in Understand and Manage Ourselves

Do we matter in the cosmos?

Although Carl Sagan told them, decades ago, that “We are a way for the Universe to know itself”, they don’t know it, they don’t understand it; they write ‘scientific’ papers about our insignificance and the missing sense of life:

Some 20 years ago I wrote (in Sense and Values):

Our life has deep, cosmic meaning: the latest observations and SETI data suggest that we are possibly the only life form in our Universe. The only complex life form in our galaxy and in our Universe, the Universe’s only attempt of matter to become aware of itself. This creates a grand significance and responsibility for us: we are the only bearers of conscious matter, and we are responsible for preserving and further developing this life form. We are the only form who can and will (must) transfer biological Homo sapiens to another, more appropriate and more stable silicon or carbon substrate.
This is a great and holy challenge, possibility and responsibility; no bigger one is possible. If we manage to pass through the bottleneck of contemporary societies’ problems, we humans will live forever. We will spread through the Universe and begin to manage cosmological processes — create appropriate conditions for the limitless survival of conscious matter, create other universes. In short, the Universe will become conscious and alive. Imants Vilks


Posted in Cosmology, Human Evolution, Values and Sense of Life

Dissolving the ego

You don’t need drugs or a church for an ecstatic experience that helps transcend the self and connect to something bigger

In 1969, the British writer Philip Pullman was walking down the Charing Cross Road in London, when his consciousness abruptly shifted. It appeared to him that ‘everything was connected by similarities and correspondences and echoes’. The author of the fantasy trilogy His Dark Materials (1995-2000) wasn’t on drugs, although he had been reading a lot of books on Renaissance magic. But he told me he believes that his insight was valid, and that ‘my consciousness was temporarily altered, so that I was able to see things that are normally beyond the range of routine ordinary perception’. He had a deep sense that the Universe is ‘alive, conscious and full of purpose’.

What does one call such an experience? Pullman refers to it as ‘transcendent’. The philosopher and psychologist William James called them ‘religious experiences’ – although Pullman, who wrote a fictionalised biography of Jesus, would insist that God was not involved. Other psychologists call such moments spiritual, mystical, anomalous or out-of-the-ordinary. My preferred term is ‘ecstatic’. Today, we think of ecstasy as meaning the drug MDMA or the state of being ‘very happy’, but originally it meant ekstasis – a moment when you stand outside your ordinary self, and feel a connection to something bigger than you. Such moments can be euphoric, but also terrifying.

Over the past five centuries, Western culture has gradually marginalised and pathologised ecstasy. That’s partly a result of our shift from a supernatural or animist worldview to a disenchanted and materialist one. In most cultures, ecstasy is a connection to the spirit world. In our culture, since the 17th century, if you suggest you’re connected to the spirit world, you’re likely to be considered ignorant, eccentric or unwell. Ecstasy has been labelled as various mental disorders: enthusiasm, hysteria, psychosis. It’s been condemned as a threat to secular government. We’ve become a more controlled, regulated and disciplinarian society, in which one’s standing as a good citizen relies on one’s ability to control one’s emotions, be polite, and do one’s job. The autonomous self has become our highest ideal, and the idea of surrendering the self is seen as dangerous. Yet ecstatic experiences are surprisingly common; we just don’t talk about them. The polling company Gallup has, since the 1960s, measured the frequency of mystical experiences in the United States. In 1960, only 20 per cent of the population said they’d had one or more. Now, it’s around 50 per cent. In a survey I did in 2016, 84 per cent of respondents said they’d had an experience where they went beyond their ordinary self, and felt connected to something greater than them.

The most common word used when describing such experiences is ‘connection’ – we briefly shift beyond our separate self-absorbed egos, and feel deeply connected to other beings, or to all things. Some interpret these moments as an encounter with the divine, but not all do. The philosopher Bertrand Russell, for example, also had a ‘mystic moment’ when he suddenly felt filled with love for people on a London street. The experience didn’t turn him into a Christian, but it did turn him into a life-long pacifist.


It seems to me that humans have always sought ecstasy. The earliest human artefacts – the cave paintings of Lascaux – are records of Homo sapiens’ attempt to get out of our heads. We have always sought ways to ‘unself’, as the writer Iris Murdoch called it, because the ego is an anxious, claustrophobic, lonely and boring place to be stuck. As the author Aldous Huxley wrote, humans have ‘a deep-seated urge to self-transcendence’. However, we can get out of our ordinary selves in good and bad ways – what Huxley called ‘healthy and toxic transcendence’. How can we seek ecstasy in a healthy way? In its most common-or-garden variety, we can seek what the psychologist Mihaly Csikszentmihalyi called ‘flow’. By this he meant moments where we become so absorbed in an activity that we forget ourselves and lose track of time. We could lose ourselves in a good book, for example, or a computer game. The author Geoff Dyer, who’s written extensively on ‘peak experiences’, says: ‘If you asked me when I’m most in the zone, obviously it would be playing tennis. That absorption in the moment, I just love it.’ Others shift their consciousness by going for a walk in nature, where they find what the poet William Wordsworth called ‘the quiet stream of self-forgetfulness’. Or we turn to sex, which the feminist Susan Sontag called the ‘oldest resource which human beings have available to them for blowing their mind’.

A third way that people seek ecstasy today is through religious worship. In his classic text Varieties of Religious Experience (1902), William James noted that surrendering to a higher power often triggered deep psychological healing and growth. The experience of Bill Wilson, co-founder of Alcoholics Anonymous (AA), is one notable example of this: after decades of struggling with alcohol dependence, he finally surrendered to a God he barely believed in: ‘Suddenly the room lit up with a great white light. I was caught up in an ecstasy which there are no words to describe … it burst upon me that I was a free man.’

Psychologists and psychiatrists are moving from their traditional hostility to ecstasy to an understanding that it’s often good for us. Much of our personality is made up of attitudes that are usually subconscious. We drag around buried trauma, guilt, feelings of low self-worth. In moments of ecstasy, the threshold of consciousness is lowered, people encounter these subconscious attitudes, and are able to step outside of them. They can feel a deep sense of love for themselves and others, which can heal them at a deep level. Maybe this is just an opening to the subconscious, maybe it’s a connection to a higher dimension of spirit – we don’t know.

Our behavior is determined by two sources: 1) the common, axiomatic laws of information theory, and 2) our human history, which created the genetic heritage we call instincts and needs.

An axiomatic law for all conscious systems is the necessity of punishment and reward: survival-oriented, guided behavior is a must. Without punishment and reward there would be no emotions and no goals. It seems that the need and ability to experience transcendence and ecstasy comes from our genetics: it seems to be our gift, like attachment and love. I.V.

Posted in Happiness and Quality of Life, Values and Sense of Life

What Intelligent Machines Need to Learn From the Neocortex


Posted 2 Jun 2017 | 15:00 GMT

Machines won’t become intelligent unless they incorporate certain features of the human brain. Here are three of them

Computers have transformed work and play, transportation and medicine, entertainment and sports. Yet for all their power, these machines still cannot perform simple tasks that a child can do, such as navigating an unknown room or using a pencil.

The solution is finally coming within reach. It will emerge from the intersection of two major pursuits: the reverse engineering of the brain and the burgeoning field of artificial intelligence. Over the next 20 years, these two pursuits will combine to usher in a new epoch of intelligent machines.

Why do we need to know how the brain works to build intelligent machines? Although machine-learning techniques such as deep neural networks have recently made impressive gains, they are still a world away from being intelligent, from being able to understand and act in the world the way that we do. The only example of intelligence, of the ability to learn from the world, to plan and to execute, is the brain. Therefore, we must understand the principles underlying human intelligence and use them to guide us in the development of truly intelligent machines.


There are good science articles, and there is science-like trash. I will let the reader determine which this is. I.V.

Posted in Artificial Intelligence

Non-science or trash

Fari Amini and his co-authors, in ‘A General Theory of Love’, show persuasively how the nearly sacred infatuation and attachment work in all mammals. Helen Fisher, Ph.D., writes that this infatuation is necessary: it gives us the ability to endure our mate for the rest of our life. What does this ‘campaign’ have in common with human values?

“The prospect of creating an AI invites us to ask about the purpose and meaning of being human: what a human is for in a world where we are not the only workers, not the only thinkers, not the only conscious agents shaping our destiny.”

We know that we are descendants of primates who created superstitions and science, and this science teaches us that we are the only workers, thinkers and conscious agents shaping our destiny. Renouncing this is suicidal. I.V.




Posted in Values and Sense of Life

Making Humans a Multi-Planetary Species

Elon Musk, Chief Executive Officer, SpaceX, Hawthorne, California.

By talking about the SpaceX Mars architecture, I want to make Mars seem possible—make it seem as though it is something that we can do in our lifetime. There really is a way that anyone could go if they wanted to.


I think there are really two fundamental paths. History is going to bifurcate along two directions. One path is we stay on Earth forever, and then there will be some eventual extinction event. I do not have an immediate doomsday prophecy, but eventually, history suggests, there will be some doomsday event.

The alternative is to become a space-bearing civilization and a multi-planetary species, which I hope you would agree is the right way to go. So how do we figure out how to take you to Mars and create a self-sustaining city—a city that is not merely an outpost but which can become a planet in its own right, allowing us to become a truly multi-planetary species?

Posted in Cosmology

How People Stole in the USSR

Contrary to popular belief, petty theft in everyday life was ubiquitous in the USSR, yet among the petty thieves themselves it was not considered theft at all.

For example, short-weighting customers in a shop or altering the recipes in a canteen, carrying a couple of bearings home from the factory, siphoning off twenty litres of diesel at the motor depot — all of this happened everywhere and was not considered shameful. Why was that? First, because of the universal poverty, spread evenly across all layers of society. Second, because of the universal shortage of essential goods and services. Plumbing gaskets could always be had from “your” plumber (who had pinched them from some warehouse), and the car parts unavailable in the shops would miraculously turn up at “your” mechanic’s. One could say that such “grassroots corruption” is practically inevitable in poor countries; within the society itself it is not seen as shameful — people simply survive as best they can.

The third component of this thieving was the absence of private property as such, in the broad sense of the word. The phrase “everything around is the kolkhoz’s, everything around is nobody’s” captures the situation of those years very well: taking “nobody’s” firewood or “nobody’s” scrap metal was considered almost normal, and hardly theft at all.

At bottom, the overwhelming majority of people are honest and decent. Consciously bad people — those who think of themselves, “I am a thief, I am a crook” — are few. Most often people steal when their inner sense of justice permits it, when they justify the theft with the thought that they are not stealing but evening out an injustice done to them or to others. According to this feeling, the injustice was done to us by the state, less often by our fellow citizens.

Most of the people who take bribes and rob the state and society do so not with the thought that they are acting badly, being dishonest, being thieves, but with the thought that their actions even out an injustice done to them (there are, of course, many other positions as well, for example “everyone does it” or “nothing can be done about it, so I must take too”): the state does not care about genuine honesty; the state lets politicians, businessmen and officials of various institutions steal with impunity (for example, by granting themselves unjustifiably large salaries, and so on), so why shouldn’t I take as well? We see this in the example given at the beginning: the Soviet state gave ample reason to believe that the state had robbed its people, that the system was unjust. Unfortunately, many contemporary societies are unmistakably approaching this historical situation.

A less harsh but quite widespread method of “restoring justice” is not paying taxes. Sometimes the state has imposed such high taxes on economic activity, and such low risks for non-payment, that an honest businessman cannot compete with those who do not pay; this prompts the rest not to pay either.

If we want people to stop stealing en masse, we must start with the state: the state must act so that the majority of the population unmistakably feels that the big thieves, those who take millions, are caught and punished. That justice and honesty prevail in the country.

Interestingly, honesty in relations between family members is higher than honesty in citizen-state relations. It turns out that the state does not foster people’s honesty but, on the contrary, provokes them into forming special, less honest relations. Why is this so? One reason is that in a family the feedback acts almost instantly, whereas in state-citizen relations it acts with some delay, and sometimes (when one manages to rob the state “successfully”) it does not act at all. There is only one conclusion: the rules of the system must be changed.

Can this be achieved, is it possible? In the present society, with the present laws — no. What should be done, what could be done? The principles of punishment must be changed radically: the penalty must depend on the damage done to society. (Many such measures are needed, among them a radical increase in penalties, so that the gain from stealing, weighted by the probability of being caught, is sufficiently smaller than the gain from not stealing. Put simply, the penalty must make stealing unprofitable. More precisely still, the penalty must be such that roughly 95-99% of the population thinks that stealing does not pay.)
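The rule that the penalty must make stealing unprofitable can be read as an expected-value comparison. Here is a minimal sketch of that reading; the function name, parameters and the example numbers are my own illustration, not from the text:

```python
# Sketch of the deterrence rule: stealing is deterred when the expected
# value of theft (gain if you get away with it, minus the penalty if
# caught) falls below the gain from staying honest.

def theft_is_profitable(gain_from_theft: float,
                        probability_of_escape: float,
                        penalty: float,
                        gain_from_honesty: float) -> bool:
    """Return True if the expected value of stealing exceeds honesty."""
    p_caught = 1.0 - probability_of_escape
    expected_theft = (probability_of_escape * gain_from_theft
                      - p_caught * penalty)
    return expected_theft > gain_from_honesty

# With only a 10% chance of being caught, the penalty must be large:
print(theft_is_profitable(1000, 0.9, 500, 0))    # True: weak penalty
print(theft_is_profitable(1000, 0.9, 20000, 0))  # False: strong penalty
```

In this model deterrence can be tuned two ways: by raising the penalty, or by raising the probability of being caught.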

If an official of a state institution steals 1000 euros, the damage done to society is incomparably greater than the damage done to the state by someone who counterfeits 1000 euros, because the former has discredited the state. The present laws (and the so-called “human rights”) are constructed in such a way that this change of principles is impossible.

The situation resembles a disorder described by psychotherapists, in which the patient has “fallen” onto one bank of the river of emotions — the American psychotherapist Dr. D. Siegel calls it “rigidity” — and cannot free himself from it on his own. Such system states are also known in automatic control theory: the output parameter — in this case the cooperation between citizens and the state — takes one extreme value, from which positive feedback (here, the citizens’ emotions, which determine their actions) prevents any escape.
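The control-theory point, that positive feedback drives a bounded output to one extreme and holds it there, can be shown with a toy iteration. The update rule, gain and bounds below are invented for illustration:

```python
# Toy model of a positively fed-back system: any small deviation is
# amplified each step until the output saturates at a physical limit,
# and once there it stays locked.

def step(x: float, gain: float, lo: float = -1.0, hi: float = 1.0) -> float:
    """One update of a system whose output feeds back on itself."""
    x = x + gain * x             # positive feedback amplifies deviation
    return max(lo, min(hi, x))   # physical limits clip the output

x = 0.01                         # a tiny initial deviation
for _ in range(50):
    x = step(x, gain=0.5)
print(x)  # 1.0 — locked at the upper bound
```

Without an external intervention that changes the feedback itself (in the essay's terms: changing the rules of the system), no small correction can move the state off the bound.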

That is why nothing will change, not only in Latvia but in the whole world. Social inequality built by dishonest means will keep growing, and so will mass theft and moral degradation. Do politicians not know these simple laws? They know, but they are not going to change them. Why? Because the way things are suits them. Or perhaps also because “nothing can be done about it”, because “that’s life”. I.V.

Posted in Contemporary Society Problems, Understand and Manage Ourselves, Values and Sense of Life

The Syrian War, Simply Explained

In the wake of the Arab Spring, a good five years ago, in March 2011, civil war broke out in Syria. What began with supposedly peaceful protests against the government of President Bashar al-Assad soon developed into such a complicated and opaque war that one can hardly speak of a civil war any longer. The most diverse armed groups are involved, from terrorist to Kurdish organizations, fighting against the government troops, but also against one another or alongside the government troops. In a lecture on 30 May 2016, the Swiss historian and peace researcher Dr. phil. Daniele Ganser offered a possible and simple answer to the cause of the Syrian war.

Posted in Economics

Garry Kasparov: Don’t fear intelligent machines. Work with them

One of the greatest chess players in history, Garry Kasparov lost a memorable match to a supercomputer in 1997. Now he shares his vision for a future where intelligent machines help us turn our grandest dreams into reality.

We must face our fears if we want to get the most out of technology — and we must conquer those fears if we want to get the best out of humanity, says Garry Kasparov. One of the greatest chess players in history, Kasparov lost a memorable match to IBM supercomputer Deep Blue in 1997. Now he shares his vision for a future where intelligent machines help us turn our grandest dreams into reality.

There is one thing that only humans can do: dream. Yet I surmise that AI machines will dream too. They will replace us. I.V.

Stuart Russell: 3 principles for creating safer AI:

Posted in Artificial Intelligence

This Is How We Live. Professor Daniele Ganser

Many who don’t like his ideas call them conspiracy theories. It seems to me they are no more a conspiracy than the theory of evolution, or than our primate needs and behavior. I.V.

Posted in Economics and Politics

Giving Away Your Billion

Recently I’ve been reading the Giving Pledge letters. These are the letters that rich people write when they join Warren Buffett’s Giving Pledge campaign. They take the pledge, promising to give away most of their wealth during their lifetime, and then they write letters describing their giving philosophy.

“I suppose I arrived at my charitable commitment largely through guilt,” writes George B. Kaiser, an oil and finance guy from Oklahoma, who is purported to be worth about $8 billion. “I recognized early on that my good fortune was not due to superior personal character or initiative so much as it was to dumb luck. I was blessed to be born in an advanced society with caring parents. So, I had the advantage of both genetics … and upbringing.”

Kaiser decided he was “morally bound to help those left behind by the accident of birth.” But he understood the complexities: “Though almost all of us grew up believing in the concept of equal opportunity, most of us simultaneously carried the unspoken and inconsistent ‘dirty little secret’ that genetics drove much of accomplishment so that equality was not achievable.”

His reading of modern brain research, however, led to the conclusion that genetic endowments can be modified by education, if you can get to kids early. Kaiser has directed much of his giving to early childhood education.


Most of the letter writers started poor or middle class. They don’t believe in family dynasties and sometimes argue that they would ruin their kids’ lives if they left them a mountain of money. Schools and universities are the most common recipients of their generosity, followed by medical research and Jewish cultural institutions. A ridiculously disproportionate percentage of the Giving Pledge philanthropists are Jewish.

Older letter writers have often found very specific niches for their giving — fighting childhood obesity in Georgia. Younger givers, especially the tech billionaires, are vague and less thoughtful.

A few letters burn with special fervor. These people generally try to solve a problem that touched them directly. Dan Gilbert, who founded Quicken Loans, had a son born with neurofibromatosis, a genetic condition that affects the brain. Gordon Gund went fully blind in 1970. Over the ensuing 43 years, he and his wife helped raise more than $600 million for blindness research.

The letters set off my own fantasies. What would I do if I had a billion bucks to use for good? I’d start with the premise that the most important task before us is to reweave the social fabric. People in disorganized neighborhoods need to grow up enmeshed in the loving relationships that will help them rise. The elites need to be reintegrated with their own countrymen.

Only loving relationships transform lives, and such relationships can be formed only in small groups. Thus, I’d use my imaginary billion to seed 25-person collectives around the country. 

A collective would be a group of people who met once a week to share and discuss life. Members of these chosen families would go on retreats and celebrate life events together. There would be “clearness committees” for members facing key decisions.

The collectives would be set up for people at three life stages. First, poor kids between 16 and 22. They’d meet in the homes of adult hosts and help one another navigate the transition from high school to college.

Second, young adults across classes between 23 and 26. This is a vastly under-institutionalized time of life when many people suffer a Telos Crisis. They don’t know why they are here and what they are called to do. The idea would be to bring people across social lines together with hosts and mentors, so that they could find a purpose and a path.

Third, successful people between 36 and 40. We need a better establishment in this country. These collectives would identify the rising stars in local and national life, and would help build intimate bonds across parties and groups, creating a baseline of sympathy and understanding these people could carry as they rose to power.

The collectives would hit the four pressure points required for personal transformation:

Heart: By nurturing deep friendships, they would give people the secure emotional connections they need to make daring explorations.

Hands: Members would get in the habit of performing small tasks of service and self-control for one another, thus engraving the habits of citizenship and good character.

Head: Each collective would have a curriculum, a set of biographical and reflective readings, to help members come up with their own life philosophies, to help them master the intellectual virtues required for public debate.

Soul: In a busy world, members would discuss fundamental issues of life’s purpose, so that they might possess the spiritual true north that orients a life.

The insular elites already have collectives like this in the form of Skull and Bones and such organizations. My billion would support collectives across society, supporting the homes and retreats where these communities would happen, offering small slush funds they could use for members in crisis.

Now all I need is a hedge fund to get started.

Posted in Human Evolution

Curtains For Us All?

Martin Rees



Posted in Human Evolution

Carl Sagan on God


Carl Sagan’s Best Arguments Of All Time:

Posted in Cosmology, Understand and Manage Ourselves, Values and Sense of Life

The Conceptual Penis as a Social Construct: A Sokal-Style Hoax on Gender Studies

The Hoax

The androcentric scientific and meta-scientific evidence that the penis is the male reproductive organ is considered overwhelming and largely uncontroversial.

That’s how we began. We used this preposterous sentence to open a “paper” consisting of 3,000 words of utter nonsense posing as academic scholarship. Then a peer-reviewed academic journal in the social sciences accepted and published it.

This paper should never have been published. Titled, “The Conceptual Penis as a Social Construct,” our paper “argues” that “The penis vis-à-vis maleness is an incoherent construct. We argue that the conceptual penis is better understood not as an anatomical organ but as a gender-performative, highly fluid social construct.” As if to prove philosopher David Hume’s claim that there is a deep gap between what is and what ought to be, our should-never-have-been-published paper was published in the open-access (meaning that articles are freely accessible and not behind a paywall), peer-reviewed journal Cogent Social Sciences. (In case the PDF is removed, we’ve archived it.)

Assuming the pen names “Jamie Lindsay” and “Peter Boyle,” and writing for the fictitious “Southeast Independent Social Research Group,” we wrote an absurd paper loosely composed in the style of post-structuralist discursive gender theory. The paper was ridiculous by intention, essentially arguing that penises shouldn’t be thought of as male genital organs but as damaging social constructions. We made no attempt to find out what “post-structuralist discursive gender theory” actually means. We assumed that if we were merely clear in our moral implications that maleness is intrinsically bad and that the penis is somehow at the root of it, we could get the paper published in a respectable journal.

Manspreading — a complaint levied against men for sitting with their legs spread wide — is akin to raping the empty space around him.

This already damning characterization of our hoax understates our paper’s lack of fitness for academic publication by orders of magnitude. We didn’t try to make the paper coherent; instead, we stuffed it full of jargon (like “discursive” and “isomorphism”), nonsense (like arguing that hypermasculine men are both inside and outside of certain discourses at the same time), red-flag phrases (like “pre-post-patriarchal society”), lewd references to slang terms for the penis, insulting phrasing regarding men (including referring to some men who choose not to have children as being “unable to coerce a mate”), and allusions to rape (we stated that “manspreading,” a complaint levied against men for sitting with their legs spread wide, is “akin to raping the empty space around him”). After completing the paper, we read it carefully to ensure it didn’t say anything meaningful, and as neither one of us could determine what it is actually about, we deemed it a success. […]


That we are witnessing the degradation of society and culture, that we are its participants and executors — this we already knew. But seeing yet another manifestation of it is sometimes not merely pitiable; to some it may even seem funny. In truth, there is nothing funny about it.

That the majority of people are not interested in the great processes is shown by what they read, by how they solve crosswords and sudoku, play ball games on the internet, watch TV, and talk about the weather, their ailments, and their tomato and flower seedlings. Not interested? With good reason: no one can influence or change anything anyway. And no one will. To change anything, the thinking, values and patterns of the average mass man would have to change, and only events that affect everyone, en masse, can do that. This seems to be a harsh law and paradox of evolution, one that today’s evolutionary scientists know but that mass people have not thought to apply to themselves: the unfit is not somehow “re-educated, changed or influenced” — it is discarded, and the new comes in its place. And, harsher still: even if someone does realize this, it changes nothing. We must all go together toward the common, lawful, indifferent, “honestly earned” end.

The mass man is a dull semi-automaton programmed by evolution, and he is not guilty of obeying his program. We all do so, and we are not guilty. But those who have seen and understood themselves are responsible, and if they break the laws they have seen and understood, then they are guilty. Compared with the lifetime of one person, of one generation, however, this is a long process.

When will something change? Only when the subjects of the population see the main, most important value: the preservation and survival of themselves and their population on a large time scale. This civilization has not yet reached that point; only a few of its outstanding thinkers (Stephen Hawking, Carl Sagan, E.O. Wilson) have said it, and amid the general jumble of entertainment and sensation their words change nothing. I.V.

Here is another, equally “valuable”, scientific work:

Why flamingos stand on one leg [Life Lines]
Posted in Are We doomed?, Understand and Manage Ourselves