The singularity: fact or fiction?

In this paper I seek to shed light on the notion of a "technological singularity". I start off by defining it as a kind of artificial intelligence that is capable of self-improving, thus rapidly increasing in intelligence and leaving us far behind. The first step towards this will probably be creating human-level artificial general intelligence (AGI). To create AGI we can either use whole-brain emulation or write the software from scratch. Next, a list of possible accelerators is provided, which could speed up the process of achieving AGI. I then take a look at existing software that already gives us a good starting point for creating AGI. It is argued that once we create this AGI, there is no reason to think it can't produce superhuman intelligence, due to the advantages of its substrate. In chapter three I present possible problems for the notion of a singularity. An enumeration of possible speed bumps is provided, after which the slowdown hypothesis of Plebe and Perconti is explored.
Finally, Theodore Modis's argument is examined. His claim that exponential patterns do not actually exist in nature is presented but found wanting. In conclusion, I state that the singularity has many reasonable arguments going for it and that we should take those arguments seriously.
Keywords: technological singularity, artificial intelligence, intelligence explosion.
Chapter 1. Introduction
In this paper I wanted to explore a domain of which I didn't really know much: artificial intelligence. Because philosophy is my field of research, this paper doesn't consist of very technical notions from computer science, mathematics or other fields related to AI. I would rather explore what the field of AI could mean for our society, and what influence it could possibly exert over us as a species.
Philosophers are usually portrayed as old men with scruffy beards who concern themselves only with the past and with other philosophers who lived long ago. However, being young and tech-savvy myself, I wanted to look not at the past but at the future. In an increasingly technology-dependent society it seems appropriate to me that philosophers not be blind to what the technological future might hold, and that they think about certain plausible scenarios. It could very well be important to imagine possible futures so that we can anticipate them and take precautions when necessary.
It is not only I who have taken an interest in artificial intelligence, but also some major figures such as Stephen Hawking and Elon Musk. Both of them have warned of the dangers of potentially malevolent AI. Most people think they have been watching too many Terminator movies and that their claims are based on hot air. This, however, is a mistake, as becomes apparent when reading the papers of various scholars within the field. The fact that this is already being spoken about might be an indication that the field of artificial intelligence is quickly bearing results.
Because of my interest in technology I soon became familiar with the term "technological singularity". There are a great many ways in which this term is defined, and I didn't really have a clue what the fuss was all about. With this paper I seek to shed some light on the matter.
I will try to clear up the notion of a "technological singularity". Next, I aspire to give several reasons for whether and how that scenario might come about. And lastly, I will strive to give reasons why it possibly won't come about, or which problems it might run into. It should always be kept in mind that we are talking about a potential future and that we can never be certain. This doesn't mean we should remain silent about the matter, because potential problems could be avoided if we were able to anticipate them.
1.1 What is the technological singularity?
So what is this "technological singularity"? The best way to start off this section is with a definition: "An event or phase that will radically change human civilization, and perhaps even human nature itself" (Eden et al., 2012, p. 1). This event that might radically change human civilization comes about because of accelerating progress in technologies such as artificial intelligence, robotics and nanotechnology. If we take a look at our history, we can clearly observe a trend of technologies emerging faster and faster, at ever-increasing speed. One example is Moore's law, which states that the number of transistors on a chip doubles approximately every two years. If we plotted this on a graph, we would see its exponential nature very clearly. Moore's law in itself isn't something to be very concerned about; it's the property of exponentiality that concerns some. If we were to apply this property to advances in artificial intelligence, some very interesting things could come about (Eden et al., 2012).
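As an illustration of the kind of growth Moore's law describes, the following sketch projects a doubling every two years. The starting count and time span are invented for the example, not historical data:

```python
def moores_law(initial_transistors, years, doubling_period=2):
    """Project a transistor count that doubles every `doubling_period` years."""
    return initial_transistors * 2 ** (years / doubling_period)

# Purely illustrative starting point: a chip with one million transistors.
# Twenty years means ten doublings, i.e. a 1024-fold increase.
projection = moores_law(1_000_000, 20)
```

Plotting `moores_law` over time would produce exactly the steeply rising curve the text alludes to.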
For example, some researchers believe that once we create an AI that is capable of improving itself, this will cause a runaway effect: the AI could improve itself, this improved version could improve itself yet again, and so on. Because no humans are involved in this process, it could very easily happen that we cannot grasp what direction these improvements will take, since the AI is probably going to be vastly more intelligent than us. It is precisely this that concerns some scholars. Singularitarians, i.e. people who believe that the singularity will happen, subscribe to the technological singularity thesis as described above. This version of the singularity thesis is based on the creation of an artificial superintelligent agent; it could be a software-based synthetic mind (Muehlhauser and Salamon, 2012). However, there is another possibility.
This scenario does not depend on making an AI external to ourselves; instead, it depends on the idea of improving ourselves. This notion is central to transhumanist thought: the belief that by utilizing cutting-edge technologies such as nanotechnology and genetic engineering, we could enhance ourselves to the point where we become a post-human race. This post-human race would overcome some of our human limitations, such as aging, death and disease. A popular name associated with this train of thought is Ray Kurzweil (Kurzweil, 2005).
There are many ways in which all of this could come about or end up not coming about. But the fact that this is considered a very serious issue is reflected in the fact that more research is getting published each year (Eden et al., 2012). However, not everyone shares the singularitarians' optimism. In the following chapters I am going to examine the singularitarians' arguments and the arguments against the singularity hypothesis. For reasons of clarity, I am going to narrow down the scope of this paper. The focus will be specifically on the first scenario: bringing forth the singularity by means of developing AGI, which would supposedly create a runaway effect.
Chapter 2. Corroborating the singularity
David Chalmers has eloquently put the crux of the argument into words in his paper "The Singularity: A Philosophical Analysis" (Chalmers, 2010). I will reproduce this argument here because it constitutes a good starting point. He formalizes the argument as follows:
1. There will be AI+.
2. If there is AI+, there will be AI++.
3. There will be AI++.
AI+ denotes the kind of intelligence that is capable of self-improving. To be able to self-improve, the AI should be capable of understanding its own design (Loosemore and Goertzel, 2012). AI++ denotes a kind of superintelligence. Most people agree with premise 1, though it still needs some support to really back it up. If premise 2 holds true, an intelligence explosion will occur. If this repeats itself, superintelligence will arrive and we can speak of a singularity emerging (Chalmers, 2010).
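The runaway dynamic behind premise 2 can be caricatured with a toy model. Everything here is an invented assumption for illustration, not anything from Chalmers: each generation designs a successor a fixed fraction smarter than itself, and we count how quickly a large intelligence gap opens up:

```python
def generations_until(target, start=1.0, improvement=1.1):
    """Count successive self-improvement steps until `target` intelligence
    is exceeded, assuming a fixed multiplicative gain per generation."""
    level, generations = start, 0
    while level < target:
        level *= improvement
        generations += 1
    return generations

# With an (arbitrary) 10% gain per generation, even a hundredfold
# intelligence gap opens up in fewer than fifty design steps.
steps = generations_until(100.0)
```

The point of the sketch is only that compounding improvement, if it occurs at all, reaches large multiples quickly; it says nothing about whether premise 2 is true.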
The argument above does assume that there is such a thing as intelligence and that it can be improved. Some might challenge this by claiming that there is no property worth calling intelligence, but this seems a bit far-fetched. If one can agree that humans are more intelligent than, for example, snails, it does seem reasonable to claim that there can be beings who are more intelligent than humans (Chalmers, 2010).
The first step in achieving a technological singularity would be to achieve what is called human-level AI (AI+). This is also commonly referred to as general AI, as distinct from narrow AI such as the chess computer Deep Blue (Richards and Shaw, 2004). It should not be assumed, though, that this general AI (henceforth AGI) must reason as a human does; it merely has to be able to perform the same tasks as humans.
To avoid having in mind an anthropomorphic definition of intelligence, I will quote a suitable definition:
Intelligence measures an agent’s capacity for efficient cross-domain optimization of the world according to the agent’s preferences. (Muehlhauser and Salamon, 2012, p. 17)
There seem to be two ways in which one can pursue the goal of creating an AGI. One is whole-brain emulation: using a computer, one emulates all of the brain's structures in order to reproduce human cognition. The other option is to write the software from scratch. The first option has the advantage of relying on a basis of millions of years of evolution, while the second has the advantage of much greater flexibility (Muehlhauser and Salamon, 2012).
Let’s take a closer look at the emulation argument. This argument relies on some premises which can be formalized as follows:
1. The human brain is a machine.
2. We will have the capacity to emulate this machine.
3. If we emulate this machine, there will be AGI.
4. Absent defeaters, there will be AGI.
The first premise seems well supported by what we have learned from biology. Fundamentally, there seems to be no reason to doubt that every living organism is in essence a machine, some more complicated than others. We also have good reasons to believe the second premise, as we have already succeeded in emulating simpler life forms such as C. elegans (Palyanov et al., 2012). The third premise relies on the fact that if we emulate something approximately one-to-one, there is no reason to believe that the relevant characteristics wouldn't be emulated as well. So if we were able to emulate a whole human brain, there are reasons to believe we would have emulated human intelligence. The final conclusion of this argument is that, absent defeaters, AGI will follow (Chalmers, 2010).
The next argument I will present is the evolution argument:
1. Evolution produced human-level intelligence.
2. If evolution produced human-level intelligence, then we can produce AGI.
3. Absent defeaters, there will be AGI.
It is assumed in this argument that if evolution was able to produce human-level intelligence, surely we could achieve it as well. Evolution doesn't require any forethought; it is simply the mechanism of natural selection at work. There seems to be no reason why we couldn't do it much faster, since we, in contrast to evolutionary mechanisms, are intelligent (Chalmers, 2010).
Whilst it is always a risky business to make predictions about the future, and many have failed, doing so can still be very useful in determining where to spend money and research effort (Bostrom, 2007). There isn't really a reliable method for making accurate long-term predictions, but there are some things that should be able to convince sceptics that the singularity has a reasonable chance of happening. Enough, perhaps, to consider taking action; how this should be done, and whether we ought to act at all, falls outside the scope of this paper. The following section will largely be based on Muehlhauser and Salamon (2012), because they do an excellent job of enumerating possible accelerators.
2.1 Accelerators
1. More hardware
There are good reasons to expect that our hardware will be much more powerful in the near future, because of current trends like Moore's law. Though some academics disagree on whether or not Moore's law will hold, this need not be a great problem (Lundstrom, 2003; Mack, 2011). Even if hardware stops advancing exponentially and slows down to linear advancement, progress is still being made. One important thing to note, according to Muehlhauser and Salamon, is that better hardware doesn't correspond directly to artificial intelligence; it merely gives us more opportunities to run kinds of software that might be too complex to compute with the hardware resources we have today (Muehlhauser and Salamon, 2012).
2. Better algorithms
Some people dispute the usefulness of mathematics, but one domain where mathematics is crucial is AI. Mathematical breakthroughs can mean that better algorithms are designed, greatly increasing the efficiency with which computers compute things. Muehlhauser and Salamon give a clear-cut example:
For example, IBM's Deep Blue played chess at the level of world champion Garry Kasparov in 1997 using about 1.5 trillion instructions per second (TIPS), but a program called Deep Junior did it in 2003 using only 0.015 TIPS (Richards and Shaw, 2004). (Muehlhauser and Salamon, 2012, p. 22)
This is a valuable example: raw computing power doesn't always bring the best results if it isn't combined with a clever algorithmic design of the program (Muehlhauser and Salamon, 2012).
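The figures in the quote imply a striking algorithmic gain, which a trivial computation makes explicit:

```python
# Figures from the quote above (trillion instructions per second).
deep_blue_tips = 1.5      # Deep Blue, 1997
deep_junior_tips = 0.015  # Deep Junior, 2003

# How many times less raw computation Deep Junior needed for
# world-champion-level play: a hundredfold reduction in six years,
# attributable to better algorithms rather than better hardware.
algorithmic_gain = deep_blue_tips / deep_junior_tips
```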
3. Massive datasets
The use of large datasets facilitates progress in speech recognition and translation software. It is therefore no surprise that big companies like Google and Apple, which are able to gather large amounts of data, collaborate with researchers working on speech recognition and translation software. There is reason to believe that datasets will continue to grow larger over time, thus enabling advances in certain domains of AI (Muehlhauser and Salamon, 2012).
4. Progress in psychology and neuroscience
A great way to uncover information about what it is to be intelligent is to look at the human brain, because that happens to be the most intelligent object hitherto known. It is believed by some that advances in cognitive science and neuroscience will stimulate advances in AI (Van der Velde, 2010). This isn't a very astonishing claim, as such cross-fertilization is already taking place: neural networks and reinforcement learning, for example, have already greatly contributed to progress in AI (Arel, 2012). A recent effort, the OpenWorm project, has mapped the connections between the 302 neurons of a worm (C. elegans) and simulated them in software. The project's ultimate goal is to completely simulate C. elegans as a virtual organism. What is quite astonishing is that they uploaded that neural network into a Lego robot, and it behaved more or less as C. elegans normally does.
5. Accelerated science
Muehlhauser and Salamon also note that it is important to take into account the fact that many countries are developing quickly, which amounts to more research being conducted. An interesting statistic states that the world's scientific output grew by one third from 2002 to 2007 alone, mainly because of emerging nations like India and China (Muehlhauser and Salamon, 2012). Some inventions can also boost progress: the fMRI, for example, was a great tool for accelerating neuroscience. Even scientists themselves can be enhanced to increase scientific output, say Bostrom and Sandberg (2009).
6. Economic incentive
Some companies may want to invest in and collaborate with researchers, because it could prove cheaper to use AI and automate some, or even large chunks, of the work (Brynjolfsson and McAfee, 2011). There are many advantages to robots working for you: they do not have to sleep or take breaks and, most importantly, don't have to receive a monthly payment. Amazon is a good example of a company that already uses robots to move items in its stockrooms.
7. First-mover incentive
A last possible accelerator that Muehlhauser and Salamon identify is the first-mover incentive. This essentially means that people will want to be the first to acquire a new technology, because doing so will give them a great advantage over their competitors (Gubrud, 1997). It would be a case of bringing a gun to a knife fight, as they say. A quote illustrates more clearly how this could increase the speed of development drastically:
Thus, political and private actors who realize that AI is within reach may devote substantial resources to developing AI as quickly as possible, provoking an AI arms race. (Muehlhauser and Salamon, 2012, p. 23)
2.2 Formal breakthroughs?
If we want to create a good formalism for an AGI, we will have to take into account many things it should be able to do. First of all, it should be able to make sense of data via induction. Induction is a major problem in philosophy, and here it boils down to the fact that the AGI should be able to generalize rules from large amounts of data (Hutter, 2009). This has to be achieved so it can form a model of the world. Forming a model of the world is crucial to making predictions, and predictions matter because they help us make decisions. If, for example, we wanted to determine whether to wear a coat or a swimsuit, it makes sense to be able to predict the weather; this helps us make the right decision. The next thing to do is take action: we want our AGI to be able to execute certain actions (Hutter, 2009). Exactly so: when AGI machines use induction to automatically, spontaneously, continuously and without limit build models of the external world (each movement of a robot corresponds to a model of its own, and there are millions of them), use these models to predict events, and choose their actions based on those predictions, then we will be able to say that AGI has been achieved. Humans and animals use most of these simple models unconsciously. Moreover, only humans, using the symbols of various languages (mathematics, physics, chemistry, cosmology, the social sciences), build models of large systems in their minds.
If we can create an AGI that can learn via induction, predict possible futures, make decisions according to those predictions and, finally, execute the actions once it has made a decision, we are on the right track. According to Muehlhauser and Salamon, many futurists and philosophers aren't aware of the progress being made towards self-improving AI (Muehlhauser and Salamon, 2012). To state their case, they mention two important formalisms of AGI, each of which claims to be capable of just what we ask of our AGI.
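The induction-prediction-decision-action cycle described above can be sketched as a minimal agent loop. This is a bare-bones illustration, not Hutter's formalism: the frequency-counting "model" and the coat/swimsuit decision rule are invented for the example:

```python
from collections import Counter

class MinimalAgent:
    """Toy agent: induce a model from observations, predict, decide, act."""

    def __init__(self):
        self.model = Counter()  # induction: frequency counts of past outcomes

    def observe(self, outcome):
        self.model[outcome] += 1

    def predict(self):
        # Predict the most frequently observed outcome so far.
        return self.model.most_common(1)[0][0]

    def act(self):
        # Decision rule (invented for the example): dress for the prediction.
        return "wear a coat" if self.predict() == "cold" else "wear a swimsuit"

agent = MinimalAgent()
for day in ["cold", "cold", "warm", "cold"]:
    agent.observe(day)
action = agent.act()  # the model has seen "cold" most often
```

Real proposals replace the frequency count with far richer inductive machinery, but the loop itself — model, predict, decide, act — is the same.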
The first is Marcus Hutter's universal and optimal AIXI agent model. Because the explanation of these formalisms is very technical, I will opt for quoting a whole paragraph instead of trying to paraphrase it in my own words. This will hopefully give the reader a better understanding of how it might work:
The theory, coined UAI (Universal Artificial Intelligence), developed in the last decade and explained in Hutter says: All you need is Ockham, Epicurus, Turing, Bayes, Solomonoff, Kolmogorov, Bellman: Sequential decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental probability distribution is known. If the environment is unknown, Bayesians replace the true distribution by a weighted mixture of distributions from some (hypothesis) class. Using the large class of all (semi)measures that are (semi)computable on a Turing machine bears in mind Epicurus, who teaches not to discard any (consistent) hypothesis. In order not to ignore Ockham, who would select the simplest hypothesis, Solomonoff defined a universal prior that assigns high/low prior weight to simple/complex environments, where Kolmogorov quantifies complexity. Their unification constitutes the theory of UAI and resulted in AIXI (Hutter, 2005). (Muehlhauser and Salamon, 2012, p. 24)
It must be noted that AIXI is uncomputable. However, there are computationally tractable approximations that have already yielded some interesting results. One such approximation (MC-AIXI-CTW) has accomplished the feat of learning to play TicTacToe, Kuhn Poker and Pacman from scratch (Veness et al., 2011). The next goal would be to run some tests in virtual worlds and, finally, tests on real-world problems (Muehlhauser and Salamon, 2012).
The second formalism was developed by Jürgen Schmidhuber. He named his program after the famous mathematician Kurt Gödel, by whose theories he was inspired; hence the 'Gödel machine'. For the same reasons as before I will use a quote, as this is simply more comprehensible:
The Gödel machine already is a universal AI that is at least theoretically optimal in a certain sense. It may interact with some initially unknown, partially observable environment to maximize future expected utility or reward by solving arbitrary user-defined computational tasks. Its initial algorithm is not hardwired; it can completely rewrite itself without essential limits apart from the limits of computability, provided a proof searcher embedded within the initial algorithm can first prove that the rewrite is useful, according to the formalized utility function taking into account the limited computational resources. Self-rewrites may modify/improve the proof searcher itself, and can be shown to be globally optimal, relative to Gödel's well-known fundamental restrictions of provability (Gödel, 1931). (Schmidhuber, 2012, p. 175)
This goes to show that research is being done towards the development of AGI and that it appears to be beginning to bear fruit. Muehlhauser and Salamon conclude from this that there is a significant probability that AGI will be created this century (Muehlhauser and Salamon, 2012). They do nuance their claim by saying that it is not a scientific claim, albeit a reasonable one.
The preceding sections argue for AGI. But even if we were to accomplish such a feat, this does not necessarily imply that a singularity will follow. It could very well be that the AGI we have created is not capable of self-improving. However, this seems unlikely according to Muehlhauser and Salamon. Narrow AI is already able to outsmart us in some niches, like chess and memory size; why wouldn't AI be able to outsmart us in general intelligence as well? There are some very good reasons why AGI has several advantages over biological minds when it comes to creating more intelligent beings. Below I provide the list of advantages that Muehlhauser and Salamon consider substantive:
1. Increased computational resources
The human brain is constrained by evolution to a specific size and a specific number of neurons that can't really be altered. If we compare this with machine intelligence, it becomes obvious that we could scale the latter up much more. Imagine a brain the size of a warehouse, say Muehlhauser and Salamon. The point is that when working with non-biological materials we have much more flexibility (Muehlhauser and Salamon, 2012).
2. Communication speed
The speed of our brain is fixed by biology but the speed of software minds can be upgraded. We just have to move the software mind to better hardware, thus enabling it to process information faster (Muehlhauser and Salamon, 2012). Hardware can essentially compensate for inadequate software.
3. Increased serial depth
The best way to explain this is by directly quoting Muehlhauser and Salamon:
Due to neurons' slow firing speed, the human brain relies on massive parallelization and is incapable of rapidly performing any computation that requires more than about 100 sequential operations. Perhaps there are cognitive tasks that could be performed more efficiently and precisely if the brain's ability to support parallelizable pattern-matching algorithms were supplemented by support for longer sequential processes. (Muehlhauser and Salamon, 2012, p. 26)
4. Duplicability
What is meant by this is that it is very easy to create more software minds, we can simply copy the software of the original one. The only way in which we can create more human brains at the moment is by having children. This is a very labor-intensive thing to do compared to copying software. In no time we could provide every piece of suitable hardware with a software mind and perhaps even surpass the human population (Muehlhauser and Salamon, 2012).
5. Editability
This basically boils down to the fact that it is way easier to edit the software of a software mind, than it is to edit the human mind. We could perhaps do this by means of genetic engineering but this process is a lot harder than just rewriting a piece of code.
6. Goal coordination
Imagine a set of AGI copies that all work together on one task because they can easily parallelize it. Instead of one human worker spending 10 hours on a task, software minds could, if the task is parallelizable, split the work into ten chunks of one hour each, thus increasing the speed at which things get done.
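The arithmetic behind this advantage is simple. A small sketch, with task sizes and worker counts taken from the example above (the optional serial fraction is an added assumption, in the spirit of Amdahl's law, for tasks that don't parallelize perfectly):

```python
def wall_clock_hours(total_hours, workers, parallel_fraction=1.0):
    """Estimate finishing time when a `parallel_fraction` of the work
    can be split among `workers`; the rest must stay serial."""
    serial = total_hours * (1 - parallel_fraction)
    parallel_part = total_hours * parallel_fraction / workers
    return serial + parallel_part

one_worker = wall_clock_hours(10, 1)    # the single human: 10 hours
ten_copies = wall_clock_hours(10, 10)   # ten coordinated AGI copies: 1 hour
```

The `parallel_fraction` parameter also shows the limit of the advantage: a task that is only half parallelizable cannot be sped up tenfold no matter how many copies cooperate.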
7. Improved rationality
We humans think we are a rational species, but in fact we tend to behave very irrationally and have lots of biases (Toplak et al., 2011). Machine intelligences would not have to deal with these problems if we design them right. All of these advantages make it likely that if we create AGI, superintelligence might follow (Muehlhauser and Salamon, 2012).
Chapter 3. Refuting the singularity
"Refuting the singularity" shouldn't be taken too literally, because it is very hard to refute the possibility of something happening in the future. This doesn't mean we should remain silent, or that we can't say anything reasonable. The following chapter will mostly be about giving reasons why some think it isn't very likely that something like the singularity will happen.
3.1 Speed bumps
While the arguments of Muehlhauser and Salamon are mostly pro singularity, they have also summarized some important speed bumps that could happen. Those speed bumps could delay the process of advancing AI:
1. An end to Moore’s law
Thanks to the exponential rate at which information technologies have been progressing, we have been quickly advancing the whole field of technology. Some papers claim that the prospects for the continuation of Moore's law aren't very bright (Lundstrom, 2003; Mack, 2011). If it were to come to an end because of, for example, physical constraints, we can imagine that technical progress would also slow down, thus delaying a potential singularity (Muehlhauser and Salamon, 2012).
2. Depletion of low-hanging fruit
By depletion of low-hanging fruit is meant that when a new research field sees the light of day, the first advances in that field are probably going to be easier than later advances, mainly due to increasing complexity. If it were to become apparent that AI is one of those fields, progress could be delayed (Muehlhauser and Salamon, 2012). This topic is explored more fully later in this paper.
3. Societal collapse
If we, for any number of possible reasons, are unable to continue living in flourishing societies, progress could be forestalled (Posner, 2004). Natural disasters could happen and lay ruin to our civilization. It doesn't need much explanation to understand that this could be a serious threat to a singularity ever occurring (Muehlhauser and Salamon, 2012).
4. Disinclination
If people were purposely trying to prevent AI advancement, this could also be a major problem for potential advances (Chalmers, 2010). This could be because of public fears, or some political agenda interfering. A perfect example is the attack on the potato test fields in Wetteren (Belgium), directed against GMO technology: because of fears, activists tried to sabotage such experiments. The same could happen to AI technology.
These speed bumps seem like very reasonable potential problems. Especially societal collapse or disinclination could mean serious trouble for the singularity. In the next section, the depletion of low-hanging fruit is investigated more thoroughly.
3.2 The slowdown hypothesis
Alessio Plebe and Pietro Perconti present a case for something they call the "slowdown hypothesis" (Plebe and Perconti, 2012). They do not think the philosophical arguments against AGI are very compelling; however, that is not their thesis. The case they want to make is that progress in AI is going to slow down because, as the field advances, it will become clear that things are way more complex than we originally thought them to be. Plebe and Perconti state their aims very clearly at the beginning of their paper:
On the whole, we will argue that the slowdown effect is due both to reasons that are internal to the logic of scientific discovery, and to the changes in the expectations held in regard to a much idealized subject of inquiry: "intelligence". (Plebe and Perconti, 2012, p. 350)
To argue for their "slowdown hypothesis" they make use of a mathematical formalization, which I will not present in this paper. This is for reasons of complexity, and because the formalization doesn't add much understanding to the overall thesis they are trying to corroborate. The quintessence of their reasoning is that when investigating a field based on the notion of human intelligence, we stumble upon discoveries that often spawn whole new fields of investigation (Plebe and Perconti, 2012). Because of the complexity of what it is to be intelligent, we increasingly learn that we have only touched the tip of the iceberg. For example, a process that was first thought to be atomic might turn out to consist of two sub-processes that each deserve their own branch of investigation.
As a counterargument one can argue that the first discoveries within a field are more important than the later ones (Plebe and Perconti, 2012). This is because the later, smaller discoveries, which fill in the mesh, contribute less to intelligence than the rigid structure of the mesh itself, which is the foundation of the concept of "intelligence". This is a solid argument and should be taken into account; however, it doesn't refute the slowdown hypothesis.
According to Plebe and Perconti the singularity hypothesis requires exponential growth of computational and design capacities, while they think even indefinite linear growth is questionable. They take this as an a fortiori argument supporting the slowdown hypothesis (Plebe and Perconti, 2012).
The next finding of Plebe and Perconti is something they call the "zooming in and zooming out effect". In scientific theories, some aspects are often idealized. For example, Galileo made advances in mechanics by imagining a perfectly round and smooth ball rolling down a perfectly flat surface. The advance is due to the idealization of some aspects of the theory, in this case a perfectly smooth ball and a perfectly flat surface, which don't actually exist in the real world. It basically boils down to the fact that to explain a certain phenomenon, one sometimes has to deliberately ignore some parts of the real world (Plebe and Perconti, 2012). This is the same process of building models of the external world in the mind.
They go on to remark that in the history of science this idealization has been going on for a long time and changes its scope continuously; hence the name "zooming in and zooming out effect". This effect should be taken into account when thinking about intelligence. Our definition of intelligence has changed throughout the ages and certain aspects have been idealized. The singularity hypothesis is, for example, based upon the assumption that AI findings are cumulative:
“If the amount of features we are interested in grows in a remarkable way, the cumulative effect vanishes. In fact the growth of knowledge involves increases in a horizontal direction rather than in a vertical cumulative one.” (Plebe and Perconti, 2012, p. 356). Over the last couple of decades our concept of intelligence has changed tremendously. Intelligence can now be seen more as:
“A multifaceted cognitive process, with a proliferation of proposals of new kinds of intelligence, from emotional to musical, from spatial to social.” (Plebe and Perconti, 2012, p. 356). Because of this, the number of aspects that have to be taken into account keeps growing. Plebe and Perconti conclude from this that the pathway of scientific discovery points towards a slowdown rather than towards a singularity.
These are possible speed bumps; however, this does not mean that such a thing as the singularity cannot happen. Plebe and Perconti make some very viable arguments, and we should conclude that the path to a singularity will be full of potential obstacles. This might mean that the current estimates of some futurologists are overly optimistic. Kurzweil is one such futurologist whom some people consider too optimistic: he predicts that we will achieve AGI by 2030 and that a singularity will probably occur somewhere close to 2045 (Kurzweil, 2005). Whether or not this will be the case, only the future can determine. There are, however, people who have argued against the possibility of a singularity taking place at all. We will take a look at them in the following section.
3.3 Why the singularity cannot happen
Theodore Modis argues against a singularity characterized by exponential intelligence growth on the grounds that no truly exponential pattern exists in nature: what seems exponential reveals itself in the end to be following an S-curve (Modis, 2012). To make clear what he means, I provide a graph which demonstrates an exponential curve and an S-curve.
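The two shapes can also be generated numerically. The following sketch is my own illustration, not taken from Modis; the growth rate and the ceiling of the S-curve are arbitrary assumptions. It shows that a logistic S-curve tracks an exponential almost exactly early on and only later flattens out:

```python
import math

def exponential(t, r=0.5):
    """Unbounded exponential growth."""
    return math.exp(r * t)

def logistic(t, r=0.5, cap=1000.0):
    """S-curve: starts out looking exponential, then saturates at `cap`."""
    return cap / (1.0 + (cap - 1.0) * math.exp(-r * t))

# Early on the two curves track each other; later they diverge sharply.
for t in (0, 5, 10, 15, 20):
    print(f"t={t:2d}  exponential={exponential(t):9.1f}  s-curve={logistic(t):7.1f}")
```

At t = 5 the two values differ by about one percent; by t = 20 the exponential has passed 22,000 while the S-curve has levelled off just below its ceiling of 1,000. This is exactly why, on Modis’s view, an S-curve can masquerade as an exponential for a long time.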
Modis believes this to be a fundamental cosmic law that dictates how progress evolves. To make his case he uses several examples of patterns that at first seemed exponential but later behaved like S-curves (Modis, 2012). I will provide two of his examples to show what he means. The first is the much-referenced Moore’s law: the number of transistors on a chip has doubled roughly every two years since the 1970s, but it is generally accepted that this exponential pattern will eventually cease.
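The doubling claim is easy to make concrete. A minimal sketch of a strict two-year doubling, taking the 1971 Intel 4004 and its roughly 2,300 transistors as the baseline (my own illustration, not a calculation from Modis):

```python
def transistors(year, base_year=1971, base_count=2300):
    """Transistor count projected under a strict two-year doubling."""
    return base_count * 2 ** ((year - base_year) / 2)

# Forty years is twenty doublings: 2300 * 2**20, roughly 2.4 billion,
# which is indeed the order of magnitude of chips around 2011.
print(f"{transistors(2011):,.0f}")
```

The striking feature of any such compound process is that each period adds as much as all previous periods combined, which is why its eventual flattening is so hard to foresee from inside the curve.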
The next example is US oil production. During the first hundred years, oil production followed an exponential pattern. Later it became clear that this pattern could not continue, and it flattened out as the graph shows.
From these examples Modis seems to conclude that exponential patterns have no basis in the real world and that this a priori excludes any singularity from happening (Modis, 2012). However, it is unclear to me that what Modis is trying to establish really is a reasonable counterargument. The examples he uses rely on the depletion of certain material resources, or on physical properties that limit what can be achieved. It is not clear that the notion of a singularity or intelligence explosion relies on such things. Intelligence is not a material substance and is therefore not limited like the amount of oil; nor does it seem very likely that intelligence is bounded by comparable physical constraints.
In fact, whether the advancement in intelligence later reveals itself to be an S-curve is not very important, because this does not rule out a very substantial increase in intelligence compared to normal human intelligence, and that is what the singularity is all about. It only has to be disruptive in the sense that big changes to our everyday life follow from the event. So to me Modis’s counterargument is not very convincing. We should, however, keep in mind that exponential patterns probably will not go on forever, but that this does not undermine the whole singularity thesis.
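The point can be illustrated with a toy model (my own sketch; the gain and ceiling parameters are arbitrary assumptions, not drawn from any of the cited authors). Even if recursive self-improvement saturates, producing an S-curve exactly as Modis predicts, the plateau can still sit far above the starting level:

```python
def self_improvement(start=1.0, ceiling=100.0, gain=0.5, steps=40):
    """Toy recursive self-improvement with diminishing returns: each
    generation improves itself in proportion to the distance left to a
    hypothetical ceiling (logistic-style saturation)."""
    levels = [start]
    for _ in range(steps):
        i = levels[-1]
        levels.append(i + gain * i * (1.0 - i / ceiling))
    return levels

traj = self_improvement()
# Growth flattens into an S-curve, yet the final level is ~100x the start:
print(f"start={traj[0]:.1f}  final={traj[-1]:.1f}")
```

The trajectory is an S-curve, so Modis’s observation holds; but the end state is still a hundredfold improvement over the starting point, which is disruptive by any everyday standard. The argument above only needs this weaker claim.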
Chapter 4. Conclusion
In chapter one I tried to explain what is meant by the “technological singularity”. A satisfying answer is that it denotes the point at which human-level AI is developed that is capable of self-improvement, thus rapidly becoming more and more intelligent, up to the point where we cannot even fathom how the AI thinks. We have learned that this can be achieved in two ways: one is by creating an artificial superintelligent agent, the other by becoming that superintelligent agent ourselves.
In chapter two I presented two possible ways of creating AGI. The first is whole brain emulation, which essentially mimics the way the brain works. The second is writing a piece of software that is capable of self-improvement. The next section enumerated possible accelerators for creating AGI. Together, these accelerators make it quite plausible that something like a singularity could come into existence.
After this, two quite technical formalisms for possible AGI designs were presented. This hopefully gave the reader an understanding of what such an AGI design could look like. In section three I summarized reasons why an AGI has advantages over humans in creating superhuman intelligence.
In chapter three I aspired to present some possible objections to this whole notion, or at least some potentially troubling considerations. The first section enumerates possible things that could slow AI progress down. In the next section I took a look at Plebe and Perconti’s “slowdown hypothesis”, which essentially says that most futurists are too optimistic because they forget that as research in an area progresses, it tends to become more complex, thus slowing progress down. Not only does research slow down because of complexity; it also slows down because the notion of what is being investigated itself changes. In the next section Theodore Modis claims that an exponential pattern of recursive self-improvement cannot happen, because exponential patterns simply do not exist in nature: they always reveal themselves to be S-curves. However, I have concluded that the examples Modis gives are not very convincing.
Finally, I can conclude from my research that the singularity is not just a bunch of fiction and that we should in all seriousness ask ourselves some questions. If this recursive self-improvement theory, a.k.a. the singularity, is a feasible scenario, ought we not take action?
This action could consist in figuring out whether such an outcome would be beneficial or disadvantageous to us. If it would be disadvantageous, how can we make sure that this will not be the case? Some people, like the researchers at MIRI (the Machine Intelligence Research Institute), are already taking this potential threat seriously and are working on ways to make AI safe. Whether or not such a thing as the singularity will eventually occur, it is a good thought-exercise that stimulates us to advance our collective web of knowledge.
References
Arel, I. (2012), The threat of a reward-driven adversarial artificial general intelligence, in ‘Singularity Hypotheses’, Springer, pp. 43–60.
Bostrom, N. (2007), ‘Technological revolutions: ethics and policy in the dark’, Nanoscale: Issues and perspectives for the nano century pp. 129–152.
Bostrom, N. and Sandberg, A. (2009), ‘Cognitive enhancement: Methods, ethics, regulatory challenges’, Science and Engineering Ethics 15(3), 311–341.
Brynjolfsson, E. and McAfee, A. (2011), ‘Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy’.
Chalmers, D. (2010), ‘The singularity: A philosophical analysis’, Journal of Consciousness Studies 17(9-10), 7–65.
Eden, A. H., Steinhart, E., Pearce, D. and Moor, J. H. (2012), Singularity hypotheses: An overview, in ‘Singularity Hypotheses’, Springer, pp. 1–12.
Gödel, K. (1931), ‘Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I’, Monatshefte für Mathematik und Physik 38(1), 173–198.
Gubrud, M. A. (1997), Nanotechnology and international security, in ‘Fifth Foresight Conference on Molecular Nanotechnology, Palo Alto, CA, Nov’, pp. 5–8.
Hutter, M. (2005), Universal artificial intelligence, Springer.
Hutter, M. (2009), ‘Open problems in universal induction & intelligence’, Algorithms 2(3), 879–906.
Kurzweil, R. (2005), The singularity is near: When humans transcend biology, Penguin.
Loosemore, R. and Goertzel, B. (2012), Why an intelligence explosion is probable, in ‘Singularity Hypotheses’, Springer, pp. 83–98.
Lundstrom, M. (2003), ‘Moore’s law forever?’, Science pp. 210–212.
Mack, C. A. (2011), ‘Fifty years of Moore’s law’, IEEE Transactions on Semiconductor Manufacturing 24(2), 202–207.
Modis, T. (2012), Why the singularity cannot happen, in ‘Singularity Hypotheses’, Springer, pp. 311–346.
Muehlhauser, L. and Salamon, A. (2012), Intelligence explosion: Evidence and import, in ‘Singularity Hypotheses’, Springer, pp. 15–42.
Palyanov, A., Khayrulin, S., Larson, S. D. and Dibert, A. (2012), ‘Towards a virtual C. elegans: A framework for simulation and visualization of the neuromuscular system in a 3D physical environment’, In Silico Biology 11(3), 137–147.
Plebe, A. and Perconti, P. (2012), The slowdown hypothesis, in ‘Singularity Hypotheses’, Springer, pp. 349–365.
Posner, R. A. (2004), Catastrophe: risk and response, Oxford University Press.
Richards, M. A. and Shaw, G. A. (2004), ‘Chips, architectures and algorithms: Reflections on the exponential growth of digital signal processing capability’, Unpublished manuscript, Jan 28.
Schmidhuber, J. (2012), ‘Philosophers & futurists, catch up! response to the singularity’, Journal of Consciousness Studies 19(1-2), 1–2.
Toplak, M. E., West, R. F. and Stanovich, K. E. (2011), ‘The cognitive reflection test as a predictor of performance on heuristics-and-biases tasks’, Memory & Cognition 39(7), 1275–1289.
Van der Velde, F. (2010), ‘Where artificial intelligence and neuroscience meet: The search for grounded architectures of cognition’, Advances in Artificial Intelligence 2010, 5.
Veness, J., Ng, K. S., Hutter, M., Uther, W. and Silver, D. (2011), ‘A monte-carlo aixi approximation’, Journal of Artificial Intelligence Research 40(1), 95–142.

About basicrulesoflife

Year 1935. Interests: Contemporary society problems, quality of life, happiness, understanding and changing ourselves - everything based on scientific evidence. Artificial Intelligence Foundation Latvia, Editor.

One Response to The singularity: fact or fiction?

  1. “Intelligence measures an agent’s capacity for efficient cross-domain optimization of the world according to the agent’s preferences.” (Muehlhauser and Salamon, 2012, p. 17) This is an incomplete definition. According to present-day knowledge, intelligence is a program which, interacting with its environment, builds models of that environment in a language of various symbols and, by creating new information, uses those models to predict the environment’s reactions and to plan its own actions.
    “The third premise relies on the fact that if we emulate something approximately 1 on 1, there is no reason to believe that the relevant characteristics wouldn’t be emulated as well.” It looks more complicated than that: part of intelligence is genetically built into the structure of the human body. These are tendencies, abilities, talents and … deviations. They will have to be built into the structure. For example, the ability to unconsciously form thousands of models of interaction with the environment. Today we do not know how to do that. To create an intelligent being with social skills that we can understand and accept, the robot will have to be trained in a social environment (by living together with people).
    “The only way in which we can create more human brains at the moment is by having children. This is a very labor-intensive thing to do compared to copying software. In no time we could provide every piece of suitable hardware with a software mind and perhaps even surpass the human population (Muehlhauser and Salamon, 2012).” There are two problems here. To copy a human mind, we would have to be able to read out the complete structure of the biological brain and write the read-out data into the robot’s structure. Today we do not know how to do that. The second problem: building a human-like robot will require enormous resources, many times exceeding those of the biological reproduction we know.
    “We humans think we are a rational species. But in fact we tend to behave very irrational and have lots of biases (Toplak et al., 2011). Machine intelligences do not have to deal with these problems if we design them right. All of these advantages make it likely that if we create AGI, super intelligence might follow (Muehlhauser and Salamon, 2012).” It is true that the irrationality of our brains causes us many problems: we often, to our own detriment, unconsciously obey the impulses of the limbic brain. But, on the other hand, it is precisely the fulfilment of emotional needs that gives us, Homo sapiens, the greatest satisfaction, fulfilment and meaning in life; indeed, the greater part of our quality of life. We must admit that nobody, or almost nobody, will want to live in some AI environment (in a robot body) without emotions and without the fulfilment that comes from satisfying needs. Therefore the human of the future will have to be designed so that these two enormous, contradictory needs, the dominance of the logical mind and of deep emotions, are in optimal balance. How to design and achieve this is not known today; no publications on this question have yet been seen.
    “In scientific theories often some aspects are idealized. For example, Galileo who imagined a perfectly round and smooth ball rolling off a perfectly flat surface, made advances in mechanics. This is due to the idealization of some aspects of the theory, in this case a perfectly smooth and perfectly flat surface, that in the real world doesn’t actually exist. It basically boils down to the fact that to explain a certain phenomenon, one has to sometimes deliberately ignore some parts of the real world (Plebe and Perconti, 2012).” This is the same building of models of the external world in consciousness.
    “Finally, I can conclude from my research that it isn’t just a bunch of fiction and that we should in all seriousness ask ourselves some questions. If this recursive self-improving theory a.k.a. the singularity is a feasible scenario, ought we not take action?”
    “This action could consist in figuring out whether or not such an outcome would be beneficial to us or disadvantageous.” Of course, we know that our brains are very imperfect: over thousands of generations we have not learned ourselves, and cannot teach our children, how to live a happy, fulfilled life. Dozens of biologists and evolutionary theorists write that our brains are suited to the life of hunter-gatherer tribes, but not to the conditions of modern life and to the problems we ourselves have created, whose unsuccessful solution, or failure to address them at all, threatens our civilization with destruction. Of course we need a higher intelligence, one that would secure for us not only a higher quality of life and the solution of today’s global problems, but also (in the more distant future) an unlimited lifespan for every individual, ‘eternal’ life. But above all these speculations stands a simple question: will we manage not to destroy ourselves before we reach that higher intelligence? I.V.

