IBM supercomputer used to simulate a typical human brain

November 19, 2012


Using the world’s fastest supercomputer and a new scalable, ultra-low power computer architecture, IBM has simulated 530 billion neurons and 100 trillion synapses – matching the numbers of the human brain – in an important step toward creating a true artificial brain. 

Cognitive computing.

The human brain, arguably the most complex object in the known universe, is a truly remarkable power-saver: it can simultaneously gather thousands of sensory inputs, interpret them in real time as a whole and react appropriately, abstracting, learning, planning and inventing, all on a strict power budget of about 20 W. A computer of comparable complexity that uses current technology, according to IBM’s own estimates, would drain about 100 MW of power.
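The gap between those two figures is worth making explicit; a quick back-of-the-envelope check using the numbers above:

```python
brain_power_w = 20            # human brain's power budget, roughly 20 W
computer_power_w = 100e6      # IBM's estimate for comparable hardware: 100 MW

ratio = computer_power_w / brain_power_w
print(f"Conventional hardware would need ~{ratio:,.0f}x the brain's power budget")
# 100 MW / 20 W = 5,000,000
```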

Clearly, such power consumption would be highly impractical. The problem, then, calls for an entirely new approach. IBM’s answer is cognitive computing, a newly coined discipline that combines the latest discoveries in the fields of neuroscience, nanotechnology and supercomputing.

Neuroscience has taught us that the brain consumes little power mainly because it is “event-driven.” In simple terms, this means that individual neurons, synapses and axons consume power only when they are activated – e.g. by an external sensory input or by other neurons – and consume no power otherwise. This, however, is not the case with today’s computers, which, in comparison, are huge power wasters.

The IBM engineers have leveraged this knowledge to build a novel computer architecture, and then used it to simulate a number of neurons and synapses comparable to what would be found in a typical human brain. The result is not a biologically or functionally accurate simulation of the human brain – it cannot sense, conceptualize, or “think” in any traditional sense of the word – but it is still a crucial step toward the creation of a machine that, one day, might do just that.

How it works.

The TrueNorth architecture was developed as part of DARPA’s SyNAPSE program (Image: IBM)


The researchers’ starting point was CoCoMac, a comprehensive but incomplete database detailing the wiring of a macaque’s brain. After four years of painstaking work patching the database, the team members were able to obtain a workable dataset which they used to inspire the layout of their artificial brain.

Inside the system, the two main components are neurons and synapses.

Neurons are the computing centers: each neuron can receive input signals from up to ten thousand neighboring neurons, elaborate the data, and then fire an output signal. Approximately 80 percent of neurons are excitatory – meaning that, if they fire a signal, they also tend to excite neighboring neurons. The remaining 20 percent of neurons are inhibitory – when they fire a signal, they also tend to inhibit neighboring neurons.
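The excitatory/inhibitory split can be captured in a toy spiking-neuron model. This is a simplified illustrative sketch, not IBM's implementation; all class and parameter names are hypothetical:

```python
import random

class Neuron:
    """Toy spiking neuron: integrates weighted inputs, fires past a threshold."""

    def __init__(self, excitatory=True, threshold=1.0):
        self.excitatory = excitatory   # ~80% of neurons excite their targets
        self.threshold = threshold
        self.potential = 0.0           # accumulated input since the last spike

    def receive(self, weight, from_excitatory):
        # Excitatory inputs raise the potential; inhibitory inputs lower it
        self.potential += weight if from_excitatory else -weight

    def step(self):
        # Fire (and reset) once the accumulated input crosses the threshold
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False

# A population with the article's 80/20 excitatory/inhibitory split
population = [Neuron(excitatory=(random.random() < 0.8)) for _ in range(1000)]
```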

Synapses link different neurons, and it is here that memory and learning actually take place. Each synapse has an associated “weight value” that changes based on the number of neuron-fired signals that travel through it. When a large number of signals travel through the same synapse, its weight value increases and the virtual brain begins to learn by association.
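The weight-strengthening rule described above resembles Hebbian learning ("neurons that fire together wire together"). A minimal sketch, assuming a simple additive update with a cap; the names and constants here are illustrative, not taken from IBM's design:

```python
class Synapse:
    """Connects a source neuron to a target; its weight grows with traffic."""

    def __init__(self, weight=0.1, learning_rate=0.05, max_weight=1.0):
        self.weight = weight
        self.learning_rate = learning_rate
        self.max_weight = max_weight

    def transmit(self):
        # Each spike crossing the synapse strengthens it slightly, so
        # frequently used pathways come to dominate: learning by association
        self.weight = min(self.weight + self.learning_rate, self.max_weight)
        return self.weight

s = Synapse()
for _ in range(5):
    s.transmit()
print(round(s.weight, 2))  # 0.1 + 5 * 0.05 = 0.35
```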

The algorithm periodically checks whether each neuron is firing a signal: if it is, the adjacent synapses will be notified, and they will update their weight values and interact with other neurons accordingly. The crucial aspect here is that the algorithm will only expend CPU time on the very small fraction of synapses that actually need to be fired, rather than on all of them – saving massive amounts of time and energy.
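The event-driven scheme above can be sketched as a loop that touches only the synapses of neurons that actually fired. Everything here is an illustrative reconstruction of the idea, not IBM's algorithm:

```python
def tick(firing, fanout, potentials, weights, threshold=1.0):
    """One event-driven step: touch only synapses of neurons that fired.

    firing     -- set of neuron ids that spiked this step
    fanout     -- dict: neuron id -> list of downstream neuron ids
    potentials -- dict: neuron id -> accumulated input
    weights    -- dict: (src, dst) -> synaptic weight (grows with use)
    """
    next_firing = set()
    for src in firing:                      # silent neurons cost nothing
        for dst in fanout.get(src, []):
            weights[(src, dst)] += 0.01     # traffic strengthens the synapse
            potentials[dst] += weights[(src, dst)]
            if potentials[dst] >= threshold:
                potentials[dst] = 0.0       # reset after firing
                next_firing.add(dst)
    return next_firing
```

Since only the (typically tiny) `firing` set is iterated each step, cost scales with activity rather than with the total number of synapses, which is the power and time saving the article describes.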

The beauty of this new computer architecture is that – just like an organic brain – it is event-driven, distributed, highly power-conscious, and bypasses some of the well-known limitations intrinsic to the way standard computers are designed.

IBM’s end goal is to eventually build a machine with human-brain complexity in a comparably small package, and with a power consumption approaching 1 kW. For the time being, however, this milestone has been accomplished by the not so portable (nor particularly power-conscious) Blue Gene/Q Sequoia supercomputer, using 1,572,864 processor cores, 1.5 PB (1.5 million GB) of memory, and 6,291,456 threads.

Neurosynaptic core (Image: IBM)

In an effort to dramatically reduce power consumption, IBM is also building its own custom chips – so-called “neurosynaptic cores” – that harness the full potential of the new computer architecture and will eventually replace the supercomputer for these simulations.

Making up each core are “neurons,” “synapses” and “axons.” Despite their names, the design of these components wasn’t biologically inspired, but was rather highly optimized for the sake of minimizing manufacturing costs and maximizing performance.



The new computer architecture could be used to better assist patient diagnosis (Image: IBM)


Because of the extreme parallelism built into this architecture, the chips built using this technology could be well-suited to solving any problem in which very large amounts of input data need to be fed into a machine – not unlike a standard neural network, but with massively improved performance and power consumption.

The experiment allowed IBM to better understand the limitations of the standard computer architecture, including the trade-offs between memory, computation and communication at very large scale. Looking forward, the team also gathered know-how that will inform the design of even better low-power, massively parallel chips with improved performance.

Future applications could include dramatically improved weather forecasts, stock market predictions, intelligent patient monitoring systems that can perform diagnoses in real time, and optical character recognition (OCR) and speech recognition software matching human performance, to name just a few.

As for recreating the actual behavior of a human brain, we’re still many, many years away by all accounts. But at least, it seems, progress is being made.

The video below is a short introduction to the cognitive computing paradigm by IBM’s Dharmendra Modha.

Sources: IBM (PDF), Dharmendra S. Modha, Design Automation Conference


From the article we see that today brain structures serve as the model for building artificial intelligence, while recognizing that miniaturization still lies ahead, that the power dissipated across the network must be cut a thousandfold, and that much else remains to be solved and accomplished: “As for recreating the actual behavior of a human brain, we’re still many, many years away by all accounts. But at least, it seems, progress is being made.” The ‘How it works’ section suggests that we may also have to imitate the workings of the two brain hemispheres: the left builds a detailed picture and analysis, the right captures the overall relationships. More on this question in a review of Iain McGilchrist’s book ‘The Master and His Emissary: The Divided Brain and the Making of the Western World’:

 “Back in junior high school health class, we were told that the brain has two different hemispheres — the left and the right. The left brain, the textbook stated, is responsible for language, math, and science, logic and rationality. The right brain was the artistic one, the creative half of the brain. But that’s not quite true. Neuroimaging and experiments on patients with split brains and brain damage to only one hemisphere have allowed a much more detailed, and fascinating, accounting of how the two parts interact with the world, and how they combine to become a unified consciousness (and, in some cases of mental disorders, how they occasionally don’t). Iain McGilchrist has combined scientific research with cultural history in his new book The Master and His Emissary: The Divided Brain and the Making of the Western World to examine how the evolution of the brain influenced our society, and how the current make up of the brain shapes art, politics, and science, as well as the rise of mental illness in our time — in particular schizophrenia, anorexia, and autism.”

“That eighth-grade level science textbook was kind of correct. While the left brain does contain much of the language center of the brain, a person cannot understand context without the right hemisphere. Metaphor, irony, and humor are all processed by the right brain. When engaging in face-to-face conversation, it processes facial expressions to add depth to the meaning. Most activities, from painting to mathematics, are processed by both the left and the right hemispheres of the brain. The differences between them have to be defined in a different way. The left brain brings precision, focus, abstraction, rationality, and fixity. The right brain has a more open view of the world. It provides context, whether finding humor in a punch line or bringing a sense of history to a question posed to it. In a healthy, functioning brain, the right hemisphere sends information about a situation to the left hemisphere, which “unpacks” the information using its tools to find clarity, and then it vocalizes the response, either in thought or expression.”


About basicrulesoflife

Year 1935. Interests: Contemporary society problems, quality of life, happiness, understanding and changing ourselves - everything based on scientific evidence. Artificial Intelligence Foundation Latvia, Editor.
This entry was posted in Artificial Intelligence.
