By Tim Cole
As we enter the age of Big Data, computers as we know them are simply no longer capable of crunching the numbers fast enough. This is due to the limitations imposed by what is known as the “von Neumann model” or “Princeton architecture”. First described by John von Neumann in 1945, this design separates the processor from the memory that holds both program and data; the data are fed to the processor in sequence, one item at a time. This worked well enough as long as we were dealing with relatively small amounts of data, but it creates what computer scientists call the “von Neumann bottleneck” when called upon to handle enormous loads of information.
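To make the bottleneck concrete, here is a toy sketch of a von Neumann machine, assuming an invented three-instruction accumulator design (the opcodes and memory layout are illustrative, not any real processor). Program and data share one memory, so every instruction and every operand must cross the same bus, one transfer at a time:

```python
# Toy von Neumann machine: one shared memory holds both the program
# and the data, and each instruction fetch or operand fetch is a
# separate trip across the memory bus -- the "bottleneck".
# (Illustrative sketch only; the opcodes are invented.)

def run(memory):
    """Execute a tiny accumulator program stored in `memory`.
    Returns the final accumulator value and the bus-transfer count."""
    acc, pc, bus_transfers = 0, 0, 0
    while True:
        op, arg = memory[pc]      # fetch the instruction (one bus transfer)
        bus_transfers += 1
        pc += 1
        if op == "LOAD":
            acc = memory[arg]     # fetch the operand (another bus transfer)
            bus_transfers += 1
        elif op == "ADD":
            acc += memory[arg]    # operand fetch again crosses the same bus
            bus_transfers += 1
        elif op == "HALT":
            return acc, bus_transfers

# Instructions live at addresses 0-2, data at addresses 3-4.
memory = {0: ("LOAD", 3), 1: ("ADD", 4), 2: ("HALT", None), 3: 40, 4: 2}
result, transfers = run(memory)
print(result, transfers)  # 42 computed at the cost of 5 bus transfers
```

However fast the processor itself becomes, the transfer count grows with the amount of data, which is exactly why the architecture struggles at Big Data scale.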
Even more important: von Neumann machines (that is, virtually all computers in use today) use far too much energy in the process. Already, computers around the world consume more than 10 percent of all energy produced, and that share will rise, perhaps exponentially, if we try to do Big Data with conventional computers. So we desperately need to move on to a completely new generation of computers that operate like the human brain – which, incidentally, is one of the most energy-efficient computing devices imaginable.
Neural computers like IBM’s “Watson” are called “cognitive computers”. Watson is able to create links between widely separated and seemingly unrelated bits of information, much as the human brain does by forming synapses that crisscross our cortex and enable us to arrive at astonishing and unlikely perceptions and discoveries.
The good news is that work on cognitive computers is proceeding at a rapid pace, and not just in the IBM labs. I came across a helpful description of the difference between cognitive and “normal” computers in an article by Sue Feldman and Hadley Reynolds in KM Magazine: “The first wave of computing made numbers computable. The second wave has made text and rich media computable and accessible digitally. The next wave will make context computable.”