Flipping through some New Scientist issues from this past year, I was reminded of an article in their July 19 issue that brought together a discussion of the brain and mathematics, with particular emphasis on the effectiveness of the sometimes counter-intuitive notion of the infinity of the real numbers. The article, Know it all, by Michael Brooks, explores the viability of Alan Turing’s idea of the “oracle” – a computer that could decide undecidable problems. It highlights the work of Emmett Redd and Steven Younger of Missouri State University, who think that they see a path to the development of this “super-Turing” computer, one that would also bring new insight into how the brain works.
The limitations on even the most sophisticated computing tools are essentially a consequence of the limited power of logic. Mathematician Kurt Gödel’s incompleteness theorem shows that any consistent system of logical axioms rich enough to express arithmetic will always contain true statements it cannot prove. Turing made an analogous observation about a universal computer built on logic alone: such a computer will inevitably come up against ‘undecidable’ problems, regardless of the amount of processing power available. But Turing did imagine something else.
…An oracle as Turing envisaged it was essentially a black box whose unspecified contents would be able to solve undecidable problems. An “O-machine,” he proposed, would exploit whatever was in this black box to go beyond the bounds of conventional human logic – and so surpass the abilities of every computer ever built.
Brooks then tells us about a computer scientist working on neural networks – circuits designed to mimic the human brain. Hava Siegelmann wanted to prove the limits of neural networks, despite their great flexibility.
In a neural net, many simple processors are wired together so that the output of one can act as the input of others. These inputs are weighted to have more or less influence, and the idea is that the network “talks” to itself, using its outputs to alter its input weightings until it is performing tasks optimally – in effect, learning as it goes along just as the brain does.
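That feedback loop – outputs altering input weightings until the task is performed well – can be sketched in a few lines of Python. This is only a toy single-neuron illustration; the data, learning rate, and function names are my own inventions, not the networks described in the article:

```python
# A toy version of the feedback loop described above: a single linear
# "neuron" whose output error is fed back to adjust its own input
# weightings (a simple delta rule). Illustrative only.
def train(samples, targets, lr=0.1, epochs=200):
    weights = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            out = sum(w * xi for w, xi in zip(weights, x))  # weighted sum
            err = t - out  # the output "talks back" via this error
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
    return weights

# learn the rule y = 2a + b from three examples
w = train([(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)], [2.0, 1.0, 3.0])
print(w)  # approaches [2.0, 1.0]
```

Nothing here is "super-Turing," of course – it is just the ordinary learning-by-feedback idea that the rest of the story builds on.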
Siegelmann eventually observed an unexpected possibility. She showed that, in theory, if a network was weighted with the infinite, non-repeating digits in the decimal expansion of irrational numbers such as pi, it could transcend the limitations of a universal computer built on logic alone. And this relies, it seems, on the randomness generated by the irrational number’s digits.
While Siegelmann published her proof in 1995, it was not enthusiastically welcomed by fellow computer scientists.
…she soon lost interest too. “I believed it was mathematics only, and I wanted to do something practical,” she says. She turned down giving any more talks on super-Turing computation.
Ah, “mathematics only…,” she says.
Redd and Younger, aware of Siegelmann’s work, saw their own work headed in the same direction.
… In 2010, they were building neural networks using analogue inputs that, unlike the conventional digital code of 0 (current off) and 1 (current on), can take a whole range of values between fully off and fully on. There was more than a whiff of Siegelmann’s endless irrational numbers in there. “There is an infinite number of numbers between 0 and 1,” says Redd.
This infinity of numbers between 0 and 1 was one of the first things to intrigue me about mathematics. What are we looking at when we look at this infinity of numbers, whose size is the same as that of the infinity of the whole line?
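One standard way to see that the interval and the whole line are the same size is an explicit one-to-one pairing between them. A small sketch of my own, using the tangent function:

```python
import math

# An explicit one-to-one pairing between the open interval (0, 1) and
# the whole real line: f stretches the interval onto the line, and
# f_inv folds the entire line back into the interval.
def f(x):
    return math.tan(math.pi * (x - 0.5))

def f_inv(y):
    return math.atan(y) / math.pi + 0.5

print(f_inv(f(0.3)))  # recovers 0.3: the pairing loses nothing
print(f_inv(1e12))    # even enormous reals land strictly inside (0, 1)
```

Every real number corresponds to exactly one point of the interval and vice versa, which is precisely what it means for the two infinities to have the same size.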
In 2011 they approached Siegelmann, by then director of the Biologically Inspired Neural & Dynamical Systems lab at the University of Massachusetts in Amherst, to see if she might be interested in a collaboration. She said yes. As it happened, she had recently started thinking about the problem again, and was beginning to see how irrational-number weightings weren’t the only game in town. Anything that introduced a similar element of randomness or unpredictability might do the trick, too. “Having irrational numbers is only one way to get super-Turing power,” she says.
The route the trio chose was chaos. A chaotic system is one whose response is very sensitive to small changes in its initial conditions. Wire up an analogue neural net in the right way, and tiny gradations in its outputs can be used to create bigger changes at the inputs, which in turn feed back to cause bigger or smaller changes, and so on. In effect, the system becomes driven by an unpredictable, infinitely variable noise.
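The sensitivity Brooks describes is easy to demonstrate with the logistic map, a textbook chaotic system. This toy example is my own, not the trio’s network:

```python
# The logistic map x -> r*x*(1-x) with r = 4 is a textbook chaotic
# system: two trajectories whose starting points differ by one part in
# ten billion are driven completely apart within a few dozen steps.
def max_divergence(x0, eps=1e-10, r=4.0, steps=60):
    a, b, widest = x0, x0 + eps, 0.0
    for _ in range(steps):
        a = r * a * (1.0 - a)  # one trajectory
        b = r * b * (1.0 - b)  # its near-identical twin
        widest = max(widest, abs(a - b))
    return widest

print(max_divergence(0.2))  # an order-one gap grown from a 1e-10 difference
```

In an analogue network wired this way, those tiny, ever-amplified gradations are what play the role of Siegelmann’s infinite digit strings.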
The idea has met with some skepticism. Scott Aaronson, Professor of Electrical Engineering and Computer Science at MIT, argues that models involving infinities inevitably run into trouble.
People ignore the fact that the physical system cannot implement the idea with perfect precision.
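Aaronson’s point is easy to check on ordinary hardware: a double-precision float carries only about 16 significant decimal digits, so any weight meant to hold pi’s infinite expansion is truncated the moment it is stored. A small check of my own:

```python
import math
from decimal import Decimal

# A double-precision float carries only ~16 significant decimal digits,
# so "weighting a network with pi" on real hardware truncates the
# infinite expansion that the super-Turing argument relies on.
stored = Decimal(math.pi)  # the exact value the float actually holds
print(stored)              # agrees with pi only through its first 16 digits
# pi itself continues 3.14159265358979323846..., but the stored value
# departs from it around the 17th digit and is a finite rational number.
```

Whatever infinite precision the mathematics assumes, the physical register holds a finite, exactly representable rational.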
Jérémie Cabessa of the University of Lausanne, Switzerland co-authored a paper with Siegelmann, published in the International Journal of Neural Systems in September 2014, which supports the idea that “the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.” In Brooks’s article, however, he’s skeptical that such a machine is buildable.
Again, it’s not that the maths doesn’t work – it is just a moot point whether true randomness is something we can harness, or whether it even exists.
Brooks tells us that Turing often speculated about the connection between intrinsic randomness and creative intelligence.
This is not the first pairing of randomness and creativity that I’ve seen. Gregory Chaitin’s work relies heavily on randomness. Metabiology, the field he has introduced, investigates randomly evolving computer software as it relates to “randomly evolving natural software,” or DNA. And here, mathematical creativity is equated with biological creativity. Chaitin has also remarked (probably more than once) that he doesn’t believe that continuity really works for physical theories, a perspective echoed by Aaronson. Chaitin leans instead toward a discrete, digital worldview.
But I find it important to take note here of the fact that the infinities of mathematics, so often problematic within physical theories, have nonetheless very effectively aided our imagination. The continuity of the real numbers is largely characterized by the irrational numbers, and it took years of devoted effort for it to be firmly established in mathematics. In this discussion, the irrational number also opened the door to the effect of randomness in neural networks. Mathematical notions of continuity have been the mind’s way of bridging arithmetic and geometric ideas. These bridges allow conceptual structures to develop. The roots of these ideas are in our experiences of things like space, time and object, but they somehow give the intuition more room to grow. Just a few of the fruits of their development have brought the inaccessible subatomic and intergalactic worlds within reach. Even if the world turns out not to mirror this continuity, the work of Siegelmann, Redd and Younger suggests that the mind might.