When I first became interested in studying mathematics, an artist friend of mine expressed his disapproval by characterizing mathematicians as people who made bombs. Although I didn’t know very much mathematics at the time, I knew enough to know that he was wrong. But I was reminded today of one of the ways his mistake may have taken shape. George Dyson has written a soon-to-be-released book, Turing’s Cathedral: The Origins of the Digital Universe. John Naughton interviews Dyson for The Guardian and characterizes the book this way:
Turing’s Cathedral is much more than a chronicle of engineering progress: it includes fascinating digressions into the history and physics of nuclear weapons, the fundamentals of mathematical logic, the mathematical insights of Hobbes and Leibniz, the history of weather forecasting, Nils Barricelli’s pioneering work on artificial life and lots of other interesting stuff.
According to Naughton, Dyson
focuses on a small group of mathematicians and engineers working on the hydrogen bomb, led by John von Neumann at the Institute for Advanced Study (IAS) in Princeton, New Jersey (but not at Princeton University), who not only built one of the first computers to realize Turing’s vision of a universal machine, but – more importantly – defined the architectural principles of a general-purpose “stored program computer” on which all succeeding computers were based. Dyson’s argument, crudely summarized, is that the IAS machine should be regarded as the fons et origo of the modern world rather than the ENIAC or Colossus machines that preceded it.
We learn in the interview that the running of codes developed by von Neumann’s group for the IAS machine was delayed by hardware problems for a couple of years. Under pressure to run bomb calculations, they reconfigured an older machine to run their codes. This, Dyson says, “may have diminished their own prominence as pioneers.” But, he adds, there may also have been some reluctance to herald accomplishments associated with the development of the hydrogen bomb.
It seems Dyson also challenges the distinctions made between ‘pure’ and ‘applied’ mathematics, or between the abstract and the pragmatic. From the interview:
JN Another theme that comes over strongly relates to GH Hardy’s famous misconception about the “uselessness” of pure mathematics. You trace very clearly the progression from Hilbert to Gödel to Turing to von Neumann to the IAS machine. My guess is that nobody at the time could have supposed that arguments about the foundations of mathematics would ever have a practical outcome.
GD Yes! It is quite astonishing, for instance, that Turing, who was more or less an outcast, except among a small group of fellow logicians, during the two years he spent in Princeton, was recently voted the second-most influential alumnus of Princeton University (and this from a field going back to 1746!)
But, more consistent with my own concerns are some of the things I read in a piece of the same title that Dyson wrote for Edge in 2005 (much of which I would expect is now found in the book).
By breaking the distinction between numbers that mean things and numbers that do things, von Neumann unleashed the power of the stored-program computer, and our universe would never be the same. It was no coincidence that the chain reaction of addresses and instructions within the core of the computer resembled a chain reaction within the core of an atomic bomb.
It’s where von Neumann seemed to be going that gets my attention today. Again, from the Dyson piece, a rather long excerpt:
As organisms, we possess two outstanding repositories of information: the information conveyed by our genes, and the information stored in our brains. Both of these are based upon non-von-Neumann architectures, and it is no surprise that von Neumann became fascinated with these examples as he left his chairmanship of the AEC (where he had succeeded Lewis Strauss) and began to lay out the research agenda that cancer prevented him from following up. He considered the second example in his posthumously-published The Computer and the Brain.
“The message-system used in the nervous system… is of an essentially statistical character,” he explained. “In other words, what matters are not the precise positions of definite markers, digits, but the statistical characteristics of their occurrence… a radically different system of notation from the ones we are familiar with in ordinary arithmetics and mathematics… Clearly, other traits of the (statistical) message could also be used: indeed, the frequency referred to is a property of a single train of pulses whereas every one of the relevant nerves consists of a large number of fibers, each of which transmits numerous trains of pulses. It is, therefore, perfectly plausible that certain (statistical) relationships between such trains of pulses should also transmit information…. Whatever language the central nervous system is using, it is characterized by less logical and arithmetical depth than what we are normally used to [and] must structurally be essentially different from those languages to which our common experience refers.”
Or, as his friend Stan Ulam put it, “What makes you so sure that mathematical logic corresponds to the way we think?”
Pulse-frequency coding, whether in a nervous system or a probabilistic search-engine, is based on statistical accounting for what connects where, and how frequently connections are made between given points. As von Neumann explained in 1948: “A new, essentially logical, theory is called for in order to understand high-complication automata and, in particular, the central nervous system. It may be, however, that in this process logic will have to undergo a pseudomorphosis to neurology to a much greater extent than the reverse.”
Von Neumann died just as the revolution in molecular biology, sparked by the elucidation of the structure of DNA in 1953, began to unfold. Life as we know it is based on digitally-coded instructions, translating between sequence and structure (from nucleotides to proteins) exactly as Turing prescribed. Ribosomes and other cellular machinery play the role of processors: reading, duplicating, and interpreting the sequences on the tape. But this uncanny resemblance has distracted us from the completely different method of addressing by which the instructions are carried out.
In a digital computer, the instructions are in the form of COMMAND (ADDRESS) where the address is an exact (either absolute or relative) memory location, a process that translates informally into “DO THIS with what you find HERE and go THERE with the result.” Everything depends not only on precise instructions, but on HERE, THERE, and WHEN being exactly defined. It is almost incomprehensible that programs amounting to millions of lines of code, written by teams of hundreds of people, are able to go out into the computational universe and function as well as they do given that one bit in the wrong place (or the wrong time) can bring the process to a halt.
Biology has taken a completely different approach. There is no von Neumann address matrix, just a molecular soup, and the instructions say simply “DO THIS with the next copy of THAT which comes along.” The results are far more robust. There is no unforgiving central address authority, and no unforgiving central clock. This ability to take general, organized advantage of local, haphazard processes is exactly the ability that (so far) has distinguished information processing in living organisms from information processing by digital computers.
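Dyson’s contrast between the two modes of instruction can be sketched in a few lines of Python. This is a toy illustration of my own, not anything from his book: the function names, the “molecule” labels, and the reaction rules are all invented for the sake of the comparison.

```python
import random

# 1. Exact addressing: "DO THIS with what you find HERE and go THERE."
#    Everything depends on HERE and THERE being exactly right; one bad
#    address corrupts or halts the computation.
memory = {0: 2, 1: 3, 2: None}

def run_addressed(program, memory):
    for op, a, b, dest in program:
        if op == "ADD":
            memory[dest] = memory[a] + memory[b]  # fails if any address is off
    return memory

run_addressed([("ADD", 0, 1, 2)], memory)  # memory[2] is now 5

# 2. Template matching: "DO THIS with the next copy of THAT which comes along."
#    No address matrix and no clock: a rule binds to whatever matching
#    molecule the soup happens to offer next.
def run_soup(soup, rules, rng):
    while any(m in rules for m in soup):  # react until nothing matches
        mol = rng.choice(soup)            # haphazard local encounter
        if mol in rules:
            soup.remove(mol)
            soup.append(rules[mol])       # transform the matched molecule
    return soup

soup = ["A"] * 5 + ["B"] * 5
product = run_soup(soup, {"A": "A*", "B": "B*"}, random.Random(0))
# the order of encounters is random, yet the outcome is robust
assert sorted(product) == ["A*"] * 5 + ["B*"] * 5
```

The second model makes Dyson’s point about robustness concrete: the soup reaches the same final composition no matter what order the encounters happen in, whereas the first model depends on every address being exactly right.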
I would like to propose that we shift our focus away from how we can (or cannot yet) imitate what our biology accomplishes, and look, for a moment, at what it is we have found within it. We have traversed territories opened up by our cognizance of number patterns that lead not only to electronic calculation, but also to the creation of electronic images and the simulation of events and experience. What might this tell us about the life of the human organism? I will likely follow this up with a look at von Neumann’s book.