A mathematical philosophy – a digital view

I’ve become fascinated with Gregory Chaitin’s exploration of randomness in computing and his impulse to bring those observations to bear on physical, mathematical, and biological theories. His work inevitably addresses epistemological questions – what it means to know, to comprehend – and leads him (as he says in a recent paper) in the direction of “a mathematical approach to philosophical questions.” I have no expertise in computing (and assume none on the part of my readers), so I am not in a position to clarify the formal content of his papers. But the path Chaitin follows – from Leibniz to Hilbert to Gödel and Turing – can be traced without it. With his development of algorithmic information theory, he has studied how information is expressed in a program and formalized a notion of randomness.

The paper to which I referred above, Conceptual Complexity and Algorithmic Information, is from this past June. It can be found on academia.edu. As is often the case, Chaitin begins with Leibniz:

In our modern reading of Leibniz, Sections V and VI both assert that the essence of explanation is compression.  An explanation has to be much simpler, more compact, than what it explains.

The idea of ‘compression’ has been used to describe how the brain interprets the flood of repeated sensory information it receives – the visual attributes of faces, for example. Language itself has been described as cognitive compression. Chaitin reminds us of the medieval search for a perfect language that would give us a way to analyze the components of truth, and he suggests that Hilbert’s program was a later version of that dream. And while Hilbert’s program to find a complete formal system for all of mathematics failed, Turing had an idea that provides a different grasp of the problem. For Turing,

there are universal languages for formalizing all possible mathematical algorithms, and algorithmic information theory tells us which are the most concise, the most expressive such languages.

Compression is at work in the search for ‘the most concise.’ Chaitin then defines conceptual complexity, the notion at the center of his argument. The conceptual complexity of an object X is defined to be

the size in bits of the most compact program for calculating X, presupposing that we have picked as our complexity standard a particular fixed, maximally compact, concise universal programming language U. This is technically known as the algorithmic information content of the object X, denoted H_U(X) or simply H(X) since U is assumed fixed. In medieval terms, H(X) is the minimum number of yes/no decisions that God would have to make to create X.
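
It is worth noting that H(X) is uncomputable in general – no program can return the size of the most compact program for an arbitrary X – so any concrete illustration has to use a stand-in. A common one is the length of a compressed encoding, which bounds the information content from above relative to the compressor rather than to a maximally compact language U. Here is a minimal sketch of the idea in Python (my own illustration, not Chaitin’s; zlib plays the role of the compressor):

```python
import os
import zlib

def compression_proxy(x: bytes) -> int:
    # Bits in the zlib-compressed form of x: a crude, computable upper bound
    # on H(x). True algorithmic information content is uncomputable, and zlib
    # is far from a maximally compact universal language U, so this only
    # illustrates the idea of "size of the most compact description."
    return 8 * len(zlib.compress(x, 9))

patterned = b"0123456789" * 300  # 3000 bytes with an obvious regularity
patternless = os.urandom(3000)   # 3000 bytes with (almost surely) none

print(compression_proxy(patterned))    # a few hundred bits: a short description exists
print(compression_proxy(patternless))  # roughly 8 * 3000 bits: nothing shorter found
```

The patterned string compresses to a tiny fraction of its raw size while the random one does not, and that asymmetry is precisely Chaitin’s formalization of randomness: a string is random when no program appreciably shorter than the string itself can produce it.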

He employs this idea, this “new intellectual toolkit,” in a brief discussion of mathematics, physics, and evolution, modeling evolution with algorithmic mutations. He also suggests applying one of the features of algorithmic information theory to Giulio Tononi’s integrated information theory of consciousness. As I see it, a mathematical way of thinking brings algorithmic information theory to life, and the theory then appears to hold the key to a clearer view of physical, biological, and digital processes.
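
To give a feel for what modeling evolution with algorithmic mutations can look like, here is a deliberately toy sketch – my own drastic simplification, not the model in Chaitin’s paper, which works in a universal programming language and uses an oracle to discard non-halting mutants. An organism is a small program, its fitness is the number it computes, and a mutation is an edit to the program text:

```python
import random

def random_program(depth: int = 3) -> str:
    # Build a small arithmetic expression, written out as Python source.
    if depth == 0:
        return str(random.randint(1, 9))
    op = random.choice(["+", "*"])
    return f"({random_program(depth - 1)} {op} {random_program(depth - 1)})"

def fitness(program: str) -> int:
    # Fitness is simply the number the program computes when run.
    return eval(program)

def mutate(program: str) -> str:
    # Point mutation: change one digit or one operator in the source text.
    chars = list(program)
    i = random.randrange(len(chars))
    if chars[i].isdigit():
        chars[i] = str(random.randint(1, 9))
    elif chars[i] in "+*":
        chars[i] = random.choice("+*")
    return "".join(chars)

organism = random_program()
for _ in range(5000):
    mutant = mutate(organism)
    if fitness(mutant) > fitness(organism):  # keep strictly fitter mutants
        organism = mutant

print(organism, "computes", fitness(organism))
```

The hill-climbing loop is only the skeleton of the idea; Chaitin’s interest is in proving theorems about how quickly fitness can grow when the mutations themselves are algorithmic.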

In his discussion of consciousness, Chaitin suggests an important idea – that thought reaches down to molecular activity.

If the brain worked only at the neuronal level, for example by storing one bit per neuron, it would have roughly the capacity of a pen drive, far too low to account for human intelligence. But at the RNA/DNA molecular biology level, the total information capacity is quite immense.

In the life of a research mathematician it is frequently the case that one works fruitlessly on a problem for hours then wakes up the next morning with many new ideas. The intuitive mind has much, much greater information processing capacity than the rational mind. Indeed, it seems capable of exponential search.

We can connect the two levels postulated here by having a unique molecular “name” correspond to each neuron, for example to the proverbial “grandmother cell.” In other words, we postulate that the unconscious “mirrors” the associations represented in the connections between neurons. Connections at the upper conscious level correspond at the lower unconscious level to enzymes that transform the molecular name of one neuron into the molecular name of another. In this way, a chemical soup can perform massive parallel searches through chains of associations, something that cannot be done at the conscious level.

When enough of the chemical name for a particular neuron forms and accumulates in the unconscious, that neuron is stimulated and fires, bringing the idea into the conscious mind.

And long-chain molecules can represent memories or sequences of words or ideas, i.e., thoughts.
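
To make the quoted mechanism a little more concrete, here is a hedged toy rendering – every name, association, and threshold below is invented for illustration, and real molecular dynamics are of course nothing this tidy. Molecules carrying a neuron’s ‘name’ react independently, an ‘enzyme’ table rewrites one name into an associated name, and when enough copies of a name accumulate, that neuron fires. (For scale, Chaitin’s pen-drive remark checks out: roughly 10^11 neurons at one bit each is on the order of 10 gigabytes.)

```python
import random
from collections import Counter

# Toy rendering of the quoted mechanism; all names, associations, and the
# firing threshold are invented for illustration.
associations = {           # "enzyme" table: molecular name -> associated names
    "grandmother": ["kitchen", "garden"],
    "kitchen": ["bread", "soup"],
    "garden": ["roses"],
    "bread": ["idea"],
    "soup": ["idea"],
    "roses": ["idea"],
}

soup = Counter({"grandmother": 100})  # many copies of one name enter the soup
THRESHOLD = 80                        # copies needed to make a "neuron" fire

for step in range(10):
    next_soup = Counter()
    for name, count in soup.items():
        for _ in range(count):                # each molecule reacts on its own,
            targets = associations.get(name)  # so the search fans out in parallel
            if targets:
                next_soup[random.choice(targets)] += 1
            else:
                next_soup[name] += 1          # terminal names just accumulate
    soup = next_soup
    fired = [name for name, count in soup.items() if count >= THRESHOLD]
    if fired:
        print(f"step {step}: {fired[0]!r} crossed threshold and fires")
        break
```

The point of the sketch is the parallelism: a hundred molecules explore many chains of associations at once, and only the name that accumulates past threshold ever surfaces at the ‘conscious’ level.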

This possibility suggests itself in the light of a digital view of things. The paper concludes this way:

We now have a new fundamental substance, information, that comes together with a digital world-view.

And – most ontological of all – perhaps with the aid of these concepts we can begin again to view the world as consisting of both mind and matter. The notion of mind that perhaps begins to emerge from these musings is mathematically quantified, which is why we declared at the start that this essay pretends to take additional steps in the direction of a mathematical form of philosophy.

The eventual goal is a more precise, quantitative analysis of the concept of “mind.” Can one measure the power of a mind like one measures the power of a computer?

Quantification as a goal can be misunderstood. To many it signifies a deterministic, controllable world. Chaitin’s idea of quantification is motivated by the exact opposite: his systems are necessarily open-ended and creative. Quantification, here, is evidence of comprehension rather than a promise of control.

There is one more thing in this paper that I enjoyed reading. It comes up when he brings the brain into his discussion of complexity. I’ll just reproduce it here without comment.

Later in this essay, we shall attempt to analyze human intelligence and the brain. That’s also connected with complexity, because the human brain is the most complicated thing there is in biology. Indeed, our brain is presumably the goal of biological evolution, at least for those who believe that evolution has a goal. Not according to Darwin! For others, however, evolution is matter’s way of creating mind. (emphasis added)


2 comments to A mathematical philosophy – a digital view

  • fascinating stuff, especially the part about intuition, and thought reaching down to the molecular level…
    I’ve always believed though (as some others do), that ultimately the brain is incapable of fully analyzing itself — that there’s a recursivity there that can’t be ‘broken.’ We can study/comprehend the reductionist or deterministic elements of a biological system, but I’m not sure we can ever really grasp Chaitin’s “open-ended and creative” aspects of “mind.”

    • Joselle

      Good to hear from you. It is really interesting stuff. I enjoy many things about Chaitin’s approach to mathematics.