I have been particularly focused on whether mathematics can tell us something about the nature of thought, something we have not yet understood about what thought is made from, how it happens, and how it is connected to everything else in the universe. These questions inevitably point me toward research in cognitive science and neuroscience, philosophical debates about the viability of the objectivity on which science relies, and discussions of what we even mean by ‘knowledge.’ Mathematics shows up everywhere: in the abstractions and probabilities involved in how the brain learns, for example, or in how the brain constructs what we see, or navigates the space around us. One of the avenues I’ve followed has led me through the science of self-organizing systems and, in particular, the application of information theory to biology, some of which was discussed in a recent post. In this context, we see biologists exploiting the value of mathematical ideas. And the modeling that happens in these research efforts doesn’t just predict outcomes; it often characterizes the action itself. The behavior of swarms, flocks, insect colonies, and even cells is mathematical.

It happens in the other direction as well. Mathematician and computer scientist Gregory Chaitin has approached biology mathematically, not in the sense of modeling behavior, but more in the way of expressing the creativity of evolution using the creativity of mathematics. Here’s a little piece of a post from about six years ago:

Chaitin believes that Gödel and Turing (in his 1936 paper) opened the door to a provocative connection between mathematics and biology, between life and software. I’ve looked at how Turing was inspired by biology in two of my other posts. They can be found here and here.

But Chaitin is working to understand this connection with a new branch of mathematics called Metabiology. I very much enjoyed hearing him describe the history of the ideas that inspired him in one of his talks, Life as Evolving Software, in which he says:

“After we invented software we could see that we were surrounded by software. DNA is a universal programming language and biology can be thought of as software archeology – looking at very old, very complicated software.”

Chaitin is also one of the mathematicians who developed what is known as algorithmic information theory. And I recently happened upon a paper from Giulio Ruffini at Starlab Barcelona titled *An Algorithmic Information Theory of Consciousness*, published near the end of 2017. Ruffini’s research is motivated, to some extent, by the value of being able to provide a metric of conscious states. But the course he’s chosen is described in the abstract:

In earlier work, we argued that the experience we call reality is a mental construct derived from information compression. Here we show that algorithmic information theory provides a natural framework to study and quantify consciousness from neurophysiological or neuroimaging data, given the premise that the primary role of the brain is information processing.

Ruffini argues that characterizing consciousness is “a profound scientific problem,” and that progress in this area will have important practical implications with respect to a number of disorders of consciousness. While the paper is mostly aimed at justifying the fit of algorithmic information theory (which he refers to as AIT) to this endeavor, one can also see some of the deeper philosophical convictions that motivate his approach. He says the following, for example, in his introduction:

We begin from a definition of cognition in the context of AIT and posit that brains strive to model their input/output fluxes of information with simplicity as a fundamental driving principle (Ruffini 2007,2009). Furthermore, we argue that brains, agents, and cognitive systems can be identified with special patterns embedded in mathematical structures enabling computation and compression.

But I found the conviction that seems to be driving his perspective clearly laid out in his 2007 paper *Information, complexity, brains and reality (Kolmogorov Manifesto).* There he says that information theory gives us the conceptual framework we need to comprehend how brains and the universe are related. That seems like the really big picture. He also says:

I argue that what we call the universe is an interpreted abstraction—a mental construct—based on the observed coherence between multiple sensory input streams and our own interactions (output streams). Hence, the notion of Universe is itself a model. Rules or regularities are the building blocks of what we call Reality—an emerging concept. I highlight that physical concepts such as “mass”, “spacetime” or mathematical notions such as “set” and “number” are models (rules) to simplify the sensory information stream, typically in the form of invariants. The notion of repetition is in this context a fundamental modelling building block.

Compression is one of the key ideas. Relations expressed in equations, or events captured by programs, have been compressed, and the simplification is productive. In algorithmic information theory, the Kolmogorov complexity of a data set is defined as the length of the shortest program able to generate it. Experience is a consequence of the brain’s compression (and hence simplification) of an ongoing flood of sensory data. And so one of Ruffini’s ideas is that science is what brains do. This, he says, is to be taken as a definition of science. Here are a few of the ideas his paper means to address, some more provocative than others:
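Kolmogorov complexity itself is famously uncomputable, so in practice researchers often use the output length of a real compressor as a rough upper bound on it. As a minimal sketch (my own illustration, not Ruffini’s method), Python’s standard `zlib` makes the point: data with exploitable regularity compresses far below its raw size, while patternless data does not.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed form of `data`: a crude,
    computable upper bound on its (uncomputable) Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

# 1000 bytes of pure repetition: a very short "program" could generate it.
regular = b"ab" * 500

# 1000 bytes of random data: no regularity for the compressor to exploit.
irregular = os.urandom(1000)

print(compressed_size(regular))    # tiny compared with 1000
print(compressed_size(irregular))  # roughly 1000 or slightly more
```

The gap between the two numbers is the compressor’s measure of how much regularity, in the sense discussed above, each stream contains.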

Reality is a byproduct of information processing.

Mathematics is the only really fundamental discipline and its currency is information.

The nervous system is an information processing tool. Therefore, information science is crucial to understand how brains work.

The brain compression efforts provide the substrate for reality (the brain is the “reality machine”).

The brain is a pattern lock loop machine. It discovers and locks into useful regularities in the incoming information flux, and based on those it constructs a model for reality.

…the concept of repetition is a fundamental block in “compression science.” This concept is rather important and leads brains naturally to the notion of number and counting (and probably mathematics itself)

Compressive processes are probably very basic mechanisms in nervous systems…Counting is just the next step after noticing repetition. And with the concept of repetition also comes the notion of multiplication and primality. More fundamentally, repetition is the core of algorithmics.

This sketchy survey of the paper does not do it justice. But I bring it to your attention as yet another indication of how deep the blend of information theory and biology runs.

C.S. Peirce put forth the idea that what he called “the laws of information” were key to solving “the puzzle of the validity of scientific inference” and thus to understanding the “logic of science”. See my notes on his notorious formula:

Information = Comprehension × Extension
