New Scientist ran an article in its February 6 issue called Mind Maths: Five laws that rule the brain.
As is usually the case, the article’s allure is the suggestion that new research may hold the promise of capturing the brain’s complexity in just a few mathematical models. And, as is usually the case, I find that studies such as these can serve as a springboard to ideas about the source of mathematics itself. Unfortunately, you can’t read the article for free. So I will note here what I believe are its key features and why I care. One of the things they say early on is this:
What’s surprising is just how often the brain’s dynamics mimic other natural phenomena, from earthquakes and avalanches to the energy flow in a steam engine.
Yet this observation is not used to make a connection between internal and external events (by finding the activity of thought and sensation like the activity of the world around us). Despite this shortcoming, however, the broad range of things discussed is worth a look. Mikhail Rabinovich at the University of California San Diego, for example, finds that the behavior of cognitive patterns that fight for our attention is captured by predator-prey equations that predict fluctuations in populations of interacting species.
None ever manages to gain more than a fleeting supremacy, which Rabinovich thinks might explain the familiar experience of the wandering mind. “We can all recognize that thinking is a process,” he says. “You are always shifting your attention, step-by-step, from one thought to another through these temporary stable states.”
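The article doesn’t reproduce Rabinovich’s actual equations, but the flavor of “fleeting supremacy” can be sketched with a generalized Lotka-Volterra competition model. This is a toy of my own construction, not Rabinovich’s system: with asymmetric mutual inhibition, dominance passes cyclically from one “cognitive mode” to the next instead of settling anywhere.

```python
import numpy as np

def lotka_volterra_competition(rho, x0, growth, dt=0.01, steps=40000):
    """Euler-integrate a generalized Lotka-Volterra competition model:
    dx_i/dt = x_i * (growth_i - sum_j rho[i][j] * x_j)."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, len(x)))
    for t in range(steps):
        x = x + dt * x * (growth - rho @ x)
        x = np.maximum(x, 1e-9)  # floor keeps losing modes from going extinct
        traj[t] = x
    return traj

# Three "cognitive modes" with asymmetric mutual inhibition; the asymmetry
# (rho[i][j] != rho[j][i]) is what makes dominance pass cyclically from one
# mode to the next -- a sequence of temporarily stable states.
rho = np.array([[1.0, 1.8, 0.4],
                [0.4, 1.0, 1.8],
                [1.8, 0.4, 1.0]])
growth = np.ones(3)
traj = lotka_volterra_competition(rho, [0.6, 0.3, 0.1], growth)
```

Run long enough, each of the three modes takes its turn as the most active before being displaced by the next, which is the qualitative picture of a wandering mind that the quote above describes.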
This is interesting and even reminds me of what 19th century philosopher Johann Friedrich Herbart once proposed, namely, that all ideas struggle to gain expression in consciousness, and compete with each other to do so. He even used the term self-preservation to describe an idea’s tendency to seek and maintain conscious expression. (source in an earlier blog)
I’m attracted to the notion of “temporary stable states,” as this suggests the consistent potential for revolutions of thought, creative breakthroughs, and unexpected new structure that is the very life of mathematics.
The article also discusses what is referred to as the avalanche of cascading firing in neurons.
The familiar chords of our favorite song reach the ear, and moments later a neuron fires. Because that neuron is linked into a highly connected small-world network, the signal can quickly spread far and wide, triggering a cascade of other cells to fire. Theoretically it could even snowball chaotically, potentially taking the brain offline in a seizure… This suggests there is a healthy balance in the brain – it must inhibit neural signals enough to prevent a chaotic flood without stopping the traffic altogether.
Jack Cowan, at the University of Chicago, has found that this balance represents a state known as the critical point, dubbed “the edge of chaos” by theoretical physicists.
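That balance between fizzling out and snowballing can be illustrated with a toy branching process (a sketch of my own, not Cowan’s model): each firing neuron triggers, on average, sigma further firings. Below sigma = 1 cascades quickly die, above it they explode, and the critical point sits at sigma = 1.

```python
import numpy as np

def avalanche_size(sigma, rng, max_size=100_000):
    """Total number of firings in one cascade, where each firing neuron
    triggers a Poisson-distributed number of others with mean sigma."""
    active, size = 1, 0
    while active and size < max_size:
        size += active
        # offspring of the whole current generation: sum of `active`
        # independent Poisson(sigma) draws is Poisson(sigma * active)
        active = rng.poisson(sigma * active)
    return min(size, max_size)  # cap runaway (supercritical) cascades

rng = np.random.default_rng(0)
sub  = np.mean([avalanche_size(0.5, rng) for _ in range(2000)])  # dies out
crit = np.mean([avalanche_size(1.0, rng) for _ in range(2000)])  # critical
sup  = np.mean([avalanche_size(1.5, rng) for _ in range(2000)])  # explodes
```

In the subcritical regime the expected cascade size is finite (1 / (1 - sigma), so 2 when sigma = 0.5); at the critical point avalanches of every size occur, which is the regime the article suggests healthy brains sit near.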
But the two observations I found most interesting had to do with the integrating functions of the brain (those that are thought to produce conscious experience) and the brain’s predictive functions (that establish our expectations). With respect to the former, the article explains:
An experience’s colors, smells and sounds are impossible to isolate from one another, except through deliberate actions such as closing your eyes. At the same time, each conscious experience is a unique, never-to-be-repeated event. In computational terms, this means that a seat of consciousness in the brain does two things: it makes sense of potentially vast amounts of information and, just as importantly, it internally binds this information into a single, coherent picture that differs from everything we have ever – or will ever – experience.
The latter considers the brain’s use of Bayesian statistics, named after the 18th-century mathematician Thomas Bayes. Bayesian inference calculates the probability of a future event from what has happened in the past, consistently updating expectations with new data.
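As a concrete (if cartoonish) illustration of that updating loop, here is a two-hypothesis application of Bayes’ rule, with made-up likelihoods: a coin that may be fair or biased toward heads, with belief revised after every flip.

```python
from fractions import Fraction

# Two hypotheses about a coin, each assigning a likelihood to a flip.
# (Hypotheses and numbers are invented for illustration.)
likelihood = {
    "fair":   lambda flip: Fraction(1, 2),
    "biased": lambda flip: Fraction(3, 4) if flip == "H" else Fraction(1, 4),
}

def update(prior, flip):
    """One step of Bayes' rule: posterior(h) is proportional to
    prior(h) * P(flip | h), renormalized to sum to 1."""
    unnorm = {h: p * likelihood[h](flip) for h, p in prior.items()}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

belief = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}
for flip in "HHTHHH":  # mostly heads: mounting evidence for "biased"
    belief = update(belief, flip)
```

After six flips (five heads, one tail), the belief in “biased” has grown from 1/2 to 243/307, and each new flip would shift it again; the brain, on this view, is running something like that loop continuously against the stream of sensation.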
For decades neuroscientists had speculated that the brain uses this principle to guide its predictions of the future, but Karl Friston at University College London took the idea one step further. Friston looked specifically at the way the brain minimizes the errors that can arise from these Bayesian predictions; in other words, how it avoids surprises. Realizing that he could borrow the mathematics of thermodynamic systems like a steam engine to describe the way the brain achieves this, Friston called his theory “the free energy principle.”
Even the authors of this free energy principle are hoping that it might provide a ‘unified brain theory.’ But if we just look at it another way, what we see is the brain making statistical calculations of some kind: it uses a Bayesian-like procedure to establish our expectations. This is a provocative idea. It reminds me of Gregory Chaitin’s comment about what he calls biological software. Chaitin has said, on more than one occasion, that only after we discovered artificial software could we imagine biology as an archeology of software, with respect to things like the coding property of DNA.