New strategies, new circuitries, new mathematics

I came upon an MIT News article about the work of Ila Fiete, who studies brain functions like the neurological processes that govern navigational reasoning about our surroundings. Fiete uses computational and mathematical tools. Her interest in biology, and her respect for the “aesthetic to thinking mathematically” (as she put it), led her to systems biology, where computational and mathematical analyses and modeling are applied to the understanding of complex biological systems. She did most of her PhD research at MIT,

“…where she studied how the brain uses incoming signals of the velocity of head movement to control eye position. For example, if we want to keep our gaze fixed on a particular location while our head is moving, the brain must continuously calculate and adjust the amount of tension needed in the muscles surrounding the eyes, to compensate for the movement of the head.”

Later, at the University of California at Santa Barbara, Fiete began working on grid cells, a system of neurons I have written about on more than one occasion. These cells, located in the entorhinal cortex of the brain, create a grid-like neural representation of the space around us that allows us to know where we are. Grid cell firings correspond to points on the ground that form the vertices of an equilateral triangular grid, and they do so with surprising regularity.
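
For readers who like to see the geometry, here is a minimal sketch (my own toy illustration, with made-up numbers, not Fiete’s model) of the standard idealization of a grid cell’s firing map as the sum of three plane waves whose wave vectors are 60 degrees apart; the peaks of the resulting interference pattern sit on the vertices of an equilateral triangular lattice.

```python
import numpy as np

# Toy grid-cell rate map: the sum of three plane waves whose wave vectors
# have equal length and are oriented 60 degrees apart. The peaks of this
# interference pattern form an equilateral triangular lattice.
spacing = 0.5                                  # distance between firing fields (illustrative)
k = 4 * np.pi / (np.sqrt(3) * spacing)         # wave number that gives that spacing
angles = np.deg2rad([0.0, 60.0, 120.0])
wave_vectors = k * np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Sample the firing rate over a 2 x 2 patch of floor.
xs = np.linspace(0.0, 2.0, 200)
X, Y = np.meshgrid(xs, xs)
positions = np.stack([X, Y], axis=-1)          # shape (200, 200, 2)
phases = positions @ wave_vectors.T            # shape (200, 200, 3)
rate = np.cos(phases).sum(axis=-1)             # interference pattern
rate = np.maximum(rate, 0.0)                   # firing rates are non-negative

print("maximum rate:", rate.max())             # peaks sit on a triangular lattice
```

Plotting `rate` as an image shows the familiar, hexagonally arranged firing fields; the only point of the toy is that a very simple superposition already produces the triangular regularity described above.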

In a December 2014 article in Neuron, neuroscientist Neil Burgess said the following:

Grid cell firing provides a spectacular example of internally generated structure, both individually and in the almost crystalline organization of the firing patterns of different grid cells. A similarly strong organization is seen in the relative tuning of head-direction cells. This strong internal structure is reminiscent of Kantian ideas regarding the necessity of an innate spatial structure with which to understand the spatial organization of the world.

I have written on this topic before:
The mathematical nature of self-locating and
Grid cells and time cells in rats, continuity, and the monkey’s mind

At MIT again, Fiete has continued to explore her PhD thesis topic: how the brain maintains a neural representation of the head’s direction, where it is pointed, at any given time. Now, in a paper published in Nature, she explains how she identified a brain circuit in mice that produces a one-dimensional ring of neural activity that acts like a compass, allowing the brain to calculate the direction of the head, with respect to the external world, at any given moment.

MIT’s report on that paper explains her approach:

Trying to show that a data cloud has a simple shape, like a ring, is a bit like watching a school of fish. By tracking one or two sardines, you might not see a pattern. But if you could map all of the sardines, and transform the noisy dataset into points representing the positions of the whole school of sardines over time, and where each fish is relative to its neighbors, a pattern would emerge. This model would reveal a ring shape, a simple shape formed by the activity of hundreds of individual fish.

Fiete, who is also an associate professor in MIT’s Department of Brain and Cognitive Sciences, used a similar approach, called topological modeling, to transform the activity of large populations of noisy neurons into a data cloud in the shape of a ring.

More than one aspect of these studies impresses me. The approach is interesting. Topological modeling, in some sense, is a shape-directed computation rather than a purely numerical one. And the simplicity of the ring impresses me. The brain somehow isolates, from a flood of sensory data, the variables that produce this simple representation of the head’s position. And the value of a circle, something we think of as a purely abstract idealization, is something the body seems to know, constructing it, as it does, outside what we call the mind.
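
I can’t reproduce the actual analysis, but here is a minimal sketch of the general idea under my own simplifying assumptions: a ring-shaped signal (a circle parameterized by heading angle) is embedded in the activity of many noisy, made-up “neurons,” and a simple linear projection recovers the ring, and a circular coordinate, from the cloud. The real work uses topological methods on recorded neurons; the toy only shows how a one-dimensional ring can hide inside noisy, high-dimensional activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "population activity": each of 50 made-up neurons is tuned to a
# preferred heading, so the noiseless activity traces out a ring (a one-
# dimensional loop) embedded in 50-dimensional space.
n_neurons, n_samples = 50, 2000
preferred = rng.uniform(0, 2 * np.pi, n_neurons)       # preferred headings
heading = rng.uniform(0, 2 * np.pi, n_samples)         # true head direction
activity = np.cos(heading[:, None] - preferred[None, :])
activity += 0.3 * rng.standard_normal(activity.shape)  # measurement noise

# Project the noisy cloud onto its two leading principal components.
centered = activity - activity.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:2].T                        # shape (n_samples, 2)

# The projection is (approximately) a ring, and the angle around it recovers
# the head direction up to a constant rotation and a possible reflection.
recovered = np.arctan2(projected[:, 1], projected[:, 0])
radius = np.linalg.norm(projected, axis=1)
match_direct = np.abs(np.mean(np.exp(1j * (recovered - heading))))
match_flipped = np.abs(np.mean(np.exp(1j * (recovered + heading))))

print("ring radius spread:", radius.std() / radius.mean())   # small: it really is a ring
print("heading match (up to rotation/reflection):",
      max(match_direct, match_flipped))                      # close to 1
```

The projection step here is ordinary principal component analysis, which works because the toy tuning curves are sinusoids; the published analysis does not depend on that convenience, which is part of why the topological approach matters.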

“There are no degree markings in the external world; our current head direction has to be extracted, computed, and estimated by the brain,” explains Ila Fiete, … “The approaches we used allowed us to demonstrate the emergence of a low-dimensional concept, essentially an abstract compass in the brain.”

This abstract compass, according to the researchers, is a one-dimensional ring that represents the current direction of the head relative to the external world.

Their method for characterizing the shape of the data cloud allowed Fiete and colleagues to identify the variable that the circuit was devoted to representing.

My first reaction to this story was that it was beautiful. On the one hand, Fiete seems to be making increasingly creative use of the mathematics for which she has always had an affinity. On the other hand, that simple ring, or compass, that tells us which way we are looking highlights the presence of inherent mathematical tools in the body, in how brain processes work. And the ring is stable, even through sleep.

Her lab also studies cognitive flexibility — the brain’s ability to perform so many different types of cognitive tasks.

“How it is that we can repurpose the same circuits and flexibly use them to solve many different problems, and what are the neural codes that are amenable to that kind of reuse?” she says. “We’re also investigating the principles that allow the brain to hook multiple circuits together to solve new problems without a lot of reconfiguration.”

It would not surprise me if, viewed from a careful neuroscientific perspective, we found some resemblance between the way the brain makes new use of already configured circuits and the way mathematicians consistently build novel mathematical structures with tested strategies that have built other structures: ordered pairs, symmetries, compositions, closure properties, identities, homotopy and equivalence, and so on.

Information, mathematics, and the consciousness of the universe

A New Scientist article began with a now familiar refrain:

They call it the “unreasonable effectiveness of mathematics.” Physicist Eugene Wigner coined the phrase in the 1960s to encapsulate the curious fact that merely by manipulating numbers we can describe and predict all manner of natural phenomena with astonishing clarity…

The article, by Michael Brooks, has the title “Is the universe conscious? It seems impossible until you do the maths.”

As I expected, the primary focus of the article is Integrated Information Theory, a way to understand consciousness proposed by neuroscientist Giulio Tononi in 2004. More than one of my previous posts refers to the development of this proposal, in particular there is Where does the mind begin? posted in 2017.

Neuroscientist Giulio Tononi proposed the Integrated Information Theory of Consciousness (IIT) in 2004.  IIT holds that consciousness is a fundamental, observer-independent property that can be understood as the consequence of the states of a physical system. It is described by a mathematics that relies on the interactions of a complex of neurons, in a particular state, and is defined by a measure of integrated information. Tononi proposes a way to characterize experience using a geometry that describes informational relationships. In an article co-authored with neuroscientist Christof Koch, an argument is made for opening the door to the reconsideration of a modified panpsychism, where there is only one substance from the smallest entities to human consciousness.

Brooks describes Tononi’s idea with broad strokes, saying that the theory tells us that a system’s consciousness arises from the way information moves between its subsystems.

One way to think of these subsystems is as islands, each with their own population of neurons. The islands are connected by traffic flows of information. For consciousness to appear, Tononi argued, this information flow must be complex enough to make the islands interdependent. Changing the flow of information from one island should affect the state and output of another. In principle, this lets you put a number on the degree of consciousness: you could quantify it by measuring how much an island’s output relies on information flowing from other islands. This gives a sense of how well a system integrates information, a value called “phi.”

If there is no dependence on a traffic flow between the islands, phi is zero and there is no consciousness. But if strangling or cutting off the connection makes a difference to the amount of information it integrates and outputs, then the phi of that group is above zero. The higher the phi, the more consciousness a system will display.
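
This is not Tononi’s phi, whose actual definition is far more involved, but a toy sketch of the intuition in the passage above: measure how much two “islands” depend on each other statistically (here, the mutual information between two binary units), so that independence gives zero and a working connection gives a value above zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_information_bits(x, y):
    """Mutual information (in bits) between two binary sequences."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))
            p_a, p_b = np.mean(x == a), np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

n = 100_000
# Island A: a random stream of binary states.
a = rng.integers(0, 2, n)

# Case 1: island B ignores A entirely -- no integration, MI near zero.
b_independent = rng.integers(0, 2, n)

# Case 2: island B copies A but with 10% noise -- cutting the connection
# would change B's output, so the two islands are informationally coupled.
noise = rng.random(n) < 0.10
b_coupled = np.where(noise, 1 - a, a)

print("independent islands:", mutual_information_bits(a, b_independent))  # ~0 bits
print("coupled islands:    ", mutual_information_bits(a, b_coupled))      # well above 0
```

Real IIT computations partition a system in every possible way and ask how much is lost under the least damaging cut; the toy only makes the zero-versus-nonzero distinction in the quoted description concrete.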

IIT has attracted many proponents but has also come under quite a bit of critical scrutiny. The mathematics of the theory is very complex. The language of the mathematics is borrowed from information theory, and informational relationships are characterized geometrically as shapes. These shapes, Tononi says, are our experience.

Brooks points to a number of computational issues that require attention as well as the problem of “explaining why information flow gives rise to an experience such as the smell of coffee.” But proponents of the idea are committed to resolving the technical issues, and continue to have faith in the model.

Rather than abandoning a promising model, he [mathematician Johannes Kleiner] thinks we need to clarify and simplify the mathematics underlying it. That is why he and Tull [mathematician Sean Tull] set about trying to identify the necessary mathematical ingredients of IIT, splitting them into three parts. First is the set of physical systems that encode the information. Next is the various manifestations or “spaces” of conscious experience. Finally, there are basic building blocks that relate these two: the “repertoires” of cause and effect.

They posted a preprint paper in February that describes the work.

“We would be glad to contribute to the further development of IIT, but we also hope to help improve and unite various existing models,” Kleiner says. “Eventually, we may come to propose new ones.”

A philosophical difficulty with the theory, however, raises questions close to my heart. The calculations involved in IIT imply that inanimate things possess some degree of consciousness. This may be difficult for many to accept, but I’m happy to report that not everyone has a problem with the possibility. From the Brooks article:

“Particles or other basic physical entities might have simple forms of consciousness that are fundamental, but complex human and animal consciousness would be constituted by or emergent from this,” says Hedda Hassel Mørch at Inland Norway University of Applied Sciences in Lillehammer.

The idea that electrons could have some form of consciousness might be hard to swallow, but panpsychists argue that it provides the only plausible approach to solving the hard problem. They reason that, rather than trying to account for consciousness in terms of non-conscious elements, we should instead ask how rudimentary forms of consciousness might come together to give rise to the complex experiences we have.

With that in mind, Mørch thinks IIT is at least a good place to start.

How does it happen? How do the cells that grow and specialize in embryonic development produce what we call the mind? I became preoccupied with questions like this when I tried to understand the degree to which lesions in the frontal lobe of my mother’s brain changed her experience (and her response to the reality that her new experience created). I began to wonder, in a new way, about the nature of the relationship between her body and her mind. Where was the person in the body? One of the things I heard over and over was “she isn’t there anymore.” And I continued to think, “of course she is.” I now find that the kinds of questions I wanted to ask are increasingly present among researchers, in both neuroscience and mathematics. I have also found support for my own hunch that mathematics can shed some light on the mystery.

IIT is particularly interesting to me because it isn’t just proposing a mathematical description of consciousness, it’s actually suggesting a fundamental or natural relationship between mathematical things and experience. It doesn’t just quantify aspects of consciousness (given by the amount of integrated information), it is also aimed at specifying the quality of experience with informational relationships that are defined by geometric shapes. The points in this geometry are determined using the probability distributions for the different states that a complex of neurons may be in.

I think Tononi himself gets at the significance of this point of view when he writes this in his 2008 Provisional Manifesto:

We are by now used to considering the universe as a vast empty space that contains enormous conglomerations of mass, charge, and energy—giant bright entities (where brightness reflects energy or mass) from planets to stars to galaxies. In this view (that is, in terms of mass, charge, or energy), each of us constitutes an extremely small, dim portion of what exists—indeed, hardly more than a speck of dust.

However, if consciousness (i.e., integrated information) exists as a fundamental property, an equally valid view of the universe is this: a vast empty space that contains mostly nothing, and occasionally just specks of integrated information —mere dust, indeed—even there where the mass-charge–energy perspective reveals huge conglomerates. On the other hand, one small corner of the known universe contains a remarkable concentration of extremely bright entities (where brightness reflects high levels of integrated information), orders of magnitude brighter than anything around them. Each bright “star” is the main complex of an individual human being (and most likely, of individual animals). I argue that such a view is at least as valid as that of a universe dominated by mass, charge, and energy.

Like many of the articles about how mathematics is shedding new light on things, the New Scientist article is not highlighting the mathematics itself, nor what it has the potential to show us. As I see it, mathematics has the potential to show us how we are likely misreading our reality, by showing us how perspectives are built in the mind, or how perspectives are built with the arrangement of concepts and not just the arrangements of neurons firing. Mathematics is in a unique position to show us the bridge between the physical and the conceptual. But this recent New Scientist article still reports good news – specifically that IIT continues to attract the interest of neuroscientists and mathematicians alike.

Mathematics says, “here is a point of view.”

Category theory is a relatively new and provocative branch of mathematics that has found many faithful followers and some critics. By relatively new I mean that category theory notions were first introduced only as far back as 1945. Criticism of the theory is often related to the level of abstraction it requires. But abstraction is also critically important to its strength. I’ve chosen to highlight things about category theory in these earlier posts:


Category Theory and the extraordinary value of abstraction
More on category theory and the brain
Quantum Mechanical Words and Mathematical Organisms (for Scientific American)

But the inspiration for this post is something I heard from mathematician Eugenia Cheng. It was in a talk she gave, at the School of the Art Institute of Chicago, on The Power of Abstraction. Early in her presentation, Cheng uses a turn of phrase that I like very much. Mathematics is useful, she says,

…because of the general light that it sheds on all aspects of our thinking.

Notice she doesn’t say, “on all aspects of things,” but rather “on all aspects of our thinking.” I believe this is important. There is an old tradition among educators of telling reluctant students that, while learning mathematics seems to have nothing to do with their day-to-day lives, or the issues they hope to explore, its value lies in the fact that it teaches us how to think. But what Cheng is saying is bigger and more important than that. Shedding light on ‘thinking’ is not the same as teaching us how to think. Shedding light on thinking means that mathematics is telling us something about ourselves.

To clarify the value of abstraction, Cheng uses illumination again:

It’s just like when you shine a light on something (and that’s what mathematics is always doing – trying to illuminate the situation)…if we shine the light very close up, then we will have a very bright light but only a very small area. But if we raise the light further up, then we get a dimmer light, but we illuminate a broader area, and we get a bit more context on the situation…Abstraction enables us to study more things, maybe in less detail, but with more context.

Category theory, as she discusses it, is about relationships among things, the notion of sameness, universal properties, and the efficacy of visual representation. About sameness, Cheng makes the observation that nothing is actually the same as anything else, and that the old notion of an equation is a lie. I haven’t heard anyone apply the term ‘lie’ to a mathematical thing since my first calculus teacher complained about a popular (thick and heavy) calculus text! But the value of an equation, she explains, is that, while it identifies the way two things are the same, equality also points to the way they are different. 2 + 2 = 4 tells us that, in some way, the left side of the equation is the same as the right side, but in other ways, it is not. Equivalences in category theory are understood as sameness within a context.
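
A small sketch of that last idea, in code rather than categorical language (my own illustration, not Cheng’s): in the category of finite sets and functions, the sets {0, 1} and {'a', 'b'} are not equal, but there are maps between them whose composites are identities, so within that context they count as the same.

```python
# A minimal, informal illustration of "sameness within a context":
# in the category of finite sets and functions, two unequal sets can be
# isomorphic -- there are maps between them whose composites are identities.

A = {0, 1}
B = {"a", "b"}

f = {0: "a", 1: "b"}          # a morphism f : A -> B (as a lookup table)
g = {"a": 0, "b": 1}          # a candidate inverse g : B -> A

def compose(second, first):
    """Composite of two functions-as-dicts: second after first."""
    return {x: second[first[x]] for x in first}

identity_A = {x: x for x in A}
identity_B = {y: y for y in B}

print("A == B ?", A == B)                                # False: not equal
print("g after f == id_A ?", compose(g, f) == identity_A)  # True
print("f after g == id_B ?", compose(f, g) == identity_B)  # True
# So A and B are isomorphic: "the same" in the context of this category,
# even though, as sets, they are plainly different.
```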

When first introduced to the notion of equivalence classes in topology, I thought of it as a powerful offspring of equality, not a correction. But, either way, the broad applicability of category theory (even within mathematics itself) is certainly fueling its development. The Stanford Encyclopedia of Philosophy says this about it:

Category theory has come to occupy a central position in contemporary mathematics and theoretical computer science, and is also applied to mathematical physics. Roughly, it is a general mathematical theory of structures and of systems of structures. As category theory is still evolving, its functions are correspondingly developing, expanding and multiplying. At minimum, it is a powerful language, or conceptual framework, allowing us to see the universal components of a family of structures of a given kind, and how structures of different kinds are interrelated. Category theory is both an interesting object of philosophical study, and a potentially powerful formal tool for philosophical investigations of concepts such as space, system, and even truth.

Cheng also wrote the account of the concept of a category in the Princeton Companion to Mathematics. There she says the following:

An object exists in and depends upon an ambient category…There is no such thing, for instance, as the natural numbers. However, it can be argued that there is such a thing as the concept of natural numbers. Indeed, the concept of natural numbers can be given unambiguously, via the Dedekind-Peano-Lawvere axioms, but what this concept refers to in specific cases depends on the context in which it is interpreted, e.g., the category of sets or a topos of sheaves over a topological space.

If you look back at the earlier posts to which I referred, you will see how the simplicity of the abstractions can serve situations where traditional mathematical approaches contain some ambiguity. I’ve chosen to return to it all today because Eugenia Cheng’s language has encouraged me to see mathematics the way I do, as a reflection of thought itself, among other things. Contrary to expectations, she says:

Mathematics is not definitive. It says, here is a point of view.

From coin flipping to branching universes

A recent column in Quanta Magazine, by theorist Sean Carroll, highlights the far-reaching implications of the role played by probability theory in quantum mechanics. Carroll’s intention is to bring into focus the need, which does seem to exist, for us to understand what, exactly, those probabilities are telling us. In quantum mechanics, the partnership of mathematics and physics has the unusual effect of both clarifying and mystifying things. Carroll’s concern is whether the probabilities that seem to contradict long-held deterministic views of the physical world should be thought of as properties of the objects studied, or just the cognitive strategy of the subjects studying them. As I see it, this difficulty of unraveling the thought from the material may help us get a better look at the multidimensional nature of mathematics itself.

Probability is inextricably bound to our experience of uncertainty. When, in the 17th century, Pascal explored the calculation of probabilities, his efforts were aimed at finding ways to predict the results of games of chance. But these strategies were fairly quickly adopted to address questions of law and insurance, as these concerned chance (or random) events (like weather or disease) in the day-to-day lives of individuals. The mathematics of probability provided a way to think about future events, about which we are always uncertain. I read in a Britannica article that in the early 19th century, Laplace characterized probability theory as “good sense reduced to calculation.”

By the late 18th century, Bayes’ theorem was already attracting attention. It was beginning to look as though the best calculation of likelihoods also relied on the experience of the individual doing the calculation. Bayes’ theorem is a formula for calculating conditional probabilities, probabilities that change when conditions are altered. One of the conditions that can change is what the observer knows. This brings attention back to the subject, which is different from the way we understand the likelihood of heads or tails in a coin toss. Since a coin toss can only yield one of two possible outcomes, we have come to understand that there is a 50/50 chance of either. The more times we toss the coin, the closer we get to seeing that 50/50 split in the outcomes. What we expect of the coin toss is entirely dependent on the nature of the coin. But conditional probabilities are not so clear. So how should physicists view our reliance on probabilities in quantum mechanical theory? This is what Carroll addresses.

There are numerous approaches to defining probability, but we can distinguish between two broad classes. The “objective” or “physical” view treats probability as a fundamental feature of a system, the best way we have to characterize physical behavior. An example of an objective approach to probability is frequentism, which defines probability as the frequency with which things happen over many trials.

Alternatively, there are “subjective” or “evidential” views, which treat probability as personal, a reflection of an individual’s credence, or degree of belief, about what is true or what will happen. An example is Bayesian probability, which emphasizes Bayes’ law, a mathematical theorem that tells us how to update our credences as we obtain new information. Bayesians imagine that rational creatures in states of incomplete information walk around with credences for every proposition you can imagine, updating them continually as new data comes in. In contrast with frequentism, in Bayesianism it makes perfect sense to attach probabilities to one-shot events, such as who will win the next election.
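
To keep the two readings side by side, here is a minimal sketch with a made-up coin: a frequentist estimate of the coin’s bias as the long-run frequency of heads, and a Bayesian credence about that same bias, updated by Bayes’ theorem from a uniform prior over a grid of candidate biases.

```python
import numpy as np

rng = np.random.default_rng(2)

true_bias = 0.5                       # a fair coin, unknown to the observer
flips = rng.random(5000) < true_bias  # True = heads

# Frequentist view: probability is the long-run frequency of heads.
for n in (10, 100, 5000):
    print(f"after {n:5d} flips, frequency of heads = {flips[:n].mean():.3f}")

# Bayesian view: probability is a degree of belief, updated by Bayes' theorem.
# Start with a uniform prior over candidate biases and update on each flip.
biases = np.linspace(0.01, 0.99, 99)      # candidate values of P(heads)
posterior = np.ones_like(biases)          # uniform prior (unnormalized)
for outcome in flips[:100]:               # update on the first 100 flips
    likelihood = biases if outcome else (1 - biases)
    posterior *= likelihood
    posterior /= posterior.sum()          # renormalize after each update

best = biases[np.argmax(posterior)]
print(f"posterior peak after 100 flips: P(heads) ~ {best:.2f}")
```

The frequency converges toward 0.5 as the trials accumulate, while the posterior is a full distribution of credences that sharpens around the same value; the two views agree on this toy coin while meaning different things by “probability.”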

In an Aeon article about Einstein’s rejection of unresolved randomness in any physical theory, Jim Baggott says this:

In essence, Bohr and Heisenberg argued that science had finally caught up with the conceptual problems involved in the description of reality that philosophers had been warning of for centuries. Bohr is quoted as saying: ‘There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.’ This vaguely positivist statement was echoed by Heisenberg: ‘[W]e have to remember that what we observe is not nature in itself but nature exposed to our method of questioning.’ Their broadly antirealist ‘Copenhagen interpretation’ – denying that the wave function represents the real physical state of a quantum system – quickly became the dominant way of thinking about quantum mechanics. More recent variations of such antirealist interpretations suggest that the wave function is simply a way of ‘coding’ our experience, or our subjective beliefs derived from our experience of the physics, allowing us to use what we’ve learned in the past to predict the future.

But this was utterly inconsistent with Einstein’s philosophy. Einstein could not accept an interpretation in which the principal object of the representation – the wavefunction – is not ‘real’.

Today, proponents exist for more than one model of the universe. There are models where probability is “fundamental and objective,” as Carroll says.

There is absolutely nothing about the present that precisely determines the future…What happens next is unknowable, and all we can say is what the long-term frequency of different outcomes will be.

In other theories, nothing is truly random and probability is entirely subjective. If we knew, not just the wave function, but all the hidden variables, we could predict the future exactly. As it stands, however, we can only make probabilistic predictions.

Finally there is the many-worlds resolution to the problem, which is Carroll’s favorite.

Many-worlds quantum mechanics has the simplest formulation of all the alternatives. There is a wave function, and it obeys Schrödinger’s equation, and that’s all. There are no collapses and no additional variables. Instead, we use Schrödinger’s equation to predict what will happen when an observer measures a quantum object in a superposition of multiple possible states. The answer is that the combined system of observer and object evolves into an entangled superposition. In each part of the superposition, the object has a definite measurement outcome and the observer has measured that outcome.

Everett’s brilliant move was simply to say, “And that’s okay” — all we need to do is recognize that each part of the system subsequently evolves separately from all of the others, and therefore qualifies as a separate branch of the wave function, or “world.” The worlds aren’t put in by hand; they were lurking in the quantum formalism all along.

I find it foolish to ignore that probability theory keeps pointing back at us. Christopher Fuchs, a physicist at the University of Massachusetts, is the founder of a school of thought dubbed QBism (for Quantum Bayesianism). In an interview published in Quanta Magazine, Fuchs explains that QBism goes against a devotion to objectivity “by saying that quantum mechanics is not about how the world is without us; instead it’s precisely about us in the world. The subject matter of the theory is not the world or us but us-within-the-world, the interface between the two.” And later:

QBism would say, it’s not that the world is built up from stuff on “the outside” as the Greeks would have had it. Nor is it built up from stuff on “the inside” as the idealists, like George Berkeley and Eddington, would have it. Rather, the stuff of the world is in the character of what each of us encounters every living moment — stuff that is neither inside nor outside, but prior to the very notion of a cut between the two at all.

The effectiveness of our thoughts on likelihood is astounding. Cognitive neuroscientists suggest that statistics is part of our intuition. They argue that we learn everything through probabilistic inferences. Optical illusions have been understood as the brain’s decision about the most likely source of a retinal image. Anil Seth, at the University of Sussex, argues that all aspects of the brain’s construction of our world are built with probabilities and inferences. The points in the geometry of Tononi’s Integrated Information Theory of consciousness are defined using probability distributions. Karl Friston’s free energy principle, first aimed at a better understanding of how the brain works, defines the boundaries around systems (like cells, organs, or social organizations) with a statistical partitioning – things that belong to each other are defined by the probability that the state of one thing will affect another. Uncertainty defines Claude Shannon’s information entropy and Max Tegmark’s laws of thermodynamics. It’s also interesting that a thought experiment, proposed by James Clerk Maxwell in 1871 and known as Maxwell’s demon, was designed to examine the question of whether or not the second law of thermodynamics is only statistically certain.

As Carroll sees it, “The study of probability takes us from coin flipping to branching universes.” So what’s me and what’s not me? Mathematics has a way of raising this issue over and over again. Maybe we are beginning to look to it for guidance.

The monad, autopoiesis and Christmas

If you were listening, the season brought the usual surge of Christmas music through all manner of electromagnetic transmission, wired and wireless, causing me to remember again my mild preoccupation with one tune in particular, namely – Do You Hear What I Hear? For the past few years I have found myself listening more closely to the lyrics of this piece because, for me, they created an image related to the many things I have written about mathematics and cognition. I decided this year to try to pin down my thoughts more clearly, and share them.

The song describes the ‘transfer of information,’ if you will, that moves from the wind to the lamb, from the lamb to a shepherd, from the shepherd to the king, and finally from the king to the people. It goes like this:

Said the night wind to the little lamb
Do you see what I see
Way up in the sky little lamb
Do you see what I see
A star, a star
Dancing in the night
With a tail as big as a kite
With a tail as big as a kite

Said the little lamb to the shepherd boy
Do you hear what I hear
Ringing through the sky shepherd boy
Do you hear what I hear
A song, a song
High above the trees
With a voice as big as the sea
With a voice as big as the sea

Said the shepherd boy to the mighty king
Do you know what I know
In your palace wall mighty king
Do you know what I know
A child, a child
Shivers in the cold
Let us bring him silver and gold
Let us bring him silver and gold

Said the king to the people everywhere
Listen to what I say
Pray for peace people everywhere
Listen to what I say
The child, the child
Sleeping in the night
He will bring us goodness and light
He will bring us goodness and light

The wind perceives and communicates what it sees to the lamb. The lamb hears the wind, as a song, a formulation, and somehow communicates what he hears to the boy. The boy then knows something, has a fully conscious perception, which he brings to the king (the one responsible for organizing the human world) and from there it is broadcast so that everyone knows.

I was raised Catholic, and so I remember the birth of Jesus described to us as the marriage of heaven and earth, which may be said to be the reconciliation of the eternal and the temporal, or the ideal and the instantiated. It’s the last of these that has gotten considerable attention from me, in these past many years, as I have worked to square conceptual reality with physical reality through a refreshed look at mathematics. And so the song got my attention because it suggests a continuum of knowing, from the wind to the king, and a oneness to the world of the physical and the divine. The idea that sensation and cognition are somehow in everything reminds me of the polymath and mathematician Leibniz’s monads, for one thing, and of cognition as understood by biologists Francisco Varela and Humberto Maturana. The rigor of Leibniz’s work in logic and mathematics, together with what he understood about the physical world and his faith in reason, led him to dissociate ‘substance’ from ‘material’ and to reason that the world was not built from passive material but from fundamental objects he called monads – simple mind-like substances equipped with perception and appetite. But the monad takes up no space, like a mathematical point. I wrote about these things in 2012 and made this remark:

All of this new rumbling about mathematics and reality encourages a hunch that I have had for a long time – that the next revolution in the sciences will come from a newly perceived correspondence between matter and thought, between what we are in the habit of distinguishing as internal and external experience, and it will enlighten us about ourselves as well as the cosmos. New insights will likely remind us of old ideas, and the advantage that modern science has over medieval theology will wane. I expect mathematics will be at the center of it all.

For Varela and Maturana, every organism lives in a medium to which it is structurally coupled, and so the organism can be said to already have knowledge of that medium, even if only implicitly. Living systems exist in a space that is both produced and determined by their structure. Varela and Maturana extend the notion of cognition to mean all effective interactions – action or behavior that accomplishes the continual production of the system itself. “All doing is knowing and all knowing is doing,” as they say in The Tree of Knowledge. I wrote about some of the implications of this idea last year.

There is certainly mystery in Christmas images, from the return of the life-giving presence of the sun on the solstice, to the generous red-suited giver of gifts who lives where there is no life, to the unexpected marriage of heaven and earth. The song Do You Hear What I Hear? has an interesting history. It was written in 1962 by Noel Regney and Gloria Shayne. Regney wrote the lyrics. He was a French-born musician and composer forced into the German army by Hitler’s troops during World War II. He became a member of the French underground and, while in the required German uniform, he collected information and worked in league with the French resistance. He moved to Manhattan in 1952 and continued his career as a composer. Although he once expressed that he had no interest in writing Christmas songs, amidst the distress of the Cuban missile crisis in October of 1962, he said that he was inspired to write the lyrics in question when he saw the hopeful smiles of two babies in strollers, in friendly exchange on a street in Manhattan.

I’m not arguing that my observation of the lyrics defends any particular religious perspective. I want, rather, to express the fact that I can’t help but notice that the song sits comfortably within world views once considered by a 17th century polymath, known for his development of the calculus, and by 20th century biologists whose work redefines life as well as our experience of reality! And there is value in taking note of unintended science-like perspectives in religious images. Even the notion of The Word in Christian literature, translated from the Greek logos, is replete with fundamental views of reality in Ancient Greek philosophy. For the Stoics, logos was reason both in the individual and in the cosmos. It was nature as well as God.

Religion and science have a common ancestor and may have a shared destiny.

Timeless geometry and what we say time is

Another article about physics and mathematics by Natalie Wolchover, published in both Wired and Quanta Magazine, got my attention because it began like this:

In late August, paleontologists reported finding the fossil of a flattened turtle shell that “was possibly trodden on” by a dinosaur, whose footprints spanned the rock layer directly above. The rare discovery of correlated fossils potentially traces two bygone species to the same time and place.

Cosmologist Nima Arkani-Hamed makes the connection:

Paleontologists infer the existence of dinosaurs to give a rational accounting of strange patterns of bones…We look at patterns in space today, and we infer a cosmological history in order to explain them.

I doubt my 12-year-old son has ever thought that the existence of dinosaurs is inferred. For him, the facts are clear. The dinosaurs are just not here anymore. But Arkani-Hamed’s observation caused a few things to go through my mind quickly. First I thought, this is cool – connecting a tactic in paleontology to one in physics. And then I realized how little thought I have given to how we have come to know so much about creatures whose lives occurred completely outside the range of our experience. We have fully life-like images of them, and treat their existence as an unquestionably known quantity. Thinking about the labor it took to transform fossil discoveries into these convincing images highlighted the need, as I see it, to make the labor of science as apparent to non-science audiences as the results of that labor have been. The creativity involved in all of our inquiries is as important to see as the outcomes of those inquiries.

As a species, it seems that we are very good at piecing things together. Some facet of our reasoning and cognitive skills is always on the hunt for patterns with which our intellect or our imagination will then build countless structures – from the brain’s production of visual images created by the flow of visual data it receives, to the patterns in our experience that facilitate our day-to-day navigation of our earthbound lives, to the patterns in the sky that hint at things that are far beyond our experience, and the purely reasoned patterns of science and mathematics. We use these structures to capture, or harness, things like the detail of astronomical events billions of light years away, or the character of particles of matter that we cannot see, or species of animals that we can never meet. The reach or breadth of these reasoned structures likely rivals the extent of the universe itself or, at least, our universe. I would argue that it is useful to reflect on how our now deep scientific knowledge is built on pattern and inference because, in the end, it is the imagination that has built these structures. By this I do not mean to discredit the facts. Rather, I mean to elevate what we think of the imagination and of abstract thought in general.

Wolchover’s article describes how Arkani-Hamed and colleagues have worked on schemes that use spatial patterns among astronomical objects to understand the origins of the universe. (Based on the paper, The Cosmological Bootstrap: Inflationary Correlators from Symmetries and Singularities). Physicists have considered simple correlated pairs of objects for some time.

The simplest explanation for the correlations traces them to pairs of quantum particles that fluctuated into existence as space exponentially expanded at the start of the Big Bang. Pairs of particles that arose early on subsequently moved the farthest apart, yielding pairs of objects far away from each other in the sky today. Particle pairs that arose later separated less and now form closer-together pairs of objects. Like fossils, the pairwise correlations seen throughout the sky encode the passage of time—in this case, the very beginning of time.

But cosmologists are also considering the possibility that rare quantum fluctuations involving three, four or more particles may have also occurred in the birth of the universe. These would create other arrangements, like triangular arrangements of galaxies, or objects forming quadrilaterals, or pentagons. Telescopes have not yet identified such arrangements, but finding them could significantly enhance physicists’ understanding of the earliest moments of the universe.

Wolchover’s article describes physicists’ attempts to access these moments.

Cosmology’s fossil hunters look for the signals by taking a map of the cosmos and moving a triangle-shaped template all over it. For each position and orientation of the template, they measure the cosmos’s density at the three corners and multiply the numbers together. If the answer differs from the average cosmic density cubed, this is a three-point correlation. After measuring the strength of three-point correlations for that particular template throughout the sky, they then repeat the process with triangle templates of other sizes and relative side lengths, and with quadrilateral templates and so on. The variation in strength of the cosmological correlations as a function of the different shapes and sizes is called the “correlation function,” and it encodes rich information about the particle dynamics during the birth of the universe.
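
Here is a toy sketch of that template procedure under my own drastic simplifications (a small two-dimensional random map standing in for the cosmic density field, periodic boundaries, and a couple of triangle templates rather than a survey of shapes); it is meant only to show what “measure the density at the three corners, multiply, and compare to the mean cubed” looks like operationally.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)

# A stand-in "density map": mean density 1 plus smoothed (spatially
# correlated) fluctuations. A real analysis uses a 3-D map of the cosmos.
n = 256
fluctuations = gaussian_filter(rng.standard_normal((n, n)), sigma=4, mode="wrap")
density = 1.0 + 0.1 * fluctuations / fluctuations.std()

def three_point(density, offset_b, offset_c):
    """Average of rho(x) * rho(x+b) * rho(x+c) over all positions x, for one
    triangle template with corners at x, x+b, x+c, minus the mean cubed."""
    rho_b = np.roll(density, shift=offset_b, axis=(0, 1))
    rho_c = np.roll(density, shift=offset_c, axis=(0, 1))
    return (density * rho_b * rho_c).mean() - density.mean() ** 3

# One triangle shape, measured at two sizes, as the quoted procedure
# describes repeating the scan with templates of different sizes.
print("small template:", three_point(density, (0, 8), (8, 0)))
print("large template:", three_point(density, (0, 20), (20, 0)))
```

The real estimators are more careful than the quoted summary (and than this toy), but the basic move is the same: slide a fixed shape over the map, average the products at its corners, and record how the result varies with the shape and size of the template.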

This is pretty ambitious. In the end, Arkani-Hamed and colleagues found a way to simplify things. They borrowed a design from particle physicists who found shortcuts to analyzing particle interactions using what’s called the bootstrap.

The physicists employed a strategy known as the bootstrap, a term derived from the phrase “pick yourself up by your own bootstraps” (instead of pushing off of the ground). The approach infers the laws of nature by considering only the mathematical logic and self-consistency of the laws themselves, instead of building on empirical evidence. Using the bootstrap philosophy, the researchers derived and solved a concise mathematical equation that dictates the possible patterns of correlations in the sky that result from different primordial ingredients.

Arkani-Hamed chose to use the geometry of “de Sitter space” to investigate various correlated objects because the geometry of this space looks like the geometry of the expanding universe. De Sitter space is a 4-dimensional sphere-like space with 10 symmetries.

Whereas in the usual approach, you would start with a description of inflatons and other particles that might have existed; specify how they might move, interact, and morph into one another; and try to work out the spatial pattern that might have frozen into the universe as a result, Arkani-Hamed and Maldacena translated the 10 symmetries of de Sitter space into a concise differential equation dictating the final answer.

It is significant that there is no time variable in this analysis. Time emerges within the geometry. Yet it predicts cosmological patterns that provide information about the rise and evolution of quantum particles at the beginning of time. This suggests that time, itself, is an emergent property that has its origins in spatial correlations.

It should be clear that confidence in the geometric calculations is coming from how they square (no pun intended) with empirical measurements that we do have.

By leveraging symmetries, logical principles, and consistency conditions, they could often determine the final answer without ever working through the complicated particle dynamics. The results hinted that the usual picture of particle physics, in which particles move and interact in space and time, might not be the deepest description of what is happening. A major clue came in 2013, when Arkani-Hamed and his student Jaroslav Trnka discovered that the outcomes of certain particle collisions follow very simply from the volume of a geometric shape called the amplituhedron.

I wrote about this discovery in March.

Arkani-Hamed suspects that the bootstrapped equation that he and his collaborators derived may be related to a geometric object, along the lines of the amplituhedron, that encodes the correlations produced during the universe’s birth even more simply and elegantly. What seems clear already is that the new version of the story will not include the variable known as time.

An important aspect of the issues being discussed is the replacement of time-oriented functional analyses with timeless geometric ones. As I see it, this raises questions broader than how the structure of the universe itself is mathematical. This work highlights the relationships between physical things, abstract or ideal objects, and the constraints of logic. It says as much about us, and what we do, as it says about the origins of the universe or what we say that time is. I’ll stress, as I often do, that these issues are relevant to people, not just to science. This shift from one kind of organization of concepts (dynamic change) to another (geometric relationships) should encourage us to consider where these conceptual structures are emerging from and how they are connecting us to our reality.

I’m convinced that paying more attention to how we participate in building our reality will clarify quite a lot.

Building objects from relations: physics and the monad

Quanta Magazine recently published an interview with physicist and author Lee Smolin. Smolin talked about his most recent book, Einstein’s Unfinished Revolution: The Search for What Lies Beyond the Quantum, and the influence that Gottfried Leibniz has had on the perspective that Smolin most recently adopted. The seventeenth-century polymath Gottfried Wilhelm Leibniz, known for having developed a system of infinitesimal calculus, is certainly a major contributor to the kind of thinking that has produced the modern sciences. And yet the rigor of his thought, and his careful examination of mechanistic theories, led him to deduce a metaphysical underpinning of reality.

The mathematical notions of infinity and continuity guide a great number of Leibniz’s observations. But Smolin makes a particular reference to Leibniz’s metaphysical account of the whole of reality, his Monadology. It would seem unlikely that a modern physicist would choose this path, but I would argue only because the path is under appreciated. Here’s a little of how the reasoning goes:

As Leibniz saw it, there are no discontinuous changes in nature. The observed absence of abrupt change suggested to him that all matter, regardless of how small, has some elasticity. Since elasticity requires parts, a truly singular thing, with no parts, would not be elastic. That would mean that all material objects, no matter how small, would have to be compounds or amalgams of some sort. If not, they could produce abrupt change. Now anything simple and indivisible is necessarily without extension, or dimension, like a mathematical point. In other words, it wouldn’t take up any space. Leibniz was convinced that this non-material fundamental substance had to exist. If it didn’t, then everything would be an aggregate of substances, and every part of an aggregate would itself be an aggregate, allowing for the endless divisibility of everything and making it impossible to identify anything. According to Leibniz, the universe of extended matter is a consequence of the interaction of these simple non-material substances known as monads, or simply of the relations among these monads.

But it is not the non-material nature of a monad that Smolin keys on. It is more Leibniz’s conviction that there is no fundamental space within which the elements of the universe exist, together with the fact that it is relations among the actions of fundamental unities that produce the universe we experience. Here’s what Smolin says:

I first read Leibniz at the instigation of Julian Barbour, when I was just out of graduate school. First I read the correspondence between Leibniz and Samuel Clarke, who was a follower of Newton, in which Leibniz criticized Newton’s notion of absolute space and absolute time and argued that observables in physics should be relational. They should describe the relations of one system with another, resulting from their interaction. Later I read the Monadology. I read it as a sketch for how to make a background- independent theory of physics. I do look at my copy from time to time. There is a beautiful quote in there, where Leibniz says, “Just as the same city viewed from different directions appears entirely different … there are, as it were, just as many different universes, which are, nevertheless, only perspectives on a single one, corresponding to the different points of view of each monad.” That, to me, evokes why these ideas are very suitable, not just in physics but for a whole range of things from social policy and postmodernism to art to what it feels like to be an individual in a diverse society. But that’s another discussion! (Emphasis added)

The key seems to be in what the interviewer refers to as Smolin’s slogan: “The first principle of cosmology must be: There is nothing outside the universe.” Smolin agrees with Leibniz that space, rather than being some thing within which bodies are located and move, is a system of relations holding between things or, in his terms, ‘an order of situations.’ Space is created by the arrangement of matter, as a family tree is created by the arrangement of one’s ancestors (a comparison Leibniz himself made). Space comes into existence only when the coexistent parts of the universe come into existence. It seems that Smolin also finds value in Leibniz’s portrayal of the individual monad as something that represents the universe from one of all possible points of view.

Leibniz described monads as complete in the sense that they cannot be changed by anything outside of themselves, nor can they influence each other. It is an inner, pre-established solidarity that defines their relationship to each other. Their completeness requires, however, that they hold within themselves, perhaps as potentialities, all of the properties they will exhibit in the future, as well as some trace of all of the properties that they exhibited in the past. This brings timelessness to the fundamental level of our reality, to which Leibniz also attributes a preexisting harmony. The monads’ singularity also requires that each of them, somehow, mirrors or reflects the entire universe and every other monad.

It may not be the nature of the monad that has Smolin’s attention. But he has chosen to work on a theory about processes rather than things, the “causal relations among things that happen, not the inherent properties of things that are.”

The fundamental ingredient is what we call an “event.” Events are things that happen at a single place and time; at each event there’s some momentum, energy, charge or other various physical quantity that’s measurable. The event has relations with the rest of the universe, and that set of relations constitutes its “view” of the universe. Rather than describing an isolated system in terms of things that are measured from the outside, we’re taking the universe as constituted of relations among events. The idea is to try to reformulate physics in terms of these views from the inside, what it looks like from inside the universe.

There are so many reasons that I am intrigued by Smolin’s choice. It’s beautifully imaginative. But I’ve always been reassured by Leibniz’s view of things – an unexpected amalgam of rigorous formal reasoning, the conceptual possibilities of mathematics, what was known in physics, and the way that God was understood – all brought to bear in an effort to comprehend everything. Leibniz characterizes space and time as beings of reason; they are abstractions, or idealizations (like the geometric continuum) and, as such, are found to be continuous, homogenous, and infinitely divisible. Leibniz was intent on avoiding the blunder of a mind/body duality. His monadology is a unique synthesis of things that sound like biological notions, along with physical observations, and mathematical abstractions. Smolin’s choice to explore Leibniz’s map of the world with the observations of modern physics sounds very promising.

Truth, time, and mathematics

A special September issue of Scientific American is organized around questions about what we seem to know, and how or why we may be deceived about the nature of reality. The issue has the title Truth, Lies and Uncertainty. No doubt the editors are inspired, to some extent, by the challenges to the truth that are happening on a daily basis in our social and political lives. But I was also struck by the close connection between the first three articles in Part 1 of the issue (under the heading Truth) and the questions explored here at Mathematics Rising. Part 1 begins with a piece by science writer George Musser, who takes a look at some of the unexpected ways that physicists try to come to terms with the counter-intuitive realities that their theories describe. Among the many interesting conundrums he points to are these:

… according to several mathematical theorems, nothing can be localized in the way that the traditional concept of a particle implies
…Fields, too, are not what they appear to be. Modern quantum theories long ago did away with electric and magnetic fields as concrete structures and replaced them with a hard-to-interpret mathematical abstraction
…The deeper physicists dive into reality, the more reality seems to evaporate.

And, he asks:

…What differentiates physical from mathematical objects or a simulation from the original system? Both involve the same sets of relations, so there seems to be nothing to tell them apart.

One can argue that it is physics’ increasing reliance on mathematics that causes reality to evaporate the way that Musser describes. He does discuss some of the ideas that physicists use to reconcile their mathematics with their reality. One of these is a perspective called QBism, an interpretation of quantum mechanical theory that acknowledges and addresses the role played by the scientist in the development of theory. Also from Musser:

Immanuel Kant argued that the structure of our minds conditions what we perceive. In that tradition, physicist Markus Müller of the Institute for Quantum Optics and Quantum Information in Vienna and cognitive scientist Donald Hoffman of the University of California, Irvine, among others, have argued that we perceive the world as divided into objects situated within space and time, not necessarily because it has this structure but because that is the only way we could perceive it. The reality we experience looks the way that it does because of the nature of the perceiving agent.

In the same Part 1 is a piece, written by mathematician Kelsey Houston-Edwards, that addresses the creation vs. discovery argument about mathematics, which is, essentially, the question of whether or not mathematics exists, in some way, independently of human experience. She suggests a useful image:

This all seems to me a bit like improv theater. Mathematicians invent a setting with a handful of characters, or objects, as well as a few rules of interaction, and watch how the plot unfolds. The actors rapidly develop surprising personalities and relationships, entirely independent of the ones mathematicians intended. Regardless of who directs the play, however, the denouement is always the same. Even in a chaotic system, where the endings can vary wildly, the same initial conditions will always lead to the same end point. It is this inevitability that gives the discipline of math such notable cohesion. Hidden in the wings are difficult questions about the fundamental nature of mathematical objects and the acquisition of mathematical knowledge.

I like this image because I find it consistent with what does seem to happen in the research done by mathematicians. But it also suggests a focus for the questions we have about the fundamental nature of mathematical objects, that focus being the significance and nature of the interaction of the thoughts we put forward.

The last piece of this triad is an article by cognitive and computational neuroscientist Anil K. Seth. Seth’s work also proposes that our experience is not really an indication of how things really are, but more what our bodies make of the things that are. His central idea is that perception is a process of active interpretation that tries to predict the sources of signals originating both outside and within the body.

The central idea of predictive perception is that the brain is attempting to figure out what is out there in the world (or in here, in the body) by continually making and updating best guesses about the causes of its sensory inputs. It forms these best guesses by combining prior expectations or “beliefs” about the world, together with incoming sensory data, in a way that takes into account how reliable the sensory signals are.
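
A minimal sketch of that “best guess” arithmetic in the simplest possible setting (my own toy, not Seth’s model): a Gaussian prior belief about some quantity is combined with a noisy sensory measurement, and the reliability of each, its inverse variance, determines how much it counts.

```python
# Toy predictive-perception update: combine a prior belief with a noisy
# sensory signal, weighting each by its reliability (inverse variance).

def combine(prior_mean, prior_var, sensory_mean, sensory_var):
    """Posterior mean and variance for a Gaussian prior and Gaussian likelihood."""
    prior_precision = 1.0 / prior_var
    sensory_precision = 1.0 / sensory_var
    posterior_var = 1.0 / (prior_precision + sensory_precision)
    posterior_mean = posterior_var * (prior_precision * prior_mean +
                                      sensory_precision * sensory_mean)
    return posterior_mean, posterior_var

# The brain expects an object at position 0.0, but the senses report 2.0.
# When the sensory signal is reliable, the percept moves most of the way
# toward the data; when it is unreliable, the prior expectation dominates.
print(combine(0.0, 1.0, 2.0, 0.1))   # reliable senses: estimate close to 2.0
print(combine(0.0, 1.0, 2.0, 10.0))  # noisy senses: estimate stays near 0.0
```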

For Seth, the contents of our perceived worlds are what he calls controlled hallucinations, the brain’s best guesses about the unknowable causes of the sensory signals it receives.

What I find interesting about this discussion of truth is that no one is looking directly at what mathematics is doing, or at what mathematics might have to say about the relationship between the brain and the world in which it is embedded through the body. Mathematics has the peculiar character of existing in both the perceiver and the perceived. And maybe this isn’t really peculiar. But there is a reason why mathematics is always crucial to correcting the deceptions present in our experience (as it has done with general relativity and quantum mechanics). In physics, mathematics does the heavy lifting of defining the data, giving it meaning, and finding the patterns in what we see of particles, fields, their interactions, and everything else. And in cognitive science we now see the mathematical nature of the brain processes that construct our reality. But what I hope to see in mathematics is not just about science. I’m convinced that mathematics can help us see how thought and physical reality are not only related, or interacting, but are somehow the same stuff. I suspect that the physical world is full of thoughts, and that ideas are as physical as flowers. But I don’t think we’re clear yet on what physicality really is. Mathematics may be the thing that cracks open the stubborn duality in our experience that is obscuring our view.

Another recent article, unrelated to this truth discussion, added a point to my collection of data about the profundity of what mathematics seems to tell us about ourselves. This past February, Quanta Magazine reprinted an article from Wired.com about possible breakthroughs in understanding how the brain creates our sense of time and memory. The brain processes that create memory have been difficult to identify. For neuroscientists Marc Howard and Karthik Shankar, memory is a display of sensory information laid out over time, in much the same way that a visual image is a display of information laid out over space. But neurons do not directly measure time the way that some neurons measure wavelength or brightness, or even verticality. So Howard and Shankar looked for a way (i.e., equations) to describe how the brain might encode time indirectly.

…it’s fairly straightforward to represent a tableau of visual information, like light intensity or brightness, as functions of certain variables, like wavelength, because dedicated receptors in our eyes directly measure those qualities in what we see. The brain has no such receptors for time. “Color or shape perception, that’s much more obvious,” said Masamichi Hayashi, a cognitive neuroscientist at Osaka University in Japan. “But time is such an elusive property.” To encode that, the brain has to do something less direct.

It now looks like the way the brain accomplishes this resembles a fairly familiar strategy in mathematics called the Laplace transform. The Laplace transform translates difficult equations into less difficult ones by replacing the somewhat complex operation of differentiation with the very familiar operation of multiplication. It is a mapping that turns relations in time and space, described by derivatives, into simpler algebraic relations. Once the algebraic problem is solved, there are mechanisms for translating those solutions back into solutions of the original differential equations.
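
For reference, the standard definition of the transform and the property being leaned on here can be written as follows (these are textbook formulas, not taken from the article):

\[
F(s) \;=\; \mathcal{L}\{f\}(s) \;=\; \int_0^\infty f(t)\,e^{-st}\,dt,
\qquad
\mathcal{L}\{f'\}(s) \;=\; s\,F(s) \;-\; f(0).
\]

Differentiation with respect to t becomes multiplication by s, the resulting algebraic equation is solved, and an inverse transform carries that solution back to a solution of the original differential equation. As I read the Howard and Shankar proposal, populations of neurons with different decay rates play something like the role of different values of s.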

This model for understanding the neurological components of time, developed by Howard and Shankar, began with just the mathematics; it was a purely theoretical model. But the possibility was demonstrated in the lab work of another neuroscientist, Albert Tsao, who was working independently of Howard and Shankar. Tsao found, in rats, that firing frequencies for certain neurons increased at the beginning of an event (like releasing a rat into a maze to find food) and diminished over the course of the event. At the start of another trial the firing increased again and diminished again, in such a way that each trial could be identified by this pattern of rising and decaying activity.

When Howard heard about Tsao’s results, which were presented at a conference in 2017 and published in Nature last August, he was ecstatic: The different rates of decay Tsao had observed in the neural activity were exactly what his theory had predicted should happen in the brain’s intermediate representation of experience. “It looked like a Laplace transform of time,” Howard said — the piece of his and Shankar’s model that had been missing from empirical work.

As neuroscientist Max Shapiro sees it:

It’s this coding by parsing episodes that, to me, makes a very neat explanation for the way we see time. We’re processing things that happen in sequences, and what happens in those sequences can determine the subjective estimate for how much time passes.

What I think is important here is that the strategy we developed with the Laplace transform is a strategy the body also employs. This happens all the time, but this seems like a particularly unexpected and intimate instance of it. Mathematics, I expect, is pure structure that exists on the edge of everything that we are and all that there is.

Soul Searching

The Closer to Truth team recently did a series of interviews addressing the following question: Do persons have souls? Interviewees included philosopher and cognitive scientist Daniel Dennett; author, medical doctor, and holistic healer Deepak Chopra; philosopher Eleonore Stump; Warren Brown, Director of the Travis Research Institute at Fuller Theological Seminary and Professor of Psychology; psychologist and parapsychologist Charles Tart; author and religious studies scholar Huston Smith; and cognitive linguist and author George Lakoff.

What struck me about all of these interviews was how traditionally everyone seemed to address the question, each from their own philosophical, theological, or psychological perspective. George Lakoff points to the way the brain creates metaphors that produce the sense that there is a separation between the experiencing subject and the self. Warren Brown suggests that the notion of the soul might be better understood with the word person, a word which seems to apply to both the physical and the idea-driven aspects of human experience. Charles Tart points out that individuals have seen the part of themselves that is not physical in out-of-body and near-death experiences. Deepak Chopra reminds us that Schrödinger once said that consciousness is a singular that has no plural. And he makes useful corrections to some of our habits of thought. He suggests, for example, that the personal aspect of consciousness is like a wave in the ocean, a pattern of movement; it’s real, but it disappears. He also suggests that our minds have produced evidence for a multiverse, so why are we so suspicious of the notion of eternity? The bottom line for Chopra is that the ultimate truth is consciousness, and we cannot fully express it. In Part One of the series on the soul, philosopher and parapsychologist Stephen Braude made clear that he is not just an anti-physicalist but an anti-mechanist. “There is one kind of stuff,” he says, that can be looked at through any number of conceptual grids. He levels the playing field. Every set of descriptive terms leaves something out. No description of nature can be complete.

I’ve spent a significant amount of time looking for philosophical shifts in physics and biology, as well as unexpected developments in, or applications of, mathematics. I’ve tried to identify innovative efforts in these areas because they often lend support to my own ideas about the nature and value of mathematics. So many novel approaches to biology and physics have important implications for how we might think about mind, consciousness, spirit, and the soul. Yet none of these appeared in the Closer to Truth inquiry. When Plato was invoked, there was no acknowledgement that his eternal world of ideals is tied to our empirical study of our surroundings through mathematics – an observation that warrants some thought.

Listening to the interviews also made me more aware that a fairly provocative and well-known idea has still had only modest impact on how we see ourselves and our world. In 1987 biologists Humberto Maturana and Francisco Varela formalized a new approach to biology and to cognition in particular. It is a perspective defined by the notion of autopoiesis, the self-creating nature of life itself, and the more generalized notion of cognition that this perspective brings about. The key to their strategy is to begin with the understanding that our experience is tied to our individual structure in a binding way. From their point of view, what we experience is due more to our own structure than to what exists around us. Maturana and Varela make their case in the book, The Tree of Knowledge: The Biological Roots of Human Understanding. From their book:

The experience of anything out there is validated in a special way by the human structure, which makes possible “the thing” that arises in the description.

The seeds of these ideas appear in The Neurophysiology of Cognition, an article published by Maturana in 1969. In that article he raises an unexpected question: Does cognition just transcribe, for us, the truth of the world around us, or is it a biological phenomenon whose nature we do not actually understand? For most of us, our own immediate sense of what we seem to know feels like the simple gathering of information from the world around us. We believe that the information we gather is out there, independent of us. If we take this as our starting point, Maturana argues, then questions about cognition will be mostly concerned with how it works and how to use it. But Maturana takes a step back from this. For him, cognition itself is the unknown. As Maturana sees it, the question we should be asking is, “What kind of biological phenomenon is the phenomenon of cognition?” What is it doing? This is broader than even a question about how the mind is related to the brain. If this question is our starting point, the nature of a reality that is independent of us becomes fairly difficult to discern, because we are fully and dynamically embedded in our reality. And since we find mathematics in brain processes themselves, this sets the stage for the possibility that mathematics, itself, is something we participate in rather than something we produce. The effect of this embeddedness would challenge us to be more rigorous in all of our inquiries, whether physical or metaphysical, because we are never fully independent of what we see.

Some of the potential in the notion of autopoiesis appears in the work of Karl Friston, a neuroscientist who is known for his contributions to neuroimaging technology but who, more recently, is receiving a lot of attention because of a theoretical framework he has proposed to describe all living systems. His idea, called the free energy principle, is already enjoying multi-disciplinary application. The free energy principle doesn’t build directly on autopoiesis, but it shares some of its most fundamental concepts.

In particular, in a video produced by Serious Science, after a brief account of the main points of the free energy principle, Friston concludes that we are all in the game of garnering information that maximizes the evidence of our own existence. And so, he adds, brain structure speaks exactly to the causal structure of the world we inhabit.
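
One standard way of writing the quantity at the center of the free energy principle (a textbook formulation, not taken from the video) uses the variational free energy F, where s stands for sensory states, ψ for their hidden causes, q(ψ) for the organism’s internal estimate of those causes, and p for its generative model:

\[
F \;=\; D_{\mathrm{KL}}\!\bigl[\,q(\psi)\,\big\|\,p(\psi \mid s)\,\bigr] \;-\; \ln p(s) \;\ge\; -\ln p(s).
\]

Because the divergence term is never negative, F is an upper bound on surprise, the negative log evidence for the organism’s own model, so minimizing free energy amounts to maximizing the evidence of one’s own existence, in Friston’s phrase.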

The circularity in these perspectives should have a significant effect on our ideas about mind, consciousness, and soul. I’m not suggesting that they make the inquiry meaningless. On the contrary, they open up the inquiry and require that we be more careful and more creative. They make it more difficult and, I expect, more interesting.

A mathematician’s playground

Recently I had the opportunity to listen to Vered Rom-Kedar give a public lecture entitled Billiard is not just a game. Until now, I hadn’t thought much about this expanding branch of mathematics but, for me, the lecture highlighted some of the reasons I find mathematics so captivating, and it encouraged me to keep going with my own exploration. In the book Geometry and Billiards, Serge Tabachnikov introduces billiards in this way:

Mathematical billiards describe the motion of a mass point in a domain with elastic reflections from the boundary. Billiards is not a single mathematical theory… it is rather a mathematician’s playground where various methods and approaches are tested and honed. Billiards is indeed a very popular subject…

In her public lecture Rom-Kedar started at the beginning. She described the familiar motion of billiard balls as they hit the sides of a billiards table. Once set in motion, a billiard ball will move along a straight line with a constant speed until it hits the side of the table. Its path, after it hits the side, is subject to a familiar law about the reflection of light, specifically, that the angle of incidence equals the angle of reflection. The billiard ball obeys the same law. If it happens to hit the other side head-on it will return to the first side along the same path.

Rom-Kedar then asked the first scientific question: What would happen if the ball just kept moving, traveling in straight lines as it hit side after side, for an infinitely long time, each time obeying that law of reflection? Would the ball eventually come arbitrarily close to every point on the table? As it turns out, the answer is yes for some trajectories and no for others. On a rectangular table, when the slope of the ball’s initial direction is rational (relative to the proportions of the table), a periodic orbit gets locked in, and the periodic repetition of paths will never allow the ball to cover the whole table. But when that relation is irrational, the paths are ergodic, i.e. they come arbitrarily close to every point of the table. These idealized billiard balls are points that move without friction, but in every other way their behavior is the same as that of an ordinary billiard ball. I would suggest that there is already something interesting about the correspondence between the rationality of a geometric measure and the action of the ball. Why would there be such a correspondence? It’s like seeing something about numbers through the back of a mirror.
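
A small computation makes the rational/irrational split easy to see. The sketch below is my own illustration (the function names and numbers are hypothetical, not from the lecture); it uses the standard unfolding trick, in which reflections in the unit square correspond to straight-line motion on the plane folded back by a triangle-wave map.

import math

def fold(u):
    """Fold an unbounded coordinate back into [0, 1] (triangle wave with period 2)."""
    u = u % 2.0
    return u if u <= 1.0 else 2.0 - u

def billiard_position(x0, y0, dx, dy, t):
    """Position at time t of a point bouncing in the unit square, via unfolding."""
    return fold(x0 + dx * t), fold(y0 + dy * t)

# Launch from (0.2, 0.3) with horizontal speed 1 and two different slopes.
# Rational slope 1/2: the orbit closes up (here with period 4) and repeats forever.
# Irrational slope 1/sqrt(2): the orbit never exactly repeats; it is dense in the square.
for label, slope in [("slope 1/2", 0.5), ("slope 1/sqrt(2)", 1 / math.sqrt(2))]:
    x, y = billiard_position(0.2, 0.3, 1.0, slope, t=4.0)
    print(f"{label}: start (0.2, 0.3), position at t=4 is ({x:.4f}, {y:.4f})")

Run as is, the first trajectory returns exactly to its starting point while the second does not, and letting t grow only widens the difference.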

As it turns out, periodic behavior is fairly rare; ergodic behavior is far more common. There’s a nice narrative about various approaches to this specialization in a 2014 Plus Magazine article by Marianne Freiberger.

In the 1980s mathematicians proved that for the vast majority of initial directions the trajectory will be much wilder: not only will it not retrace its steps, but it will eventually explore the whole of the table, getting arbitrarily close to every point on it. What is more, a typical trajectory will visit each part of the table in equal measure: if you take two regions of the table whose areas are equal, then the trajectory will spend an equal amount of time in both. This behaviour is a consequence of billiards being ergodic. By “vast majority” mathematicians mean that if you pick a direction at random, it will almost certainly behave in this ergodic way.

The absence of a pattern in ergodic behavior makes it very difficult to predict where the ball, or point, might be after some specified amount of time. A computer program could run the paths fast enough to see what happens, but in true chaotic fashion a very slight change in the direction of the initial trajectory will dramatically change the ball’s later positions. But, as Freiberger explains, because many dynamic physical systems are ergodic, ergodicity does give us a handle on something other than the position of a particular point over time. Rather than tracing the path of a single point,

you can accurately predict what proportion of its time it spends in a certain region of the table. If it’s a gas you are looking at, then you might not be able to say exactly where its many constituent molecules are at any given moment, but you can predict things like its temperature or pressure. So, as chaotic systems go, ergodicity is actually a good thing.
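
In symbols, the property being described can be written as follows (a standard statement of equidistribution for an ergodic trajectory, not a formula from Freiberger’s article): for a region A of the table and a typical trajectory x(t),

\[
\lim_{T \to \infty} \frac{1}{T}\int_0^T \mathbf{1}_A\bigl(x(t)\bigr)\,dt \;=\; \frac{\operatorname{area}(A)}{\operatorname{area}(\text{table})},
\]

so the fraction of time the trajectory spends in A is just A’s share of the table, even though the trajectory’s position at any particular moment is effectively unpredictable.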

As always happens, mathematicians hunt for all of the generalities associated with all of the imagined, ideal possibilities, and changing the shape of the table introduces a lot of them. Instead of a rectangle, the table could be triangular, hexagonal, or L-shaped. It could be round or elliptical. Rom-Kedar said that with these variations, questions about what will happen become “more delicate.” There are many more periodic trajectories in curved figures like circles and ellipses, and most of those trajectories do not explore all of the table.

It is remarkable that billiard models effectively address many phenomena in the physical sciences that are already described by alternative mathematical models, as well as open questions in mathematics, even in number theory. Physicists use them as close approximations of particle forces and motion, and they are relevant to any system exhibiting chaotic behavior. And, to be clear, billiard models are not restricted to objects on the plane; they have been developed on various surfaces, including Riemann surfaces.

There is something beautiful about all of this. An observation of a very specific and pretty limited physical event (a billiard ball on a table) inspires the thoughtful exploration of imagined ideals that involve infinite times and are not limited by physicality. These abstractions are a product of looking through the physical situation to the endless possibilities captured by ideals. Then these thorough investigations of purely idealized possibilities become a way to look at a surprising number of unrelated physical (and mathematical) phenomena. How does the human intellect manage this? And what motivates us to do things like this? It’s beautiful and fascinating.