Categories

Other kinds of coding

Had I not studied mathematics, I would probably never have written the essay that follows. I wrote it 17 years ago but I decided to post it this holiday season. I hope you enjoy it.

We were on our way home from daycare on a dark December evening. “Gook, mommy!” my two-year-old daughter yelled out, startled and excited. She spotted a house, unexpectedly lit around the edges with white Christmas lights. “Wow,” she sighed when we passed the next one. Trees and bushes had a new trick. “Pri-tee,” she finally said with a big smile. “Yes, Aiyanna,” I said, “pretty.” Then came the full recognition that she didn’t have any Christmas ideas yet, that my husband and I were going to be the source of her Christmas story. Could I make it a good story? I never really liked Santa Claus. And while I love the birth-of-a-savior-in-a-barn story, December 25th is not actually Jesus’ birthday. What is it about this ancient holiday that’s just true, that makes it so important? What are we all so happy about?

The next weekend, we took a shortcut to the imagination. We went to the movies and saw Polar Express. The dark theater closed my daughter’s eyes to the regular world, and mesmerized her with the rich, large, loud stimuli of some other one. I didn’t feel her move the whole length of the film. Still lacking the strength (or the need) to fully control her attention, she freely took that ride to the North Pole, and she loved it. When it was over, I was feeling pretty good myself and wondered what, exactly, had soothed me. An icy train ride in the dark with some interesting kids, a generous parental conductor, a ghost, caribou, wolves, eagles, a convincingly booming town called the North Pole, and Santa’s spectacular disappearance into the dark cold sky – all of these images woke me up, and made my mind eager to play.

Having been a student of psychology and then mathematics, I love the most abstract and general of ideas, big ideas that easily capture what look like unrelated specifics. I like finding what mathematics calls equivalence classes in my regular experience. Equivalence classes are groups of things that are the same from a particular point of view. I can say, for example, that a coffee mug and a donut are the same thing because they both have only one hole in their otherwise smooth contour. There is a branch of mathematics that contains exactly that idea. A particular kind of abstraction tells us, not that the mug and the donut are similar things, but that they are the same thing. They are topologically equivalent.

The weighty, loud, steaming Polar Express easily muted the noise of my day-to-day list of things to do, and broke open space for blending ideas. The film was a dream, a really good dream. It was the way the dream was instructing the dreamer that caught my attention. It looked like one of Jung’s big dreams. These, this adventurous psychologist proposed, come not from a personal unconscious but from a collective human unconscious. They usually happen, he thought, during critical periods in our lives – like puberty, middle age, or times near death. They’re there to inform us, to give us insight, to break through our tiny me-centered awareness, and make more room (something I believe mathematics always does).

When I gave this movie dream the import Jung proposed, the story on the screen triggered Christmas image after image in my memory. I skipped easily from thought to thought and enjoyed a sensory pleasure not unlike what you might feel watching the successful fall of an elaborate domino construction. There was a way to speak to Aiyanna about this holiday that could deepen with her development instead of wither away (as often happens when our world begins to shrink under the pressure of the rationalizations of adulthood). This holiday celebrated unexpected, unexplainable life.

I kept thinking about Santa in the car on the way home, and maybe I saw him for the first time. He rises out of the best of our imagination, the part that gives the hazy sensations of our intuition some form – a face maybe, a voice, and a story. This bearded icon has been through his share of transformations, but he settled, at some point, into the red-suited reindeer keeper we know, and took up residence at the North Pole – the coldest of places, a barren place in fact, where we say no life is possible. Yet there he lives, with an odd devotion to us.

His moment with us is one of those midnight moments where we split apart our time-ordered days and give him the space to be everywhere in an instant. Out of the dark he flies. He laughs loud, walks through every single house in the world, and even eats in some of them. He gives us stuff, wants us to be happy, to not worry about the things that don’t matter. He’s alive after all – “ho, ho, ho!” We expect him to return year after year, like the sun’s life-giving power. We squint to find his silhouetted image across the moon, near the stars. But we’re not permitted to celebrate until we see the evidence of his return. He is our hope and the answer to our hope.

On the solstice, life hopes for life, and life celebrates life’s endurance. Our usual worries about small things, and our self-centered anxieties grow like bacteria – fast and easy – slowly killing our simple respect for the world that gives us life. It’s hard to control our worrisome nature. But we once took note of the sun’s retreat, and with the threat of approaching darkness, the vulnerability of our existence was exposed. On the edge of the sun’s availability, on the shortest of days, life looks at itself, and its reliance on a star. Somehow, out of the dark, cold space beyond our sky comes our warm-blooded selves. We will never find the reason we came to be, nor any definitive assurance that we will continue to be. But, with this admonition from our sun, we glimpse how remarkable our living is.

Placing Jesus’ birth on December 25 (the birthday of more than one pagan sun god) hooks stories up – the new deliverer of life is laid over our humble devotion to the sun. The focus of the new story is a man who says things like, “I am the way and the life.” What life is this? Lining up the birth of this man with our ancient devotion must be telling us where to start (regardless of the political motives satisfied by doing this). And after a couple of thousand years we still struggle with the rest of the narrative. This man is the myth, the hero and the god, in our history, in real time. He’s so fully alive, we’re told, that death loses its hold on him, so intimately tied to life’s source he becomes that source – he is the fullness of life in what has been called the fullness of time. We don’t really understand what these things mean, but churches tell this story year after year, through our thousands of trips around the sun.

Certainly, all of these Christmas images are manipulated, even contaminated by our greedy, power grabbing habits. But, just like the blood that can do what it’s there to do, despite the presence of things like alcohol, nicotine or some other drug, these images, despite our frequent manipulation of them, continue to do what they do. They continue to live.

Jung once said that consciousness itself is still in an experimental state (being a relatively recent product of evolution). He didn’t think we should rely on it so much. We rarely challenge its point of view because it seems to work so well, to give us so much control. But I trust dreams the way I trust my heart to beat or my lungs to take what I need from the air. They often direct me, like the crew of that mighty Polar Express, so that I can see better.

My husband was traveling just before Christmas. “Who’s coming home next week?” I would periodically ask Aiyanna. “Daddy!” she would say quickly and crisply. Then I would say, “And who’s coming after that?” Her eyes always widened and took hold of mine. They jittered a bit, excited, inquiring and uncertain. Hardly knowing what she was saying, she’d whisper the news. “Santa comin.”

Optimism

I began a post in 2013 by recognizing something that David Deutsch said in a TED talk in 2005. I have referred back to it many times since, and here I will do it again. But this time I would like to present it more completely. It’s a beautiful articulation of something that’s just true:

Billions of years ago and billions of light-years away, the material at the center of a galaxy collapsed towards a supermassive black hole. And then intense magnetic fields directed some of the energy of that gravitational collapse, and some of the matter, back out in the form of tremendous jets, which illuminated lobes with the brilliance of — I think it’s a trillion — suns. 

Now, the physics of the human brain could hardly be more unlike the physics of such a jet. We couldn’t survive for an instant in it. Language breaks down when trying to describe what it would be like in one of those jets. It would be a bit like experiencing a supernova explosion, but at point-blank range and for millions of years at a time. 

And yet, that jet happened in precisely such a way that billions of years later, on the other side of the universe, some bit of chemical scum could accurately describe and model and predict and explain, above all what was happening there, in reality. The one physical system, the brain, contains an accurate working model of the other, the quasar. Not just a superficial image of it, though it contains that as well, but an explanatory model, embodying the same mathematical relationships and the same causal structure. 

Now, that is knowledge. And if that weren’t amazing enough, the faithfulness with which the one structure resembles the other is increasing with time.

Today I listened to a TED interview that Deutsch did, on June 26, with Chris Anderson. It was given the title The Limitless Potential of Human Knowledge. Anderson confirmed that Deutsch’s use of the term chemical scum was borrowed from Stephen Hawking; essentially the same observation appears in Deutsch’s The Beginning of Infinity. In any case, this interview, like the book, is about the big picture. It is the kind of thinking I find very satisfying. Knowledge is defined as information that causes things to happen. The information in a DNA molecule that causes features to develop in an organism is knowledge. What we usually call knowledge is what Deutsch calls explanatory knowledge. Explanatory knowledge, he says, is a uniquely human event. And this feature of the universe, the explanatory knowledge developed by our species, ranks alongside gravity or electromagnetism and, as Deutsch sees it, surpasses them. Take, for example, the explosion of a quasar that is fully simulated by the explanatory knowledge encoded in an astrophysicist’s brain. Which of these, Deutsch asks, is more remarkable – the explosion or its encoded model in the astrophysicist’s brain?

Deutsch argues, with simple conversational language and tightly woven observations, that human explanatory knowledge is a substantial feature of the universe and that it provides us infinite reach. In space, he points out, there is a hierarchy of size. A black hole is hardly affected by the star it swallows, and the sun is hardly affected by the earth. But on earth, he suggests, the opposite is true. On earth, life is everywhere, and every living thing is the result of the action of just one or two molecules. On earth, “submicroscopic entities command vast resources,” even among living entities that don’t have explanatory knowledge. Humans, however, with the evolution of explanatory knowledge, have now become cosmically significant. Deutsch calls this a phase change – small things affect large things, not with mass or energy, but with information alone. It is this knowledge that gives us infinite reach and, with this, our optimism should begin to grow. Deutsch often points out that most of our knowledge concerns what is not seen. The models of reality that we create are built with patterns of ideas, and capture the aspects of our world that are not visible. Mathematics is crucial to this.

Deutsch imagines the evolution of our ability to build explanatory knowledge as the development of memes, which he defines as “anything that is copied from one brain to another.” The development of memes was followed by a tradition of criticism and error correction. And this is where things take off. He rests his thoughts on Karl Popper’s epistemology, and an important feature of Popper’s argument is that the first thrust of scientific theory is not what is perceived but rather what is imagined. The Stanford Encyclopedia of Philosophy puts it this way:

Popper stresses, simply because there are no “pure” facts available; all observation-statements are theory-laden, and are as much a function of purely subjective factors (interests, expectations, wishes, etc.) as they are a function of what is objectively real.

We imagine solutions to the problems we face, and then test their strength. Our theories are not established empirically, they are only confirmed or falsified empirically.

But it is Deutsch’s emphasis on information, abstraction, and imagination that always captures my attention. And it is because of how those things relate to mathematics. He clarifies a distinction between the abstract and the physical in The Beginning of Infinity:

Whether a mathematical proposition is true or not is indeed independent of physics. But the proof of such a proposition is a matter of physics only. There is no such thing as abstractly proving something, just as there is no such thing as abstractly knowing something. Mathematical truth is absolutely necessary and transcendent, but all knowledge is generated by physical processes, and its scope and limitations are conditioned by the laws of nature…

Consequently, the reliability of our knowledge of mathematics remains for ever subsidiary to that of our knowledge of physical reality. Every mathematical proof depends absolutely for its validity on our being right about the rules that govern the behavior of some physical objects, like computers, or ink and paper, or brains.

This does two things: it confirms the transcendence of mathematical truth, and it highlights, or forces the recognition of, the consistent interaction of physical and abstract things.

In another passage about numbers he writes this:

Mathematicians nowadays distinguish between numbers, which are abstract entities, and numerals, which are physical symbols that represent numbers; but numerals were discovered first. They evolved from ‘tally marks’ (I, II, III, IIII,…) or tokens such as stones, which had been used since prehistoric times to keep track of discrete entities such as animals or days…The next level above tallying is counting, which involves numerals…it (tallying) is an impractical system. For instance, even the simplest operations on numbers represented by tally marks, such as comparing them, doing arithmetic, and even just copying them, involves repeating the entire tallying process….The earliest improvement may have been to just group the tally marks…Later, such groups were themselves represented by shorthand symbols….By exploiting the universal laws of addition, those rules gave the system some important reach beyond tallying – such as the ability to perform arithmetic…Something new has happened here, which is more than just a matter of shorthand: an abstract truth has been discovered…Numbers have been manipulated in their own right, via their numerals.

I mean it literally when I say that it was the system of numerals that performed arithmetic. The human users of the system did of course physically enact those transformations. But to do that, they first had to encode the system’s rules somewhere in their brains, and then they had to execute them as a computer executes its program. (emphasis added)

This seems to be telling us something about how we might find the abstractions that facilitate knowledge. It’s likely never a direct path, but it happens.
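Deutsch’s contrast between tallying and numerals can be made concrete in a few lines of code. The Python sketch below is my own illustration (the function names and representations are mine, not Deutsch’s): comparing tallies means walking through the marks one by one, while positional numerals exploit the reach of the representation, since a longer numeral always names a bigger number.

```python
def tally(n):
    """Represent n the prehistoric way: one mark per counted thing."""
    return "I" * n

def tally_compare(a, b):
    """Comparing tallies requires pairing off marks, one at a time."""
    while a and b:
        a, b = a[1:], b[1:]
    if a:
        return "first is larger"
    if b:
        return "second is larger"
    return "equal"

def numeral_compare(a, b):
    """Positional numerals: the longer numeral wins outright;
    equal lengths are settled digit by digit, left to right."""
    if len(a) != len(b):
        return "first is larger" if len(a) > len(b) else "second is larger"
    for da, db in zip(a, b):
        if da != db:
            return "first is larger" if da > db else "second is larger"
    return "equal"

print(tally_compare(tally(7), tally(5)))   # first is larger
print(numeral_compare("123", "98"))        # first is larger
```

The point of the toy is the difference in work: the tally comparison touches every mark, while the numeral comparison settles most cases just by looking at the length of the symbol – a small taste of the “reach” Deutsch describes.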

Knowledge, Deutsch imagines, is a growing sphere against the unknown. This is a nice image. All evils, he will claim, are due to a lack of understanding. I think I have always believed something like this. Other expressions I find refreshing: knowledge is our superpower, it has infinite reach, and mistakes are a gift – the faster we make them, the faster we acquire knowledge. And all of this is not just about science, it’s about everything.

I have become fully preoccupied with finding some way to comprehend the inter-relatedness or connectedness of everything, from explosive cosmological origins to human thoughts and ideas. It seems to me that mathematics provides a lens on the ground that is common to both thoughts and objects. Deutsch’s view always pulls me in. Today, what I share without hesitation is the optimism provided by a reliance on understanding, and the truth that knowledge has infinite reach. I agree that our world contains possibilities we have not yet imagined.

Standing by for a correction to our view of the physical

I’ve spent a number of years using this blog to highlight the way that mathematical things seem to operate in very natural occurrences like the way our brains work, the way ants navigate, the way plants calculate an efficient consumption rate of their stored starch, the collective behavior of insect colonies, flocks, schools, and so much more. I do this to counter the view that mathematics is merely a miraculously productive human tool. I believe that if we look at it more carefully, we will see that mathematics itself is part of nature, as are all aspects of our thoughtful and imaginative lives. This is why I decided to bring a Quanta Magazine article to your attention today. Given the title ‘Mathematician Measures the Repulsive Force Within Polynomials,’ the article uses the language of physics to describe the behavior of numbers, a language that mathematicians use as well. Rather than finding mathematical action in a physical thing, this discussion finds physical behavior in a mathematical thing.

We are all familiar with repulsive forces in the physical world, like the repulsive force between two like magnetic poles. In May 2020, Quanta Magazine reported on a result in mathematics, from mathematician Vesselin Dimitrov of the University of Toronto, where the proof is understood as a demonstration of what mathematicians call a ‘repulsion’ between numbers. Dimitrov proved a decades-old conjecture known as the Schinzel-Zassenhaus conjecture, which concerns the roots of a particular family of polynomials. Kevin Hartnett begins by explaining mathematics’ version of physical repulsive actions:

When mathematicians look at the number line, they see the same type of trend. They look at the tick marks denoting the positive and negative counting numbers and sense a kind of numerical force holding them in that equal spacing. It’s as though, like mountain lions with their wide territories, integers can’t exist any closer together than 1 unit apart.

The spacing of the number line is the most basic example of a phenomenon found throughout the field of number theory. It crops up in the study of prime numbers and in the relationships between solutions to different types of equations. Mathematicians can better understand these important values by quantifying the force that acts between them. (emphasis added)

Let’s look at the result that is the subject of Hartnett’s article. If it’s been a long time since you’ve thought about polynomials, you may recall from Algebra classes that the roots of a polynomial equation y=f(x) are the x values that produce 0 for y. On the Cartesian plane, they are the x values where the graphed curve of the polynomial intersects the x axis. These intercepts mark the real roots (the real numbers that produce 0 when plugged into the polynomial), if they exist. Polynomials may also have roots that are complex numbers – numbers represented as the sum of a real number and some multiple of i which is the symbol for the square root of -1, known as the imaginary unit. Roots plotted on the complex plane will include roots that are complex numbers. Finding geometric relationships among the roots of a polynomial has long been a subject of study in mathematics. The Gauss-Lucas theorem, for example, establishes a geometric relationship, on the complex plane, between the roots of a polynomial and the roots of its derivative (which measures or quantifies the way y values change as x values vary for that polynomial). Derivatives of polynomials are also polynomials, and the theorem says that the roots of the derivative of the polynomial all lie within the smallest polygon, on the complex plane, that contains all the roots of the polynomial itself.
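The Gauss-Lucas theorem is easy to probe numerically. One consequence of the derivative’s roots lying in the convex hull of the polynomial’s roots is that the largest derivative root can be no farther from the origin than the largest root, since a disk is convex. Here is a minimal check in Python with NumPy, on a polynomial of my own choosing:

```python
import numpy as np

# p(x) = x^4 - 1; its roots are 1, -1, i, -i
p = [1, 0, 0, 0, -1]              # coefficients, highest degree first
roots = np.roots(p)
droots = np.roots(np.polyder(p))  # roots of the derivative p'(x) = 4x^3

# Gauss-Lucas: the derivative's roots lie in the convex hull of the
# roots of p, hence within the disk |z| <= max|root of p|.
print(max(abs(droots)) <= max(abs(roots)) + 1e-9)  # True
```

For this polynomial the derivative’s only root is 0 (with multiplicity three), which sits comfortably inside the square formed by the four roots on the unit circle.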

The result discussed in Quanta has to do with a particular class of polynomials called cyclotomic polynomials. These are polynomials, with integer coefficients, that are irreducible (meaning they cannot be factored), whose roots, on the complex plane, all lie on the unit circle (a circle centered at the origin with a radius of 1). There are an infinite number of such polynomials and there is a formula for producing them. It is striking that all of the roots of all such polynomials lie on this circle. The Quanta article discusses the proof of a conjecture about the relationship between the roots of these cyclotomic polynomials and non-cyclotomic polynomials.
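That striking property can be checked directly. In the sketch below I hard-code a few cyclotomic polynomials (the choice of examples is mine; the generating formula mentioned above isn’t needed for the check) and verify with NumPy that every root has modulus 1:

```python
import numpy as np

# A few cyclotomic polynomials, coefficients listed highest degree first
cyclotomics = {
    2: [1, 1],            # Phi_2(x) = x + 1
    4: [1, 0, 1],         # Phi_4(x) = x^2 + 1
    5: [1, 1, 1, 1, 1],   # Phi_5(x) = x^4 + x^3 + x^2 + x + 1
    6: [1, -1, 1],        # Phi_6(x) = x^2 - x + 1
}
for n, coeffs in cyclotomics.items():
    moduli = abs(np.roots(coeffs))
    # every root of a cyclotomic polynomial lies on the unit circle
    assert np.allclose(moduli, 1.0), n
```

The roots here are the primitive nth roots of unity, which is why they land exactly on the circle of radius 1.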

In 1965, Andrzej Schinzel and Hans Zassenhaus predicted that the geometry of the roots of cyclotomic and non-cyclotomic polynomials differs in a very specific way. Take any non-cyclotomic polynomial whose first coefficient is 1 and graph its roots. Some may fall inside the unit circle, others right on it, and still others outside it. Schinzel and Zassenhaus predicted that every non-cyclotomic polynomial must have at least one root that’s outside the unit circle and at least some minimum distance away.

Or, to put the Schinzel-Zassenhaus conjecture in terms of repulsion, it predicted that the smallest roots of a non-cyclotomic polynomial — which might fall within the unit circle — effectively push other roots outside the unit circle, like magnets pushing each other away.

The minimum distance was expected to depend on the degree of the polynomial; specifically, it was conjectured to be some constant divided by the degree of the polynomial (the power of its leading term). Dimitrov proved that this minimum distance is, in fact, (log 2)/4d, where d is the degree of the polynomial. Here log 2 is a constant, and while the discussion allows for the possibility that the result could, perhaps, be tweaked to be something like (log 3)/5d, the fact was established that the distance does depend on the quotient of a constant and a multiple of the degree of the polynomial.
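We can watch the bound hold on a concrete case. Lehmer’s polynomial is my choice of test case (it is not discussed in the Quanta article): a famous non-cyclotomic degree-10 polynomial with integer coefficients whose largest root, about 1.17628, is the smallest known largest root exceeding 1, which makes it about as hard a test of the bound as one can pick.

```python
import numpy as np

# Lehmer's polynomial:
# x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1
lehmer = [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]
d = len(lehmer) - 1                   # degree = 10
largest = max(abs(np.roots(lehmer)))  # ~ 1.17628, Lehmer's number

# Dimitrov's bound as described above: some root must lie at least
# (log 2)/(4d) outside the unit circle.
print(largest - 1 >= np.log(2) / (4 * d))  # True
```

For d = 10 the bound (log 2)/40 is about 0.0173, and Lehmer’s largest root clears it by a factor of roughly ten, consistent with the expectation that the constant in the theorem is not sharp.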

One might say that these observations are just observations of the distance between numbers. But it’s more than that. These distances are produced by the numbers themselves, by their interaction in the polynomials. It is not unusual for mathematicians to talk about the ‘behavior’ of mathematical things – the behavior of solutions or, in this case, the behavior of roots. Is it a metaphor, or does this language emerge from an intuition about what a number really is? I suspect the latter is true. Numbers appear to be the names we have given to the counts of things we collect, or to the durations of events. But within mathematics they have undergone a significant evolution, forcing us to examine other things, like the notion of a continuum, or the effects of an imaginary unit. Their geometric interpretation opened up whole new worlds of mathematical events. I bring the repulsion principle to your attention to make the point that the nature of mathematical things is just not very clear, and I am convinced mathematics doesn’t belong to us. Don’t misunderstand. I mean to be neither romantic nor mystical about these things. I mean to see something more clearly. A correction to our view of mathematics will bring with it a correction to our view of the physical world.

Ideals in the body politic

I have spent quite a bit of time, using much of the information provided in these posts, to argue that mathematics is in a unique position to show us that our thoughts (the silent language in our heads), which appear to be produced in the privacy of our imagination, have some independent reality. By this I mean that our thoughts are not just ‘in our heads,’ and perhaps the use, and the history of mathematics (which reflects our science-oriented imagination) could help us understand something new, or see something new, about the fundamental nature of thought itself. I have written an as-yet-unpublished, book-length manuscript to make a thorough argument for this view.

Recently, however, maybe in the last four years, I have become a bit distracted by an alarming abuse of words in our sociopolitical world – what I would otherwise call a frightening disregard for the harm caused by lies. Deceitful narratives are produced by the same thoughtful imagination (our words and concepts) that build the arts and the sciences. But, without any fidelity to the facts to which these thoughts should be tied, their expression acts more like a flood, or a fire, that repeatedly weakens the stability of our cultural achievements and challenges the trustworthiness of all words. It looks to me like the protective walls of our civil systems have been badly damaged and may yet crumble. The helplessness that I have felt in response to this damage has given my interest in the true nature of an idea, or a thought, or a word, new impetus. It’s made my fascination with mathematics all the more striking to me, and led me to believe that there may be a cure for my sense of helplessness.

I think I found my way to mathematics along a zigzag road that, as a young idealist, I hoped would lead to the truth. And I know that the very mention of ‘truth’ brings with it a long and profound philosophical history. But as a young person, in my Italian-American family, I was often saddened by the effects of simple deceptions. They were mostly harmless, interpersonal distortions of the truth between my mother and my grandmother, or my mother and my aunts… but they fed unhappiness, dissatisfaction, and frustration. College was the first time I created some real distance between myself and my family – not geographically, but intellectually. And, while I may have said any number of things about my academic interests to friends and advisors, the classes I chose and the future I imagined were probably motivated by my trying to find a way to see what was ‘true,’ in life, and in people. I devoted a lot of time and energy first to philosophy, then psychology, and when I finally considered physics, I found liberation in mathematics.

There is no value to deception in mathematics. And I might argue that mathematics is, pragmatically speaking, probably the most consequential of our imaginative efforts. The satisfaction I felt on finding this purely symbolic, yet physically connected intellectual enterprise – free of deception and profoundly meaningful – has never subsided. Mathematics bears witness to the worldly relevance of thought, and the power of deception-free analysis. Conjured up by idealizing experience and reasoning, and then allowed to grow with a kind of self-organizing life of its own, mathematics may be the most visible evidence of the fact that the mind’s eye, and the eyes in our heads, each have their own way of perceiving. This leads me to believe that physical structure and thoughtful structure must hold equal weight in nature.

It may be hard to see how all of this would apply to our current sociopolitical situation, but I have thought about it every time I hear some pundit insist that “words matter.” However, taking a broader look at the situation, my husband (who is an experimental physicist) and I developed an analog. We imagined that the law was like mathematics, and politicians were like physicists. By this we mean that the law is the careful and precise development of ideals, and politicians (or government officials) are charged with finding the ways that these ideals may exist in the world. Figuring this out is what the people and their elected leaders try to do together. Deception contaminates the effort.

For whatever reason, I find this analogy reassuring. Perhaps the body politic can recover, as our bodies have from the pandemic.

There is no starker illustration of the fact that, in the body politic as in the bodies the virus infects, the host’s response can matter far more to the course of the disease than the direct action of the pathogen itself.
The Economist, “The year of learning dangerously: Covid-19 has shown what modern biomedicine can do,” 23 March 2021

Individuals, Information, and the lenses of mathematics

First, I would like to apologize for neglecting Mathematics Rising in recent months. Changes to the classes I’ve been teaching at UT Dallas (necessitated by Covid-19) consumed so much of my time that it became difficult for me to do much more than teach my classes and take care of my family. I’m hoping to be able to do better. So let me begin today.

It happens often that Quanta Magazine brings news of profound and novel approaches to any number of new scientific questions. And these reports often make a very positive contribution to the perspective I have been trying to nurture at Mathematics Rising. In July, Jordana Cepelewicz wrote an article with the title: What Is an Individual? Biology Seeks Clues in Information Theory. Two of the words in this title quickly got my attention – individual and information. Individual, because ‘the individual’ plays an important role in so many things, including politics and religion; information, because once mathematics was used to define it, information became a lens that is enormously useful for many questions in science, including quantum mechanics and theories of consciousness. Cepelewicz makes the pithy remark that nature has “a sloppy disregard for boundaries,” taking note of the M.O. of viruses, bacteria, insect colonies or “superorganisms,” and the myriad varieties of symbiotic composites that live in our world. “Even humans,” she says, “contain at least as many bacterial cells as ‘self’ cells.”

To emphasize the value of clarifying what we mean by ‘individual,’ she writes:

Ecologists need to recognize individuals when disentangling the complex symbioses and relationships that define a community. Evolutionary biologists, who study natural selection and how it chooses individuals for reproductive success, need to figure out what constitutes the individual being selected.

The same applies in fields of biology dealing with more abstract concepts of the individual — entities that emerge as distinct patterns within larger schemes of behavior or activity. Molecular biologists must pinpoint which genes out of many thousands interact as a discrete network to produce a given trait. Neuroscientists must determine when clusters of neurons in the brain act as one cohesive entity to represent a stimulus.

“In a way, [biology] is a science of individuality,” said Melanie Mitchell, a computer scientist at the Santa Fe Institute.

Biology, many agree, has been under-theorized, but this is no doubt changing. The Stanford Encyclopedia of Philosophy has an entry on Biological Individuals from which one can see the development of conceptual frameworks to address the question.

The Quanta Magazine article is based on the work of David Krakauer, an evolutionary theorist and president of the Santa Fe Institute, and Jessica Flack who studies collective behavior and collective computation (also based at the Santa Fe Institute). They created a group tasked with finding a new working definition of the ‘individual.’

At the core of that working definition was the idea that an individual should not be considered in spatial terms but in temporal ones: as something that persists stably but dynamically through time. “It’s a different way of thinking about individuals,” said Mitchell, who was not involved in the work. “As kind of a verb, instead of a noun.”

How do you create this view? What can we use to see things this way? The lens they chose is information theory.

Krakauer and Flack, in collaboration with colleagues such as Nihat Ay of the Max Planck Institute for Mathematics in the Sciences, realized that they’d need to turn to information theory to formalize their principle of the individual “as kind of a verb.” To them, an individual was an aggregate that “preserved a measure of temporal integrity,” propagating a close-to-maximal amount of information forward in time.

Their formalism begins with propositions:

  • Individuality can exist at any level of biological organization (subcellular to social).
  • Individuality can be nested (one individual within another).
  • Individuality exists on a continuum, meaning systems can have quantifiable degrees of individuality.

The last of these might translate the question of whether or not a virus is alive into the question: how living is a virus? In other words, where does it lie on the continuum of individuals?

The abstract of their paper is very clear about what this model hopes to accomplish:

Despite the near universal assumption of individuality in biology, there is little agreement about what individuals are and few rigorous quantitative methods for their identification. Here, we propose that individuals are aggregates that preserve a measure of temporal integrity, i.e., “propagate” information from their past into their futures. We formalize this idea using information theory and graphical models. This mathematical formulation yields three principled and distinct forms of individuality—an organismal, a colonial, and a driven form—each of which varies in the degree of environmental dependence and inherited information. This approach can be thought of as a Gestalt approach to evolution where selection makes figure-ground (agent–environment) distinctions using suitable information-theoretic lenses. A benefit of the approach is that it expands the scope of allowable individuals to include adaptive aggregations in systems that are multi-scale, highly distributed, and do not necessarily have physical boundaries such as cell walls or clonal somatic tissue. Such individuals might be visible to selection but hard to detect by observers without suitable measurement principles. The information theory of individuality allows for the identification of individuals at all levels of organization from molecular to cultural and provides a basis for testing assumptions about the natural scales of a system and argues for the importance of uncertainty reduction through coarse-graining in adaptive systems.

(Coarse-graining is a simplification of the details in a system that is as true to the system as the details themselves – the way that temperature represents the average speed of particles in a system.)
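As a toy illustration of coarse-graining (my own example, not taken from the paper): thousands of individual particle speeds reduce to one temperature-like number, and two microstates that differ in every microscopic detail can be indistinguishable at the coarse level.

```python
import random

def coarse_grain(speeds):
    """Reduce a detailed microstate (individual particle speeds) to a single
    macro-variable: the mean squared speed, a stand-in for temperature."""
    return sum(v * v for v in speeds) / len(speeds)

random.seed(0)
# Two microstates that differ in every detail...
state_a = [random.gauss(0, 1) for _ in range(10000)]
state_b = [random.gauss(0, 1) for _ in range(10000)]
# ...but are nearly identical after coarse-graining.
print(coarse_grain(state_a), coarse_grain(state_b))
```

The coarse description discards the details while remaining faithful to what matters at the macro scale, which is the sense in which it reduces uncertainty for an adaptive system.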

There is no doubt that this will be a fruitful departure from the noun-like way we have identified living things. It is, as I see it, one of many efforts in a broad scheme related to the provocative ideas brought to light by biologists Francisco Varela and Humberto Maturana in the 1980s. Rather than list the properties of living things, Maturana and Varela observed that living things are characterized by the fact that they are continually self-producing – not reproducing but self-producing. The cell, for example, what we have long thought of as the fundamental living thing, is a network of processes that are organized as a unity, where the interaction of these processes continuously and directly realizes the unity itself. The cell is what the cell does. They called this process autopoiesis, its Greek roots meaning self (auto) and produce (poiesis). The being and the doing of an autopoietic system cannot be separated. With nested autopoietic systems (or nested unities), Maturana and Varela are imagining individuals in much the same way as Krakauer and Flack. I wrote about Maturana and Varela in a post called Autopoiesis, free energy and mathematics. In that post I introduced a related idea: Karl Friston’s Free Energy Principle, where all kinds of systems are understood in autopoietic terms. What’s missing in Maturana’s theory is the formalism that makes it possible to investigate the consequences of this kind of modeling. Information theory provides this for Krakauer and Flack. Probabilistic programming and machine learning provide it for Friston. Maturana and Friston are both referenced at the end of Krakauer and Flack’s paper.

For me, these discussions always bring to mind the thoughts I had when I read Thomas Mann’s The Magic Mountain. I read the novel in the absence of any commentary about it, and I remember feeling an unexpected affection for the story’s protagonist, Hans Castorp. Mann began writing the story in 1912, but he completed it after World War I, in 1924. Unexpectedly restricted to a tuberculosis infirmary high in the Alps, a young Castorp is an innocent and eager explorer of biology and medicine. Encouraged by the cold and by his solitude, he rested and read from the library of physicians, fully self-training in the language and images of biology. I remember thinking that Castorp’s look at science and medicine was guileless, without the prejudices created by the pragmatism of fixing things, or the desire for useful knowledge. And I thought that contemporary ideas in medicine and biology, built on these early observations and now taken as simply true, were not the only ideas that could grow from the kinds of insights to which Castorp was privileged. Here are just a couple of his reflections:

This body, then, which hovered before him, this individual and living I, was a monstrous multiplicity of breathing and self-nourishing individuals, which through organic conformation and adaptation to special ends, had parted to such an extent with their essential individuality, their freedom and living immediacy, had so much become anatomic elements that the functions of some had become limited to sensibility…

What then was life? …It was the existence of the actually impossible-to-exist, of a half-sweet, half-painful balancing, or scarcely balancing, in this restricted and feverish process of decay and renewal, upon the point of existence. It was not matter and it was not spirit, but something between the two, a phenomenon conveyed by matter, like the rainbow on the waterfall, or like flame.

Maturana and Varela, Friston, and Krakauer and Flack are just some of the explorers who now confirm my hunch that there’s never just one way to see.

New strategies, new circuitries, new mathematics

I came upon an MIT News article about the work of Ila Fiete, who studies brain functions like the neurological processes that govern navigational reasoning about our surroundings. Fiete uses computational and mathematical tools. Her interest in biology, and her respect for the “aesthetic to thinking mathematically” (as she put it), led her to systems biology, where computational and mathematical analyses and modeling are applied to the understanding of complex biological systems. She did most of her PhD research at MIT,

….where she studied how the brain uses incoming signals of the velocity of head movement to control eye position. For example, if we want to keep our gaze fixed on a particular location while our head is moving, the brain must continuously calculate and adjust the amount of tension needed in the muscles surrounding the eyes, to compensate for the movement of the head.

Later, at the University of California at Santa Barbara, Fiete began working on grid cells, a system of neurons I have written about on more than one occasion. These cells, located in the entorhinal cortex of the brain, actually create a grid-like, neural representation of the space around us that allows us to know where we are. Grid cell firings correspond to points on the ground that are the vertices of an equilateral triangular grid. It happens with surprising regularity.
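The geometry is easy to make concrete. Here is a minimal sketch of such a triangular lattice (the construction is mine and purely illustrative): alternate rows are shifted by half a spacing, and rows sit a factor of √3/2 apart, so each point’s nearest neighbors form equilateral triangles.

```python
import math

def triangular_lattice(rows, cols, spacing=1.0):
    """Vertices of an equilateral triangular grid: odd rows are offset by
    half a spacing, and rows are spacing * sqrt(3)/2 apart, so every
    point's nearest neighbors sit at equal distances."""
    points = []
    row_height = spacing * math.sqrt(3) / 2
    for r in range(rows):
        offset = (r % 2) * spacing / 2
        for c in range(cols):
            points.append((c * spacing + offset, r * row_height))
    return points

pts = triangular_lattice(3, 3)
# Neighbors within a row and across rows are the same distance apart.
print(math.dist(pts[0], pts[1]), math.dist(pts[0], pts[3]))
```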

In a Dec 2014 article in Neuron, neuroscientist Neil Burgess said the following:

Grid cell firing provides a spectacular example of internally generated structure, both individually and in the almost crystalline organization of the firing patterns of different grid cells. A similarly strong organization is seen in the relative tuning of head-direction cells. This strong internal structure is reminiscent of Kantian ideas regarding the necessity of an innate spatial structure with which to understand the spatial organization of the world.

I have written on this topic before:
The mathematical nature of self-locating and
Grid cells and time cells in rats, continuity, and the monkey’s mind

At MIT again, Fiete has continued to explore her PhD thesis topic, specifically, how the brain maintains neural representations of the head’s direction, where it is pointed, at any given time. Now, in a paper published in Nature, she explains how she identified a brain circuit in mice that produces a one-dimensional ring of neural activity that acts like a compass. It allows the brain to calculate the direction of the head, with respect to the external world, at any given moment.

MIT’s report on that paper explains her approach:

Trying to show that a data cloud has a simple shape, like a ring, is a bit like watching a school of fish. By tracking one or two sardines, you might not see a pattern. But if you could map all of the sardines, and transform the noisy dataset into points representing the positions of the whole school of sardines over time, and where each fish is relative to its neighbors, a pattern would emerge. This model would reveal a ring shape, a simple shape formed by the activity of hundreds of individual fish.

Fiete, who is also an associate professor in MIT’s Department of Brain and Cognitive Sciences, used a similar approach, called topological modeling, to transform the activity of large populations of noisy neurons into a data cloud in the shape of a ring.
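As a rough illustration of how a one-dimensional ring variable can hide inside high-dimensional, noisy neural data, here is a minimal sketch. It is not Fiete’s actual method (which applies topological analysis to recorded populations); it simply simulates neurons with cosine tuning to a hidden head direction and shows that a population-vector readout collapses the noisy cloud back to a single angle on a ring. All names and parameters are illustrative.

```python
import math, random

random.seed(1)
N = 50  # simulated neurons, each tuned to a preferred head direction
preferred = [2 * math.pi * i / N for i in range(N)]

def population_activity(angle, noise=0.2):
    """Noisy, rectified cosine-tuned firing rates: a 50-dimensional activity
    vector that secretly depends on one circular variable (head direction)."""
    return [max(0.0, math.cos(angle - p) + random.gauss(0, noise))
            for p in preferred]

def decode(rates):
    """Population-vector decoding: project the rates onto the unit circle of
    preferred directions, recovering a single point on a ring (an angle)."""
    x = sum(r * math.cos(p) for r, p in zip(rates, preferred))
    y = sum(r * math.sin(p) for r, p in zip(rates, preferred))
    return math.atan2(y, x) % (2 * math.pi)

true_angle = 1.0
estimate = decode(population_activity(true_angle))
print(estimate)
```

The high-dimensional activity traces out a ring as the hidden angle varies, which is the kind of low-dimensional structure topological modeling is built to detect.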

More than one aspect of these studies impresses me. The approach is interesting. Topological modeling, in some sense, is a shape-directed computation rather than a purely numerical one. And the simplicity of the ring impresses me. The brain somehow isolates, from a flood of sensory data, the variables that produce this simple representation of the head’s position. And the value of a circle, something we think of as a purely abstract idealization, is something the body seems to know, constructing it, as it does, outside what we call the mind.

“There are no degree markings in the external world; our current head direction has to be extracted, computed, and estimated by the brain,” explains Ila Fiete, … “The approaches we used allowed us to demonstrate the emergence of a low-dimensional concept, essentially an abstract compass in the brain.”

This abstract compass, according to the researchers, is a one-dimensional ring that represents the current direction of the head relative to the external world.

Their method for characterizing the shape of the data cloud allowed Fiete and colleagues to identify the variable that the circuit was devoted to representing.

My first reaction to this story was that it was beautiful. On the one hand, Fiete seems to be making increasingly creative use of the mathematics for which she has always had an affinity. On the other hand that simple ring, or compass, that tells us which way we are looking, highlights the presence of inherent mathematical tools that just exist in the body, in how brain processes just work. And the ring is stable, even through sleep.

Her lab also studies cognitive flexibility — the brain’s ability to perform so many different types of cognitive tasks.

“How it is that we can repurpose the same circuits and flexibly use them to solve many different problems, and what are the neural codes that are amenable to that kind of reuse?” she says. “We’re also investigating the principles that allow the brain to hook multiple circuits together to solve new problems without a lot of reconfiguration.”

It would not surprise me if, viewed from a careful neuroscientific perspective, the way the brain makes new use of already configured circuits resembled the way mathematicians consistently build novel mathematical structures with tested strategies that have built other structures – ordered pairs, symmetries, compositions, closure properties, identities, homotopy, equivalence, and so on.

Information, mathematics, and the consciousness of the universe

A New Scientist article began with a now familiar refrain:

They call it the “unreasonable effectiveness of mathematics.” Physicist Eugene Wigner coined the phrase in the 1960s to encapsulate the curious fact that merely by manipulating numbers we can describe and predict all manner of natural phenomena with astonishing clarity…

The article by Michael Brooks has the title Is the universe conscious? It seems impossible until you do the maths.

As I expected, the primary focus of the article is Integrated Information Theory, a way to understand consciousness proposed by neuroscientist Giulio Tononi in 2004. More than one of my previous posts refers to the development of this proposal, in particular there is Where does the mind begin? posted in 2017.

Neuroscientist Giulio Tononi proposed the Integrated Information Theory of Consciousness (IIT) in 2004. IIT holds that consciousness is a fundamental, observer-independent property that can be understood as the consequence of the states of a physical system. It is described by a mathematics that relies on the interactions of a complex of neurons, in a particular state, and is defined by a measure of integrated information. Tononi proposes a way to characterize experience using a geometry that describes informational relationships. In an article co-authored with neuroscientist Christof Koch, an argument is made for opening the door to the reconsideration of a modified panpsychism, where there is only one substance from the smallest entities to human consciousness.

Brooks describes Tononi’s idea with broad strokes, saying that the theory tells us that a system’s consciousness arises from the way information moves between its subsystems.

One way to think of these subsystems is as islands, each with their own population of neurons. The islands are connected by traffic flows of information. For consciousness to appear, Tononi argued, this information flow must be complex enough to make the islands interdependent. Changing the flow of information from one island should affect the state and output of another. In principle, this lets you put a number on the degree of consciousness: you could quantify it by measuring how much an island’s output relies on information flowing from other islands. This gives a sense of how well a system integrates information, a value called “phi.”

If there is no dependence on a traffic flow between the islands, phi is zero and there is no consciousness. But if strangling or cutting off the connection makes a difference to the amount of information it integrates and outputs, then the phi of that group is above zero. The higher the phi, the more consciousness a system will display.
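Tononi’s phi is defined over all partitions of a system and is considerably more involved than this, but the basic ingredient, how much one island’s state tells you about another’s, can be sketched with ordinary mutual information. This is a toy illustration of interdependence, not IIT’s actual calculation:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) samples: how much
    knowing one island's state reduces uncertainty about the other's."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two islands that always copy each other: maximal interdependence (1 bit).
coupled = [(0, 0), (1, 1)] * 500
# Two islands with statistically independent states: zero interdependence.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 250
print(mutual_information(coupled), mutual_information(independent))
```

Cutting the traffic between the coupled islands would change what the system integrates; for the independent pair there is nothing to cut, which is the intuition behind a phi of zero.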

IIT has attracted many proponents but at the same time has come under quite a bit of critical scrutiny. The mathematics of the theory is very complex. The language of the mathematics is borrowed from information theory, and informational relationships are characterized geometrically as shapes. These shapes, Tononi says, are our experience.

Brooks points to a number of computational issues that require attention as well as the problem of “explaining why information flow gives rise to an experience such as the smell of coffee.” But proponents of the idea are committed to resolving the technical issues, and continue to have faith in the model.

Rather than abandoning a promising model, he [mathematician Johannes Kleiner] thinks we need to clarify and simplify the mathematics underlying it. That is why he and Tull [mathematician Sean Tull] set about trying to identify the necessary mathematical ingredients of IIT, splitting them into three parts. First is the set of physical systems that encode the information. Next is the various manifestations or “spaces” of conscious experience. Finally, there are basic building blocks that relate these two: the “repertoires” of cause and effect.

They posted a preprint paper in February that describes the work.

“We would be glad to contribute to the further development of IIT, but we also hope to help improve and unite various existing models,” Kleiner says. “Eventually, we may come to propose new ones.”

A philosophical difficulty with the theory, however, raises questions close to my heart. The calculations involved in IIT imply that inanimate things possess some degree of consciousness. This may be considerably difficult for many to accept, but I’m happy to report that not everyone has a problem with the possibility. From the Brooks article:

“Particles or other basic physical entities might have simple forms of consciousness that are fundamental, but complex human and animal consciousness would be constituted by or emergent from this,” says Hedda Hassel Mørch at Inland Norway University of Applied Sciences in Lillehammer.

The idea that electrons could have some form of consciousness might be hard to swallow, but panpsychists argue that it provides the only plausible approach to solving the hard problem. They reason that, rather than trying to account for consciousness in terms of non-conscious elements, we should instead ask how rudimentary forms of consciousness might come together to give rise to the complex experiences we have.

With that in mind, Mørch thinks IIT is at least a good place to start.

How does it happen? How do the cells that grow and specialize in embryonic development produce what we call the mind? I became preoccupied with questions like this when I tried to understand the degree to which lesions in the frontal lobe of my mother’s brain changed her experience (and her response to the reality that her new experience created). I began to wonder, in a new way, about the nature of the relationship between her body and her mind. Where was the person in the body? One of the things I heard over and over was “she isn’t there anymore.” And I continued to think, “of course she is.” I now find that the kinds of questions I wanted to ask are increasingly present among researchers, in both neuroscience and mathematics. I have also found support for my own hunch that mathematics can shed some light on the mystery.

IIT is particularly interesting to me because it isn’t just proposing a mathematical description of consciousness, it’s actually suggesting a fundamental or natural relationship between mathematical things and experience. It doesn’t just quantify aspects of consciousness (given by the amount of integrated information), it is also aimed at specifying the quality of experience with informational relationships that are defined by geometric shapes. The points in this geometry are determined using the probability distributions for the different states that a complex of neurons may be in.

I think Tononi himself gets at the significance of this point of view when he writes this in his 2008 Provisional Manifesto:

We are by now used to considering the universe as a vast empty space that contains enormous conglomerations of mass, charge, and energy—giant bright entities (where brightness reflects energy or mass) from planets to stars to galaxies. In this view (that is, in terms of mass, charge, or energy), each of us constitutes an extremely small, dim portion of what exists—indeed, hardly more than a speck of dust.

However, if consciousness (i.e., integrated information) exists as a fundamental property, an equally valid view of the universe is this: a vast empty space that contains mostly nothing, and occasionally just specks of integrated information —mere dust, indeed—even there where the mass-charge–energy perspective reveals huge conglomerates. On the other hand, one small corner of the known universe contains a remarkable concentration of extremely bright entities (where brightness reflects high levels of integrated information), orders of magnitude brighter than anything around them. Each bright “star” is the main complex of an individual human being (and most likely, of individual animals). I argue that such a view is at least as valid as that of a universe dominated by mass, charge, and energy.

Like many of the articles about how mathematics is shedding new light on things, the New Scientist article is not highlighting the mathematics itself, nor what it has the potential to show us. As I see it, mathematics has the potential to show us how we are likely misreading our reality, by showing us how perspectives are built in the mind, or how perspectives are built with the arrangement of concepts and not just the arrangements of neurons firing. Mathematics is in a unique position to show us the bridge between the physical and the conceptual. But this recent New Scientist article still reports good news – specifically that IIT continues to attract the interest of neuroscientists and mathematicians alike.

Mathematics says, “here is a point of view.”

Category theory in mathematics is a relatively new and provocative branch of mathematics that has found many faithful followers and some critics. By relatively new I mean that category theory notions were first introduced only as far back as 1945. Criticism of the theory is often related to the level of abstraction it requires. But abstraction is also critically important to its strength. I’ve chosen to highlight things about category theory in these earlier posts:


Category Theory and the extraordinary value of abstraction
More on category theory and the brain
Quantum Mechanical Words and Mathematical Organisms (for Scientific American)

But the inspiration for this post is something I heard from mathematician Eugenia Cheng. It was in a talk she gave at the School of the Art Institute of Chicago on The Power of Abstraction. Early in her presentation, Cheng uses a turn of phrase that I like very much. Mathematics is useful, she says,

…because of the general light that it sheds on all aspects of our thinking.

Notice she doesn’t say, “on all aspects of things,” but rather “on all aspects of our thinking.” I believe this is important. There is an old tradition among educators of telling reluctant students that, while learning mathematics seems to have nothing to do with their day-to-day lives, or the issues they hope to explore, its value lies in the fact that it teaches us how to think. But what Cheng is saying is bigger and more important than that. Shedding light on ‘thinking’ is not the same as teaching us how to think. Shedding light on thinking means that mathematics is telling us something about ourselves.

To clarify the value of abstraction Cheng uses illumination again:

It’s just like when you shine a light on something (and that’s what mathematics is always doing – trying to illuminate the situation)…if we shine the light very close up, then we will have a very bright light but only a very small area. But if we raise the light further up, then we get a dimmer light, but we illuminate a broader area, and we get a bit more context on the situation…Abstraction enables us to study more things, maybe in less detail, but with more context.

Category theory, as she discusses it, is about relationships among things, the notion of sameness, universal properties, and the efficacy of visual representation. About sameness Cheng makes the observation that nothing is actually the same as anything else, and that the old notion of an equation is a lie. I haven’t heard anyone apply the term ‘lie’ to a mathematical thing since my first calculus teacher complained about a popular (thick and heavy) calculus text! But the value of an equation, she explains, is that, while it identifies the way two things are the same, equality also points to the way they are different. 2 + 2 = 4 tells us that, in some way, the left side of the equation is the same as the right side, but in other ways, it is not. Equivalences in category theory are understood as sameness within a context.
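The mug-and-donut idea can be sketched in a few lines: an equivalence class is just the set of things a chosen invariant cannot tell apart. A toy illustration (the shapes and their hole counts here are my own, purely for demonstration):

```python
from collections import defaultdict

# Things are "the same" when a chosen invariant agrees. Here the invariant
# is the number of holes, the sense in which a coffee mug and a donut are
# topologically the same object.
holes = {"sphere": 0, "bowl": 0, "mug": 1, "donut": 1, "pretzel": 3}

# Partition the shapes into equivalence classes by that invariant.
classes = defaultdict(list)
for shape, h in holes.items():
    classes[h].append(shape)

print(dict(classes))
```

Change the invariant and the partition changes with it: sameness is always sameness from a particular point of view.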

When first introduced to the notion of equivalence classes in topology, I thought of it as a powerful offspring of equality, not a correction. But, either way, the broad applicability of category theory (even within mathematics itself) is certainly fueling its development. The Stanford Encyclopedia of Philosophy says this about it:

Category theory has come to occupy a central position in contemporary mathematics and theoretical computer science, and is also applied to mathematical physics. Roughly, it is a general mathematical theory of structures and of systems of structures. As category theory is still evolving, its functions are correspondingly developing, expanding and multiplying. At minimum, it is a powerful language, or conceptual framework, allowing us to see the universal components of a family of structures of a given kind, and how structures of different kinds are interrelated. Category theory is both an interesting object of philosophical study, and a potentially powerful formal tool for philosophical investigations of concepts such as space, system, and even truth.

Cheng also wrote the account of the concept category in the Princeton Companion to Mathematics. There she says the following:

An object exists in and depends upon an ambient category…There is no such thing, for instance, as the natural numbers. However, it can be argued that there is such a thing as the concept of natural numbers. Indeed, the concept of natural numbers can be given unambiguously, via the Dedekind-Peano-Lawvere axioms, but what this concept refers to in specific cases depends on the context in which it is interpreted, e.g., the category of sets or a topos of sheaves over a topological space.

If you look back at the earlier posts to which I referred, you will see how the simplicity of the abstractions can serve situations where traditional mathematical approaches contain some ambiguity. I’ve chosen to return to it all today because Eugenia Cheng’s language has encouraged me to see mathematics the way I do, as a reflection of thought itself, among other things. Contrary to expectations, she says:

Mathematics is not definitive. It says, here is a point of view.

From coin flipping to branching universes

A recent column in Quanta Magazine, by theorist Sean Carroll, highlights the far-reaching implications of the role played by probability theory in quantum mechanics. Carroll’s intention is to bring into focus the need, which does seem to exist, for us to understand what, exactly, those probabilities are telling us. In quantum mechanics, the partnership of mathematics and physics has the unusual effect of both clarifying and mystifying things. Carroll’s concern is whether the probabilities that seem to contradict long-held deterministic views of the physical world should be thought of as properties of the objects studied, or just the cognitive strategy of the subjects studying them. As I see it, this difficulty of unraveling the thought from the material may help us get a better look at the multidimensional nature of mathematics itself.

Probability is inextricably bound to our experience of uncertainty. When, in the 17th century, Pascal explored the calculation of probabilities, his efforts were aimed at finding ways to predict the results of games of chance. But these strategies were fairly quickly adopted to address questions of law and insurance, as these concerned chance (or random) events (like weather or disease) in the day-to-day lives of individuals. The mathematics of probability provided a way to think about future events, about which we are always uncertain. I read in a Britannica article that in the early 19th century, Laplace characterized probability theory as “good sense reduced to calculation.”

By the 18th century, Bayes’ theorem was already getting a lot of attention. It was beginning to look like the best calculation of likelihoods also relied on the experience of the individual doing the calculation. Bayes’ theorem is a formula for calculating conditional probabilities, probabilities that change when conditions are altered. One of the conditions that can change is what the observer knows. This brings attention back to the subject, which is different from the way we understand the likelihood of heads or tails in a coin toss. Since a coin toss can only yield one of two possible outcomes, we have come to understand that there is a 50/50 chance of either. The more times we toss the coin, the closer we get to seeing that 50/50 split in the outcomes. What we expect of the coin toss is entirely dependent on the nature of the coin. But conditional probabilities are not so clear. So how should physicists view our reliance on probabilities in quantum mechanical theory? This is what Carroll addresses.
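The drift of the observed frequency toward the 50/50 split can be simulated directly; a minimal sketch:

```python
import random

random.seed(42)

def heads_frequency(tosses):
    """Empirical frequency of heads in a run of simulated fair-coin tosses."""
    return sum(random.random() < 0.5 for _ in range(tosses)) / tosses

# The observed frequency wanders for short runs and settles toward 0.5
# as the number of tosses grows.
for n in (10, 1000, 100000):
    print(n, heads_frequency(n))
```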

There are numerous approaches to defining probability, but we can distinguish between two broad classes. The “objective” or “physical” view treats probability as a fundamental feature of a system, the best way we have to characterize physical behavior. An example of an objective approach to probability is frequentism, which defines probability as the frequency with which things happen over many trials.

Alternatively, there are “subjective” or “evidential” views, which treat probability as personal, a reflection of an individual’s credence, or degree of belief, about what is true or what will happen. An example is Bayesian probability, which emphasizes Bayes’ law, a mathematical theorem that tells us how to update our credences as we obtain new information. Bayesians imagine that rational creatures in states of incomplete information walk around with credences for every proposition you can imagine, updating them continually as new data comes in. In contrast with frequentism, in Bayesianism it makes perfect sense to attach probabilities to one-shot events, such as who will win the next election.
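Bayes’ law as Carroll describes it, updating credences as new data comes in, can be sketched for the coin itself. The two hypotheses and their numbers below are my own illustrative choices:

```python
def bayes_update(prior, likelihoods, observation):
    """Bayes' law: posterior is proportional to likelihood times prior,
    renormalized so the credences sum to one."""
    unnorm = {h: prior[h] * likelihoods[h][observation] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Credences over two hypotheses about a coin...
prior = {"fair": 0.5, "biased": 0.5}
# ...and how likely each hypothesis makes each observation.
likelihoods = {"fair":   {"H": 0.5, "T": 0.5},
               "biased": {"H": 0.9, "T": 0.1}}

# Update continually as new data comes in: three heads in a row.
credence = prior
for obs in ["H", "H", "H"]:
    credence = bayes_update(credence, likelihoods, obs)
print(credence)
```

After three heads the credence in the biased coin has grown well past the credence in the fair one, which is exactly the Bayesian sense in which probability lives in the observer’s state of information rather than in the coin.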

In an Aeon article about Einstein’s rejection of unresolved randomness in any physical theory, Jim Baggott says this:

In essence, Bohr and Heisenberg argued that science had finally caught up with the conceptual problems involved in the description of reality that philosophers had been warning of for centuries. Bohr is quoted as saying: ‘There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.’ This vaguely positivist statement was echoed by Heisenberg: ‘[W]e have to remember that what we observe is not nature in itself but nature exposed to our method of questioning.’ Their broadly antirealist ‘Copenhagen interpretation’ – denying that the wave function represents the real physical state of a quantum system – quickly became the dominant way of thinking about quantum mechanics. More recent variations of such antirealist interpretations suggest that the wave function is simply a way of ‘coding’ our experience, or our subjective beliefs derived from our experience of the physics, allowing us to use what we’ve learned in the past to predict the future.

But this was utterly inconsistent with Einstein’s philosophy. Einstein could not accept an interpretation in which the principal object of the representation – the wavefunction – is not ‘real’.

Today there are proponents of more than one model of the universe. In some models, probability is “fundamental and objective,” as Carroll says.

There is absolutely nothing about the present that precisely determines the future…What happens next is unknowable, and all we can say is what the long-term frequency of different outcomes will be.

In other theories, nothing is truly random and probability is entirely subjective. If we knew, not just the wave function, but all the hidden variables, we could predict the future exactly. As it stands, however, we can only make probabilistic predictions.

Finally there is the many-worlds resolution to the problem, which is Carroll’s favorite.

Many-worlds quantum mechanics has the simplest formulation of all the alternatives. There is a wave function, and it obeys Schrödinger’s equation, and that’s all. There are no collapses and no additional variables. Instead, we use Schrödinger’s equation to predict what will happen when an observer measures a quantum object in a superposition of multiple possible states. The answer is that the combined system of observer and object evolves into an entangled superposition. In each part of the superposition, the object has a definite measurement outcome and the observer has measured that outcome.

Everett’s brilliant move was simply to say, “And that’s okay” — all we need to do is recognize that each part of the system subsequently evolves separately from all of the others, and therefore qualifies as a separate branch of the wave function, or “world.” The worlds aren’t put in by hand; they were lurking in the quantum formalism all along.

I find it foolish to ignore that probability theory keeps pointing back at us. Christopher Fuchs, a physicist at the University of Massachusetts, is the founder of a school of thought dubbed QBism (for Quantum Bayesianism). In an interview published in Quanta Magazine, Fuchs explains that QBism goes against a devotion to objectivity “by saying that quantum mechanics is not about how the world is without us; instead it’s precisely about us in the world. The subject matter of the theory is not the world or us but us-within-the-world, the interface between the two.” And later:

QBism would say, it’s not that the world is built up from stuff on “the outside” as the Greeks would have had it. Nor is it built up from stuff on “the inside” as the idealists, like George Berkeley and Eddington, would have it. Rather, the stuff of the world is in the character of what each of us encounters every living moment — stuff that is neither inside nor outside, but prior to the very notion of a cut between the two at all.

The effectiveness of our thoughts on likelihood is astounding. Cognitive neuroscientists suggest that statistics is part of our intuition. They argue that we learn everything through probabilistic inferences. Optical illusions have been understood as the brain’s decision about the most likely source of a retinal image. Anil Seth, at the University of Sussex, argues that all aspects of the brain’s construction of our world are built with probabilities and inferences. The points in the geometry of Tononi’s Integrated Information Theory of consciousness are defined using probability distributions. Karl Friston’s free energy principle, first aimed at a better understanding of how the brain works, defines the boundaries around systems (like cells, organs, or social organizations) with a statistical partitioning – things that belong to each other are defined by the probability that the state of one thing will affect another. Uncertainty defines Claude Shannon’s information entropy and Max Tegmark’s laws of thermodynamics. It’s also interesting that a thought experiment, proposed by James Clerk Maxwell in 1871 and known as Maxwell’s demon, was designed to examine the question of whether or not the second law of thermodynamics is only statistically certain.
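Shannon’s information entropy gives uncertainty itself a number. As a minimal illustrative sketch (mine, not drawn from any of the authors cited), the entropy of a probability distribution, in bits, can be computed like this:

```python
import math

def shannon_entropy(probs):
    """Entropy in bits: H = sum of -p * log2(p) over the outcomes.
    Maximal when outcomes are equally likely, zero when one is certain."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # 1.0 bit: a fair coin
print(shannon_entropy([1.0]))       # 0.0: no uncertainty at all
print(shannon_entropy([0.9, 0.1]))  # about 0.469 bits: a biased coin
```

The fair coin is maximally uncertain at one bit; skewing the probabilities lowers the entropy, and certainty drives it to zero.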

As Carroll sees it, “The study of probability takes us from coin flipping to branching universes.” So what’s me and what’s not me? Mathematics has a way of raising this issue over and over again. Maybe we are beginning to look to it for guidance.

The monad, autopoiesis and Christmas

If you were listening, the season brought the usual surge of Christmas music through all manner of electromagnetic transmission, wired and wireless, causing me to remember again my mild preoccupation with one tune in particular, namely – Do You Hear What I Hear? For the past few years I have found myself listening more closely to the lyrics of this piece because, for me, they created an image related to the many things I have written about mathematics and cognition. I decided this year to try to pin down my thoughts more clearly, and share them.

The song describes the ‘transfer of information,’ if you will, that moves through the wind to the lambs, from the lambs to a shepherd, from the shepherd to the king, and finally from the king to the people. It goes like this:

Said the night wind to the little lamb
Do you see what I see
Way up in the sky little lamb
Do you see what I see
A star, a star
Dancing in the night
With a tail as big as a kite
With a tail as big as a kite

Said the little lamb to the shepherd boy
Do you hear what I hear
Ringing through the sky shepherd boy
Do you hear what I hear
A song, a song
High above the trees
With a voice as big as the sea
With a voice as big as the sea

Said the shepherd boy to the mighty king
Do you know what I know
In your palace wall mighty king
Do you know what I know
A child, a child
Shivers in the cold
Let us bring him silver and gold
Let us bring him silver and gold

Said the king to the people everywhere
Listen to what I say
Pray for peace people everywhere
Listen to what I say
The child, the child
Sleeping in the night
He will bring us goodness and light
He will bring us goodness and light

The wind perceives and communicates what it sees to the lamb. The lamb hears the wind, as a song, a formulation, and somehow communicates what he hears to the boy. The boy then knows something, has a fully conscious perception, which he brings to the king (the one responsible for organizing the human world) and from there it is broadcast so that everyone knows.

I was raised Catholic and so I remember the birth of Jesus described to us as the marriage of heaven and earth, which may be said to be the reconciliation of the eternal and the temporal, or the ideal and the instantiated. It’s the last of these that has gotten considerable attention from me, in these past many years, as I have worked to square conceptual reality with physical reality through a refreshed look at mathematics. And so the song got my attention because it suggests a continuum of knowing, from the wind to the King, and a oneness to the world of the physical and the divine. The idea that sensation and cognition are somehow in everything reminds me of the polymath and mathematician Leibniz’s monads for one thing, and cognition as understood by biologists Francisco Varela and Humberto Maturana. Drawing on the rigor of his work in logic and mathematics, together with what he understood about the physical world and his faith in reason, Leibniz dissociated ‘substance’ from ‘material’ and reasoned that the world was not built from passive material but from fundamental objects he called monads – simple mind-like substances equipped with perception and appetite. But the monad takes up no space, like a mathematical point. I wrote about these things in 2012 and made this remark:

All of this new rumbling about mathematics and reality encourages a hunch that I have had for a long time – that the next revolution in the sciences will come from a newly perceived correspondence between matter and thought, between what we are in the habit of distinguishing as internal and external experience, and it will enlighten us about ourselves as well as the cosmos. New insights will likely remind us of old ideas, and the advantage that modern science has over medieval theology will wane. I expect mathematics will be at the center of it all.

For Varela and Maturana, every organism lives in a medium to which it is structurally coupled and so the organism can be said to already have knowledge of that medium, even if only implicitly. Living systems exist in a space that is both produced and determined by their structure. Varela and Maturana extend the notion of cognition to mean all effective interactions – action or behavior that accomplishes the continual production of the system itself. “All doing is knowing and all knowing is doing,” as they say in The Tree of Knowledge. I wrote about some of the implications of this idea last year.

There is certainly mystery in Christmas images, from the return of the life-giving presence of the sun on the solstice, to the generous red-suited giver of gifts who lives where there is no life, to the unexpected marriage of heaven and earth. The song Do You Hear What I Hear? has an interesting history. It was written in 1962 by Noel Regney and Gloria Shayne. Regney wrote the lyrics. He was a French-born musician and composer forced into the German army by Hitler’s troops during World War II. He became a member of the French underground and, while in the required German uniform, he collected information and worked in league with the French resistance. He moved to Manhattan in 1952 and continued his career as a composer. Although he once expressed that he had no interest in writing Christmas songs, amidst the distress of the Cuban missile crisis in October of 1962, he has said that he was inspired to write the lyrics in question when he saw the hopeful smiles of two babies in strollers, in friendly exchange on a street in Manhattan.

I’m not arguing that my observation of the lyrics defends any particular religious perspective. Rather, I want to express that I can’t help but notice that the song sits comfortably within world views once considered by a 17th century polymath, known for his development of the calculus, and by 20th century biologists whose work redefines life as well as our experience of reality! And there is value in taking note of unintended science-like perspectives in religious images. Even the notion of The Word in Christian literature, translated from the Greek logos, is replete with fundamental views of reality in Ancient Greek philosophy. For the Stoics, logos was reason both in the individual and in the cosmos. It was nature as well as God.

Religion and science have a common ancestor and may have a shared destiny.