It begins with a very convincing narrative substantiating the presence of multiple personalities in individuals who experience dissociation. One of the most remarkable was the case (reported in Germany in 2015) of a woman who had dissociated personalities, some of whom were blind.

The woman exhibited a variety of dissociated personalities (“alters”), some of which claimed to be blind. Using EEGs, the doctors were able to ascertain that the brain activity normally associated with sight wasn’t present while a blind alter was in control of the woman’s body, even though her eyes were open. Remarkably, when a sighted alter assumed control, the usual brain activity returned.

This was a compelling demonstration of the literally blinding power of extreme forms of dissociation, a condition in which

the psyche gives rise to multiple, operationally separate centers of consciousness, each with its own private inner life. (emphasis added)

The history of cases of dissociated personalities goes back to the late 1800s, and the authors tell us that the literature provides significant evidence that “the human psyche is constantly active in producing personal units of perception” – what we would call selves. While it continues to be unclear how this happens, they argue that the development of selves, or personal units of perception, should play a role in how we understand “what is and is not possible in nature.”

The case they make requires an appeal to alternative philosophical perspectives, specifically *physicalism*, *constitutive panpsychism*, and *cosmopsychism*. Proponents of physicalism believe that we should be able to understand mental states through a thorough analysis of brain processes. The ongoing problem with this expectation is that there is still no way to connect *feelings* to different arrangements of physical stuff. Constitutive panpsychism is the idea that what we call *experience* is inherent in every physical thing, even fundamental particles. Human consciousness would somehow be built “by a combination of the subjective inner lives of the countless physical particles that make up our nervous system.” But, the authors argue, the articulation of this perspective does not provide a way to understand how lower-level points of view (atoms and molecules) would combine to produce higher-level points of view (human experience). The alternative would be that consciousness is fundamental in nature but not fragmented. This is cosmopsychism which, the authors say, is essentially the classic *idealism*, where the objects of our experience depend on something more fundamental than particles, and that fundamental thing is more like *mind* or thought than matter.

The difficulty with this view is understanding how various private conscious centers (like you and everyone around you) emerge from a ‘universal consciousness.’ Keying on this question is what makes the presence of multiple personalities in one individual a useful indicator of how to think about this larger question.

Kastrup’s paper, on which this very readable Scientific American article is based, is steeped in the language of philosophy. He works to unpack the mainstream physicalist perspective and why it doesn’t work, and then he examines a number of panpsychist views and their weaknesses. For his own argument, he relies most heavily on a proposal from philosopher Itay Shani.

Shani does still postulate a duality in cosmic consciousness to account for the clear qualitative differences between the outer world we, as relative subjects, perceive and measure and the inner world of our thoughts and feelings. He calls it the ‘lateral duality principle’ (Shani 2015, p. 412) and describes it thus:

[Cosmic consciousness] exemplifies a dual nature: it has a *concealed* (or enfolded, or implicit) side to its being, as well as a *revealed* (or unfolded, or explicit) side; the former is an intrinsic dynamic domain of creative activity, while the latter is identified as the outer, observable expression of that activity. (ibid., original emphasis)

Kastrup’s thinking is in line with Shani’s, but he goes to great lengths to examine the weaknesses in Shani’s view. For the remainder of the paper, Kastrup focuses on addressing the following questions: how do fleeting experiential qualities arise out of “one enduring cosmic consciousness”; what causes individual experiences to be private; how can the physical world we measure be explained in terms of a concealed, thoughtful order; why does brain function correlate so well with our awareness if it doesn’t generate it; and, finally, why are we all imagining the same world outside the control of our personal volition?

Kastrup’s analysis of these questions is thorough and precise. He uses the phenomenon of dissociated personalities (which he calls alters) to address the privacy of individual experiences, since the alters within one individual are nonetheless private from each other, and he uses the functional brain scans that distinguish actual alters from ones that are merely acted out to imagine how each of us is the result of “cosmic level dissociative processes.”

These are difficult ideas to accept given what we have come to expect from the sciences. But I will point out that aspects of these proposals run parallel to ones proposed by contemporary neuroscientists and physicists. The intimate connection between physics and mathematics always raises questions about the relatedness of mind and matter.

For the 17th-century mathematician and philosopher Gottfried Wilhelm Leibniz, the fundamental substance of the universe could not be material. It had to be something undividable, something resembling a mathematical point more than a speck of dust. The material in our experience is then somehow a consequence of the relations among these non-material substances, which actually resemble ‘mind’ more than ‘matter.’

For physicist and author David Deutsch, information and knowledge are the fundamentals of physical life. In his book *The Beginning of Infinity*, Deutsch compares and contrasts human brains and DNA molecules. “Among other things,” he says, they are each “general-purpose information-storage media….” And so Deutsch sees biological information and explanatory information each as instances of knowledge which, he says, “is very unlikely to come into existence other than through the error-correcting process of evolution or thought.”

The Integrated Information Theory of Consciousness proposed by neuroscientist Giulio Tononi, and defended by neuroscientist Christof Koch, suggests that some degree of consciousness is an intrinsic fundamental property of every physical system.

Also, cosmologist and author Max Tegmark is of the opinion that if we want to understand all of nature we have to consider all of it together. For Tegmark there are three pieces to every puzzle: the thing being observed; the environment of the thing being observed (where there may be some interaction); and the observer. He identifies three realities in his book *Our Mathematical Universe* – external reality, consensus reality, and internal reality.
External reality is the physical world which we believe would exist even if we didn’t (and is described in physics mathematically). Consensus reality is the shared description of the physical world that self-aware observers agree on (and it includes classical physics). Internal reality is the way you subjectively perceive the external reality. As with many ideas in physics, the universe is understood in terms of information, and Tegmark has said that he thinks that consciousness is the way information ‘feels’ when processed in complex ways.

It seems to me that a similar insight into what we have been overlooking, about ourselves and our world, is being approached from several directions and in languages specific to individual disciplines. The ones proposed by physicists and neuroscientists are held together with mathematics. But they all bring to mind, again, something I thought when I watched my mother’s mind change with the development of a tumor in the right frontal lobe of her brain. Among the many things I questioned was how it is that the cells in her body could produce her experience if something like consciousness or thought did not already exist in the world that created her.


The *Nature* article by Alison Abbott tells us that the grid-cell-like coding was so good that the virtual rat was even able to learn short-cuts in its virtual world. And here’s an interesting response to the work from neuroscientist Edvard Moser, a co-discoverer of biological grid cells:

“This paper came out of the blue, like a shot, and it’s very exciting,” says neuroscientist Edvard Moser at the Kavli Institute for Systems Neuroscience in Trondheim, Norway. Moser shared the 2014 Nobel Prize in Physiology or Medicine for his co-discovery of grid cells and the brain’s other navigation-related neurons, including place cells and head-direction cells, which are found in and around the hippocampus region.

“It is striking that the computer model, coming from a totally different perspective, ended up with the grid pattern we know from biology,” says Moser. The work is a welcome confirmation that the mammalian brain has developed an optimal way of arranging at least this type of spatial code, he adds.

There is something provocative about measuring the brain’s version of grid cell navigation against this emergent but simulated grid cell action.

In Nature’s News and Views, Francesco Savelli and James J. Knierim tell us a bit more about the study. First, for the sake of clarity, what researchers call deep learning is a kind of machine learning characterized by layers of computations, structured in such a way that the output from one computation becomes the input of another. Inputs and outputs are defined by a *transformation* of data, or information, being received by each layer. The data is translated into “compact representations” that promote the success of the task at hand – like translating pixel data into a face that can be recognized. A system like this can learn to process inputs so as to achieve particular outputs. The extent to which each of the computations, in each of the layers, affects the final outcome is determined by how they are weighted. With optimization algorithms, these weights are adjusted to improve results. Deep learning networks have been successful with computer vision, speech recognition, and games, among other things. But navigating oneself through the space of one’s environment is a fairly complex task.
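The layered picture just described can be sketched in a few lines of code. This is only an illustration of the idea: the layer sizes, the random weights, and the ReLU nonlinearity are my own arbitrary choices, not details of the network in the study.

```python
import random

random.seed(0)

def layer(inputs, weights, biases):
    # One layer: each output is a weighted sum of the inputs plus a bias,
    # passed through a nonlinearity (ReLU). Its output becomes the input
    # of the next layer.
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def random_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

# Hypothetical sizes: 8 input features -> 16 hidden units -> 4 outputs.
W1, b1 = random_matrix(16, 8), [0.0] * 16
W2, b2 = random_matrix(4, 16), [0.0] * 4

x = [random.gauss(0, 1) for _ in range(8)]  # raw input (e.g. pixel values)
hidden = layer(x, W1, b1)                   # a compact intermediate representation
output = layer(hidden, W2, b2)              # the final output for the task

print(len(hidden), len(output))  # 16 4
```

Training would adjust the entries of W1 and W2 (the weights) with an optimization algorithm so that the outputs improve on the task; only the forward pass is shown here.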

The research that led to Moser’s Nobel Prize in 2014 was the discovery of a kind of family of neurons that produces the cognitive maps we develop of our environments. There are place cells, neurons that fire when an organism is in a particular position in an environment, often with landmarks. There are head-direction neurons that signal where the animal seems to be headed. There are also neurons that respond to the presence of an edge to the environment. And, most relevant here, there are grid cells. Grid cells fire when an animal is at any of a set of points that define a hexagonal grid pattern across its environment. Each firing maps to a point on the ground. Grid cells contribute to the animal’s sense of position, and correspond to the direction and distance covered by some number of steps taken.
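A toy model of a single grid cell, my own sketch rather than anything from the study: the cell “fires” whenever the animal’s position falls within a fixed radius of any vertex of a triangular lattice, the arrangement whose firing fields trace the hexagonal pattern described above.

```python
import math

SPACING = 1.0  # distance between neighbouring firing fields (arbitrary units)
RADIUS = 0.2   # how close the animal must be for the cell to fire

def nearest_vertex_distance(x, y):
    # Lattice basis vectors (1, 0) and (1/2, sqrt(3)/2) generate the
    # triangular lattice; scan a patch of it for the nearest vertex.
    best = float("inf")
    for i in range(-10, 11):
        for j in range(-10, 11):
            px = (i + 0.5 * j) * SPACING
            py = (math.sqrt(3) / 2) * j * SPACING
            best = min(best, math.hypot(x - px, y - py))
    return best

def fires(x, y):
    return nearest_vertex_distance(x, y) < RADIUS

print(fires(0.0, 0.0))  # True: the animal sits on a firing field
print(fires(0.5, 0.3))  # False: between firing fields
```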

Banino and colleagues wanted to create a mechanism for self-locating in a deep-learning network. Such a mechanism is referred to as path integration.

Because path integration involves remembering the output from the previous processing step and using it as input for the next, the authors used a network involving feedback loops. They trained the network using simulations of pathways taken by foraging rodents. The system received information about the simulated rodent’s linear and angular velocity, and about the simulated activity of place and head-direction cells…
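Stripped of the network machinery, path integration is dead reckoning: a remembered position estimate is updated, step by step, from velocity and heading inputs. A minimal sketch of that loop (my own simplification, not the authors’ architecture):

```python
import math

def integrate_path(start, steps):
    # The state (x, y, heading) produced by one step is the input to the
    # next, mirroring the feedback loop in the recurrent network.
    x, y, heading = start
    for speed, angular_velocity in steps:
        heading += angular_velocity     # update the head direction
        x += speed * math.cos(heading)  # dead-reckon the new position
        y += speed * math.sin(heading)
    return x, y, heading

# Two unit steps straight ahead, then a 90-degree turn and one more step.
pos = integrate_path((0.0, 0.0, 0.0),
                     [(1.0, 0.0), (1.0, 0.0), (1.0, math.pi / 2)])
print(pos)  # approximately (2.0, 1.0, pi/2)
```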

And this is what happened:

The authors found that patterns of activity resembling grid cells spontaneously emerged in computational units in an intermediate layer of the network during training, even though nothing in the network or the training protocol explicitly imposed this type of pattern. The emergence of grid-like units is an impressive example of deep learning doing what it does best: inventing an original, often unpredicted internal representation to help solve a task.

These grid-like units allowed the network to keep track of position, but whether they would function in the network’s navigation to a goal was still a question. They addressed this question by adding a reinforcement-learning component. The network learned to assign values to particular actions at particular locations, and higher values were assigned to actions that brought the simulated animal closer to a goal.

The grid-like representation markedly improved the ability of the network to solve goal-directed tasks, compared to control simulations in which the start and goal locations were encoded instead by place and head-direction cells.
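The reinforcement-learning idea, assigning higher values to actions that bring the simulated animal closer to a goal, can be illustrated with tabular Q-learning on a toy corridor (my own illustration; the paper’s agent is far more elaborate):

```python
import random

# Toy value assignment: Q-learning on a corridor of 5 cells with the
# goal at cell 4. Actions that bring the agent closer to the goal end
# up with higher learned values.
random.seed(0)
N, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):
    s = 0
    while s != GOAL:
        if random.random() < eps:                      # explore
            a = random.choice(ACTIONS)
        else:                                          # exploit
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# Stepping toward the goal is now valued higher than stepping away.
print(Q[(2, +1)] > Q[(2, -1)])  # True
```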

Unlike the navigation systems developed by the brain, in this artificial network, the place cell layer is not changed during the training that affects grid cells. But the way that grid and place cells influence each other in the brain is not well understood. Further development of the artificial network might help unravel their interaction.

From a broader perspective, it is interesting that the network, starting from very general computational assumptions that do not take into account specific biological mechanisms, found a solution to path integration that seems similar to the brain’s. That the network converged on such a solution is compelling evidence that there is something special about grid cells’ activity patterns that supports path integration. The black-box character of deep learning systems, however, means that it might be hard to determine what that something is.

There is clear pragmatic promise in this research, involving both AI and its many applications, as well as cognitive neuroscience. But I find it striking for a different reason: it seems to provide something new, and provocative, about mathematics’ ubiquitous presence. When I first learned about the action of grid cells I was impressed with the way this fully biological, unconscious, cognitive mechanism resembled the abstract coordinate systems in mathematics. But here there is an interesting reversal. Here we see the biological one emerging, without our direction, from a system that owes its existence entirely to mathematics. It puts mathematics somewhere in between everything, in a way that we haven’t quite grasped. It’s an intelligence we can’t locate.


…Numbers emerging from one kind of geometric world matched exactly with very different kinds of numbers from a very different kind of geometric world.

To physicists, the correspondence was interesting. To mathematicians, it was preposterous.

The surprise first occurred in the early nineties, like an alert that there is a *mirror symmetry* between two different mathematical structures, and mathematicians have been investigating it for almost three decades now. The Quanta Magazine article reports that they seem to be close to being able to explain the source of the mirroring. Kevin Hartnett, author of the article, characterizes the effort as one that could produce “a form of geometric DNA – a shared code that explains how two radically different geometric worlds could possibly hold traits in common.” (I like this biologically themed analogy.)

The whole mirroring phenomenon rests largely on the development of string theory in physics, where theorists found that the strings that they hoped were the fundamental building blocks of the universe required six more dimensions than are contained in Einstein’s four-dimensional spacetime. String theorists answered the demand by finding two ways to account for the missing six dimensions – one from symplectic geometry and the other from complex geometry. These are the two distinct arrangements of geometric ideas that mathematicians are now examining.

The nature of a symplectic geometric space is grounded in the idea of *phase space*, where each point actually represents the state of a system at a given time. A phase space is defined by patterns in data, not by the spatial arrangement of objects. It is a multidimensional space in which each axis corresponds to a coordinate that specifies an aspect of the physical system. When all the coordinates are represented, a point in the space corresponds to a state of the system. The nature of complex geometry, on the other hand, has its roots in algebraic geometry, where the objects of study are the graphed solutions to polynomial equations. Here the ordered pairs represent positions on a grid (like those x,y pairs we learn about in high school), or complex numbers in a complex space, where those numbers are solutions to equations. The beauty of this arrangement is that the properties possessed by the geometric representation of these solutions (or the objects they produce) tell us more about the equations they represent than we would know without these representations. But wherever they are, these solutions are rigid geometric objects. The phase space is more flexible. Hartnett tells us that:

In the late 1980s, string theorists came up with two ways to describe the missing six dimensions: one derived from symplectic geometry, the other from complex geometry. They demonstrated that either type of space was consistent with the four-dimensional world they were trying to explain. Such a pairing is called a duality: Either one works, and there’s no test you could use to distinguish between them.
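The contrast between the two kinds of space can be made concrete with a toy example of my own: a single point in phase space encodes the full state of a system, while the objects of complex or algebraic geometry are rigid solution sets of polynomial equations.

```python
import math

# Phase space: one point represents the *state* of a pendulum at an
# instant, not a location in physical space. Each coordinate is one
# aspect of the system.
pendulum_state = (0.3, -1.2)  # (angle in radians, angular momentum)

# Algebraic geometry: the solution set of x^2 + y^2 - 1 = 0 is a rigid
# geometric object, the unit circle. A point belongs to the object only
# if it solves the equation.
def solves_circle_equation(x, y, tol=1e-9):
    return abs(x ** 2 + y ** 2 - 1.0) < tol

print(solves_circle_equation(math.cos(0.7), math.sin(0.7)))  # True
print(solves_circle_equation(0.5, 0.5))                      # False
```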

Robert Dijkgraaf, Director and Leon Levy Professor at the Institute for Advanced Study, tells an interesting story. Around 1990, a group of string theorists asked geometers to calculate a number related to the number of curves, of a particular degree, that could be wrapped around the kind of space or manifold that is heavily used in string theory (a Calabi-Yau space). A result from the nineteenth century established that the number of lines, or degree-one curves, is equal to 2,875. The number of degree-two curves is 609,250. This was computed around 1980. The number of curves of degree three had not been computed, and this was the one geometers were asked to compute.

The geometers devised a complicated computer program and came back with an answer. But the string theorists suspected it was erroneous, suggesting a mistake in the code. Upon checking, the geometers confirmed there was one. But how did the physicists know?

String theorists had already been working to translate this geometric problem into a physical one. In doing so, they had developed a way to calculate the number of curves of any degree all at once. It’s hard to overstate the shock of this result in mathematical circles.

The duality appeared to run deep, and mathematicians and physicists alike began to try to understand the underlying feature that would account for the mirroring phenomenon. A proposed strategy is to deconstruct a shape in the symplectic world in such a way that it can be reconstructed as a complex shape. The deconstruction can make a multidimensional symplectic manifold easier to visualize, and it can also reduce one of the mirror spaces into building blocks that can be used to construct the other. This would likely lead to a better understanding of what connects them.

Again from Dijkgraaf:

Mathematics has the wonderful ability to connect different worlds. The most overlooked symbol in any equation is the humble equal sign. Mirror symmetry is a perfect example of the power of the equal sign. It is capable of connecting two different mathematical worlds. One is the realm of symplectic geometry, the branch of mathematics that underlies much of mechanics. On the other side is the realm of algebraic geometry, the world of complex numbers. Quantum physics allows ideas to flow freely from one field to the other and provides an unexpected “grand unification” of these two mathematical disciplines.

This is a remarkable story, and there are many in mathematics. I’ve always been captivated a bit by how the spatial ideas of this discipline, once charged with measuring the earth, became the abstract ideals described by Euclid, that were then stretched to accommodate spaces with non-Euclidean shapes, that include our spacetime, and were further developed to create spaces defined by patterned data of any kind – the symplectic kind. In this story, mathematicians, like experimentalists, become charged with the need to find reason for an unexpected observation. But it is an observation of the fully abstract world that mathematics built. What are these abstract worlds made of? How do they become more than we can see? I’m well aware of the lack of precision in these questions, but there is value in stopping to consider them. To what extent are these abstract spaces objective? *Where* are these investigations happening? There is no doubt that we have yet to understand what we realize when we find mathematics.

One of the nice things that the article points out is that there are theorems that have a number of different proofs, each one telling you something different about the theorem or the structures involved in the proof of the theorem.

An example comes to mind — which is not in our book but is very fundamental — Steinitz’s theorem for polyhedra. This says that if you have a planar graph (a network of vertices and edges in the plane) that stays connected if you remove one or two vertices, then there is a convex polyhedron that has exactly the same connectivity pattern. This is a theorem that has three entirely different types of proof — the “Steinitz-type” proof, the “rubber band” proof and the “circle packing” proof. And each of these three has variations.

Any of the Steinitz-type proofs will tell you not only that there is a polyhedron but also that there’s a polyhedron with integers for the coordinates of the vertices. And the circle packing proof tells you that there’s a polyhedron that has all its edges tangent to a sphere. You don’t get that from the Steinitz-type proof, or the other way around — the circle packing proof will not prove that you can do it with integer coordinates. So, having several proofs leads you to several ways to understand the situation beyond the original basic theorem.
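The theorem’s hypothesis, that the planar graph stays connected after removing any one or two vertices, is easy to check by brute force. Here is a sketch for the graph of the cube (it checks only the connectivity condition; verifying planarity and constructing the polyhedron are the hard parts the proofs supply):

```python
from itertools import combinations

# Vertices 0-3 form the bottom face of a cube, 4-7 the top face.
CUBE = {0: {1, 3, 4}, 1: {0, 2, 5}, 2: {1, 3, 6}, 3: {0, 2, 7},
        4: {0, 5, 7}, 5: {1, 4, 6}, 6: {2, 5, 7}, 7: {3, 4, 6}}

def connected_without(graph, removed):
    # Depth-first search on the graph with the `removed` vertices deleted.
    nodes = [v for v in graph if v not in removed]
    if not nodes:
        return True
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in graph[stack.pop()]:
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def steinitz_hypothesis(graph):
    # Still connected after deleting any one or two vertices?
    return all(connected_without(graph, cut)
               for k in (1, 2) for cut in combinations(graph, k))

print(steinitz_hypothesis(CUBE))  # True: a convex polyhedron (the cube) exists
```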

This kind of discussion highlights how mathematical ideas can be multi-aspected, the very thing that makes a mathematical idea powerful and difficult to categorize in our experience. But in the lower right margin of the article were links to related articles, and it was here that I found *Michael Atiyah’s Imaginative State of Mind*. This piece was written about a year ago, when Michael Atiyah hosted a conference at the Royal Society of Edinburgh on *The Science of Beauty*. There is a video of his introductory remarks on YouTube that is worth a listen. The article was built around Atiyah’s responses to some questions that the authors were able to ask him on the occasion of the conference.

Roughly speaking, he has spent the first half of his career connecting mathematics to mathematics, and the second half connecting mathematics to physics….

….Now, at age 86, Atiyah is hardly lowering the bar. He’s still tackling the big questions, still trying to orchestrate a union between the quantum and the gravitational forces. On this front, the ideas are arriving fast and furious, but as Atiyah himself describes, they are as yet intuitive, imaginative, vague and clumsy commodities.

I felt encouraged by the refreshingly sensory ways Atiyah characterized his experience as a mathematician, as in his answer to the question of whether he had always had mathematical dreams:

The crazy part of mathematics is when an idea appears in your head. Usually when you’re asleep, because that’s when you have the fewest inhibitions. The idea floats in from heaven knows where. It floats around in the sky; you look at it, and admire its colors. It’s just there. And then at some stage, when you try to freeze it, put it into a solid frame, or make it face reality, then it vanishes, it’s gone. But it’s been replaced by a structure, capturing certain aspects, but it’s a clumsy interpretation.

And when asked about the two works for which he is well known (the index theorem and K-theory) he suggested this very visual way of describing K-theory:

The index theorem and K-theory are actually two sides of the same coin. They started out different, but after a while they became so fused together that you can’t disentangle them. They are both related to physics, but in different ways.

K-theory is the study of flat space, and of flat space moving around. For example, let’s take a sphere, the Earth, and let’s take a big book and put it on the Earth and move it around. That’s a flat piece of geometry moving around on a curved piece of geometry. K-theory studies all aspects of that situation — the topology and the geometry. It has its roots in our navigation of the Earth.

The maps we used to explore the Earth can also be used to explore both the large-scale universe, going out into space with rockets, and the small-scale universe, studying atoms and molecules. What I’m doing now is trying to unify all that, and K-theory is the natural way to do it. We’ve been doing this kind of mapping for hundreds of years, and we’ll probably be doing it for thousands more.

I found a nice description of how the index theorem can connect the curvature of a space to its topology (or the number of holes it has).
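The simplest example of such a connection is the Gauss-Bonnet theorem, which the index theorem vastly generalizes: the integral of the Gaussian curvature over a closed surface equals 2π times its Euler characteristic. For the unit sphere (curvature 1, Euler characteristic 2) this can be checked numerically, in a small sketch of my own:

```python
import math

def sphere_curvature_integral(n=200):
    # Integrate K = 1 over the unit sphere with a midpoint rule in
    # spherical coordinates; the area element is sin(theta) dtheta dphi.
    d = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * d
        for j in range(2 * n):
            total += 1.0 * math.sin(theta) * d * d  # K * area element
    return total

result = sphere_curvature_integral()
print(abs(result - 2 * math.pi * 2) < 1e-3)  # True: integral = 2*pi*chi with chi = 2
```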

One of the things Atiyah is committed to at the moment is reversing the mistake of ignoring the small effect of gravity on an electron or proton. He says he’s going back to Einstein and Dirac and looking at them again, and he thinks he sees things that people have missed. “If I’m wrong,” he says, “I made a mistake. But I don’t think so.”

At the end of introductory remarks he made at *The Science of Beauty* conference he said that he found himself closer to the mystical views of Pythagoras than to those who completely rejected mysticism. “A little bit of mysticism is important in all forms of life.”

When asked if he thought a computer could be made to recognize beauty, his response led to his characterizing the mind as a *parallel universe*. More than just logic, the mind has aspects that recognize states. These are not verbal or pictorial states, but conceptual states. And beauty lives somewhere in the mind. This is the kind of insight that doing mathematics can produce. And it will, I believe, lead us to completely new ideas about who we are and what it is that our minds may be producing.

A last thought on mathematics:


People think mathematics begins when you write down a theorem followed by a proof. That’s not the beginning, that’s the end. For me the creative place in mathematics comes before you start to put things down on paper, before you try to write a formula. You picture various things, you turn them over in your mind. You’re trying to create, just as a musician is trying to create music, or a poet. There are no rules laid down. You have to do it your own way. But at the end, just as a composer has to put it down on paper, you have to write things down.

Grosholz is the author of many books that include works on the philosophy of mathematics as well as works of poetry. Her latest is *Starry Reckoning: Reference and Analysis in Mathematics and Cosmology*. What follows is based on a piece that she contributed to a book she edited with Herbert Breger. The book is called *The Growth of Mathematical Knowledge*, and her piece is given the title *The Partial Unification of Domains, Hybrids, and the Growth of Mathematical Knowledge*. Here Grosholz argues that, unlike what has been considered before, different branches of mathematics do not reduce to other branches. Philosophers of mathematics have discussed the possibility that geometry can be reduced to arithmetic, arithmetic to predicate logic, and arithmetic and geometry to set theory. This is understood in much the same way that one might claim that biology can be reduced to chemistry and chemistry to physics. The vocabulary of the reduced theory is redefined in terms of the reducing theory. In the sciences, the reducing theory has been thought to play an explanatory role, suggesting an inherent unity among the various scientific disciplines. But in mathematics the so-called reducing theory is not used so much as an explanation of the reduced ideas, but more as a foundation for them. And mathematicians have long had difficulty with foundational questions. Grosholz, on the other hand, proposes that mathematics is a collection of rationally related but autonomous domains, and then highlights the potent role of what she calls mathematical *hybrids*.

She explains that in Greek mathematics the autonomy of domains is clear. Geometry is about points, lines, planes, and figures, and geometric problems involve relations between parts of the whole of spatial figures. Arithmetic is about numbers, and problems in arithmetic involve monotonic, discrete succession. The vocabulary of logic is one of terms, propositions, and arguments, and problems in logic involve ideas of inclusion, exclusion, consistency, and inconsistency. While these separate domains may seem to resist assimilation, 17th-century mathematics introduced some unifications. Among these unifications are Descartes’ application of algebraic techniques to geometric constructions, and Leibniz’s application of combinatorics to an analysis of curves. Grosholz spends some time on each of these. She points out that Leibniz was fascinated with formal languages and number theory, and that he believed that the art of combinations was central to the art of discovery. She argues that Leibniz’s investigation of algebraic forms in the calculus is grounded in “an imperfect but suggestive analogy between numbers and figures.” The infinite summing of infinitesimal differences, which becomes the integral, emerges from his ability to bridge geometric ideas about a curve (like tangent, arc length, and area) with algebraic equations; through the notion of an infinite-sided polygon approximating the curve, patterns of integers were also connected. Here the mathematical hybrid emerges: an abstract structure that rationally relates different domains in the service of problem solving. On a deeper level, objects in each domain must actually exhibit features of both domains, despite the instability created by their differences. But, Grosholz argues, this instability does not mean that hybrids are defective. They are held together by the clarity of the domains from which they emerge, and the abstract structures that link them.
“Logical gaps are to be found at the heart of many hybrids,” Grosholz explains, but imaginative analogies inspire the kind of revision and invention that promotes the growth of mathematical knowledge.
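Leibniz’s infinite-sided polygon can be made concrete in a few lines (a modern sketch, of course, not his notation): the length of a polygon inscribed in the curve y = x² approaches, as its number of sides grows, the arc length that the integral of √(1 + (2x)²) computes.

```python
import math

def polygon_arc_length(n):
    # Inscribe an n-sided polygon in the curve y = x^2 over [0, 1]
    # and sum the lengths of its sides.
    points = [(i / n, (i / n) ** 2) for i in range(n + 1)]
    return sum(math.dist(points[k], points[k + 1]) for k in range(n))

# Closed form of the arc length integral for y = x^2 on [0, 1].
exact = (2 * math.sqrt(5) + math.asinh(2)) / 4

# Refining the polygon drives its length toward the integral's value.
print(abs(polygon_arc_length(10000) - exact) < 1e-6)  # True
```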

I was always impressed by the fact that these intuitive leaps that Leibniz took, while prompting subsequent generations to feel the need to bring acceptable rigor to the notions, were nonetheless substantiated. Grosholz lends some important detail to the picture Richard Courant paints of 17th century pioneers of mathematics in his classic text, *What is Mathematics?*

In a veritable orgy of intuitive guesswork, of cogent reasoning interwoven with nonsensical mysticism, with a blind confidence in the superhuman power of formal procedure, they conquered a mathematical world of immense riches.

This talk of hybrids reminded me of the *interdisciplinarity* that Virginia Chaitin writes about. I wrote this in an earlier post about one of her papers:

What she proposes is not the kind of interdisciplinary work that we’re accustomed to, where the results of different research efforts are shared or where studies are designed with more than one kind of question in mind. The kind of interdisciplinary work that Chaitin is describing involves adopting a new conceptual framework, borrowing the very way that understanding is defined within a particular discipline, as well as the way it is explored and the way it is expressed. The results, as she says, are the “migrations of entire conceptual neighborhoods that create a new vocabulary.”

In her own words:

…an epistemically fertile interdisciplinary area of study is one in which the original frameworks, research methods and epistemic goals of individual disciplines are combined and recreated yielding novel and unexpected prospects for knowledge and understanding. This is where interdisciplinary research really proves its worth.

Grosholz’s identification of the hybrid is an important insight, and I would argue that it has implications beyond mathematics. It may be that because the objects of mathematics are so *clean*, or unambiguous, the value of the hybrid is more easily observed. But my hunch is that productive analogies likely belong to the stuff of life itself.

It happens in the other direction as well. Mathematician and computer scientist Gregory Chaitin has approached biology mathematically, not in the sense of modeling behavior, but more in the way of expressing the creativity of evolution using the creativity of mathematics. Here’s a little piece of a post from about six years ago:

Chaitin believes that Gödel and Turing (in his 1936 paper) opened the door to a provocative connection between mathematics and biology, between life and software. I’ve looked at how Turing was inspired by biology in two of my other posts. They can be found here and here.

But Chaitin is working to understand this connection with a new branch of mathematics called Metabiology. I very much enjoyed hearing him describe the history of the ideas that inspired him in his talk *Life as Evolving Software*, in which he says:

“After we invented software we could see that we were surrounded by software. DNA is a universal programming language and biology can be thought of as software archeology – looking at very old, very complicated software.”

Chaitin is also one of the mathematicians who developed what is known as algorithmic information theory. And I recently happened upon a paper from Giulio Ruffini at Starlab Barcelona, with the title *An Algorithmic Information Theory of Consciousness*. This paper was published near the end of 2017. Ruffini’s research is motivated, to some extent, by the value of being able to provide a metric of conscious states. But the course he’s chosen is described in the abstract:

In earlier work, we argued that the experience we call reality is a mental construct derived from information compression. Here we show that algorithmic information theory provides a natural framework to study and quantify consciousness from neurophysiological or neuroimaging data, given the premise that the primary role of the brain is information processing.

Ruffini argues that characterizing consciousness is “a profound scientific problem,” and progress in this area will have important practical implications with respect to any one of a number of disorders of consciousness. While the paper is mostly aimed at justifying the fit of algorithmic information theory (which he refers to as AIT) to this endeavor, one can also see some of the deeper philosophical convictions that motivate his approach. He says the following, for example, in his introduction:

We begin from a definition of cognition in the context of AIT and posit that brains strive to model their input/output fluxes of information with simplicity as a fundamental driving principle (Ruffini 2007,2009). Furthermore, we argue that brains, agents, and cognitive systems can be identified with special patterns embedded in mathematical structures enabling computation and compression.

But I found the conviction that seems to be driving his perspective clearly laid out in his 2007 paper *Information, complexity, brains and reality (Kolmogorov Manifesto).* There he says that information theory gives us the conceptual framework we need to comprehend how brains and the universe are related. That seems like the really big picture. He also says:

I argue that what we call the universe is an interpreted abstraction—a mental construct—based on the observed coherence between multiple sensory input streams and our own interactions (output streams). Hence, the notion of Universe is itself a model. Rules or regularities are the building blocks of what we call Reality—an emerging concept. I highlight that physical concepts such as “mass”, “spacetime” or mathematical notions such as “set” and “number” are models (rules) to simplify the sensory information stream, typically in the form of invariants. The notion of repetition is in this context a fundamental modelling building block.

Compression is one of the key ideas. Relations that are expressed in equations, or events that are captured by programs have been compressed, and the simplification is productive. The Kolmogorov complexity of a data set, in algorithmic information theory, is defined as the length of the shortest program able to generate it. Experience is a consequence of the brain’s compression (and hence simplification) of an ongoing flood of sensory data. And so one of Ruffini’s ideas is that science is what brains do. And this, he says, is to be taken as a definition of science. Here are a few of the ideas his paper means to address, some more provocative than others:
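The Kolmogorov complexity of a data set is uncomputable in general, but a general-purpose compressor gives a rough, computable stand-in: the length of the compressed data serves as an upper bound on “the shortest program that generates it.” A small sketch of my own, using Python’s standard `zlib`, shows the intuition behind Ruffini’s premise: a stream governed by a simple rule compresses far better than a patternless one.

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude proxy for complexity."""
    return len(zlib.compress(data, 9))

regular = b"abc" * 10000                      # generated by a tiny rule
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(30000))  # no rule to find

print(compressed_size(regular))   # small: the repetition is discovered
print(compressed_size(noisy))     # near 30000: nothing to compress
```

The two inputs have the same length, but the compressor finds the repetition in the first and discovers nothing in the second, which is the sense in which compression measures regularity.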

Reality is a byproduct of information processing.

Mathematics is the only really fundamental discipline and its currency is information.

The nervous system is an information processing tool. Therefore, information science is crucial to understand how brains work.

The brain compression efforts provide the substrate for reality (the brain is the “reality machine”).

The brain is a pattern lock loop machine. It discovers and locks into useful regularities in the incoming information flux, and based on those it constructs a model for reality.

…the concept of repetition is a fundamental block in “compression science.” This concept is rather important and leads brains naturally to the notion of number and counting (and probably mathematics itself)

Compressive processes are probably very basic mechanisms in nervous systems…Counting is just the next step after noticing repetition. And with the concept of repetition also comes the notion of multiplication and primality. More fundamentally, repetition is the core of algorithmics.

This sketchy survey of the paper does not do it justice. But I bring it to your attention as yet another indication that the blend of information theory and biology is running deep.

Anil Seth, at the University of Sussex, proposes that this nontrivial mathematical idea is responsible, not only for how we perceive and learn about the world around us, but also how we perceive *ourselves*. His research begins with the observation that the body creates what we see by organizing sensory signals and prior experience with probabilities. For Seth, we don’t so much *perceive* the world as *generate* it which, he notes, is consistent with a Helmholtzian view of things (see last month’s post). And this *generation of a world* that we experience is not just the organization of signal coming from the outside. The brain is also addressing signal from the inside – like blood pressure or how the heart and internal organs are doing – as much, if not more, than the outside. This side of the brain’s attention concerns control and regulation. Seth argues that the brain is consistently engaged in error-reducing predictive processing, to keep us alive.

With the continuous flow of signal from outside the body and within the body, the brain makes its best guess about what’s happening. Seth calls our shared experience *a controlled hallucination*. What we typically call a hallucination occurs when, for whatever reason, the brain pays less attention to signals coming from outside the body and runs more purely on expectations.

For Seth, conscious experience, and our multilayered experiences of selfhood, are also constructed from the organization of sensory data and internal states, and characterized by the most likely meaning of that data. Even our sense of owning our own bodies is a consequence of the brain’s predictions about the data it receives. Experimental evidence supports this claim. The rubber hand experiment, for example, is one in which an individual is asked to focus on a fake hand while their real hand is kept out of sight. The fake hand will begin to feel like part of the participant’s body when tactile stimuli are arranged to suggest this. In another experiment, the electronic image of a hand is perceived as belonging to the body when it is programmed to flash gently red in time with that individual’s heartbeat.

The brain’s probabilistic judgments are not easily unraveled. They are based on a complex set of physiological processes that include the body’s sense of its own position and its own motion (produced by stimuli that arise from within the body), the body’s sense of its own internal state, as well as what it receives from the five senses that we recognize. What we perceive is some reconciliation of integrated signals and expectation. And when expectations are given priority over incoming data, one will perceive what is not there.
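The reconciliation of expectation and signal can be sketched as precision-weighted Bayesian fusion. This is my own minimal illustration, not Seth’s model: the “percept” is the combination of a prior expectation and a noisy sensory reading, each treated as a Gaussian estimate of the same quantity. When the prior is assigned far more precision than the data, the percept barely moves toward what the senses report, which is one way to picture expectation overriding input.

```python
def fuse(prior_mean, prior_var, obs_mean, obs_var):
    """Posterior mean and variance from two Gaussian estimates of one quantity."""
    w = obs_var / (prior_var + obs_var)        # weight given to the prior
    mean = w * prior_mean + (1 - w) * obs_mean
    var = prior_var * obs_var / (prior_var + obs_var)
    return mean, var

# Balanced precision: the percept sits midway between expectation and signal.
print(fuse(0.0, 1.0, 10.0, 1.0))   # (5.0, 0.5)
# Overweighted prior: the percept stays near expectation despite the signal.
print(fuse(0.0, 0.01, 10.0, 1.0))  # mean near 0.1
```

The second call is the toy analogue of hallucination in Seth’s sense: the update rule is unchanged, but the relative precisions let the expectation dominate.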

In this paradigm, mathematical processes are found living. They are a feature of our biology. But another important implication of this work is that, with all of these error-reducing guesses that the brain is making (using probabilities and past experience), we could be mistaken about *anything* – about what’s out there as well as about ourselves. In the history of science, mathematics has consistently corrected mistakes in our perception of reality – from the earth’s position in the solar system, to the calculus of classical mechanics, to the probability-driven theories of quantum mechanics. So it seems to be addressing our view of things from both ends. As pure structure, related to fundamental life processes, mathematics may also be uniquely capable of clarifying what we can understand about ourselves or, even more importantly, what we can understand about *our relationship to what we believe is our reality*. Insights gained from a cognitive science perspective must, inevitably, connect to cosmological questions like Wheeler’s participatory universe, or the QBism view of the wave function described here. The promise of a genuinely fresh approach to longstanding philosophical issues, like the viability of an objective point of view, intrigues me in these discussions, and mathematics plays a consistently significant role in substantial paradigm shifts like this one.

Nineteenth century France and Germany broke from past ideologies, and new economic and political structures emerged. There were significant developments in science and mathematics and significant growth in specializations. I’ve highlighted the work of Bernhard Riemann often, paying particular attention to his famous 1854 paper *On the hypotheses which lie at the bases of geometry*, to some extent because Riemann acknowledged the influence of philosopher Johann Friedrich Herbart, who pioneered early studies of perception and learning. I wrote a piece for *Plus Magazine* that suggested parallels between Riemann’s insights into the nature of space, quantity and measure in mathematics, and modern studies in cognitive neuroscience that address how number, space, and time are more the result of brain circuitry than features of the world around us.

It became particularly clear to me today that another nineteenth century heavyweight, whose multidisciplinary research spans physics, psychology, and mathematics, was similarly influenced by Herbart. In an essay with the title *The Eye as Mathematician,* Timothy Lenoir discusses Hermann von Helmholtz’s mid-nineteenth century theory of vision, which suggests an intimate link between vision and mathematics. And Lenoir explains that Helmholtz’s theory of vision was “deeply inspired by Herbart’s conception of the symbolic character of space.”

Lenoir sketches out how the brain uses the data it receives to construct an efficient map of the external world. The data may include sensory impressions of color or contour, along with, perhaps, the registration of a light source on a peripheral spot on the retina. The location of the light source is then defined by the successive feelings associated with eye movements that bring the focal part of the eye in line with the light. A correspondence between the arc defined by each positional change in the eyes, and the stimulation of that spot on the retina, is stored in memory and repeated whenever that spot on the retina is stimulated. Helmholtz called these memories *local signs*. They are learned associations among various kinds of sensory data that also include head and body positions. From sequences of sensory inputs, the mind creates pictures, or *symbolic representations*, that provide a practical way for us to handle the world of objects we find around us. Helmholtz is clear, however, that these pictures or symbols are not copies of the things they represent. While causally related to the world around us, the quality of any sensation belongs to the nervous system. For Helmholtz, the things we see are a symbolic shorthand for aggregates of sensory data like size, shape, color, contrast, that have become associated through trial, error and repetition. The more frequently associations occur, the more rapidly linkages are carried out. Symbols then become associated with complexes of sensory data. And, like a mathematician, the brain learns to operate with the symbols instead of working directly with the complex of sensory data. This, Helmholtz argued, is how the constructive nature of perception becomes hidden and nature seems to us to be immediately apparent.

There are other psychological acts of judgment in Helmholtz’s visual theories. The brain has to decide whether a collection of points, for example, generated by stimulation of the retina, does or does not represent a stable object in our presence. To be an object, the points registered on the retina would need to be steady, to not move or change over time. The brain tests their stability by evaluating successive gazes or passes over the object. According to Helmholtz, the collection of points is judged to be a stable object if the squares of the errors, after many passes, are at an acceptable minimum. This is meant in the same sense as the principle of least squares in mathematics. Lenoir calls these measuring mechanisms sensory knowledge, “part of the system of local signs we carry around with us at all times…”
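Helmholtz’s stability test, as Lenoir describes it, amounts to a least-squares judgment: if repeated passes keep landing on nearly the same points, the summed squared errors stay below some tolerance and the points are judged a stable object. A toy version of my own construction, with one-dimensional point positions over several gazes:

```python
def is_stable(passes, tol):
    """passes: list of lists, each giving the measured positions of the
    same points on one gaze.  Judge stability by the mean squared
    deviation of each point from its least-squares (mean) position."""
    n_points = len(passes[0])
    sse = 0.0
    for j in range(n_points):
        samples = [gaze[j] for gaze in passes]
        mean = sum(samples) / len(samples)      # least-squares estimate
        sse += sum((s - mean) ** 2 for s in samples)
    return sse / (len(passes) * n_points) <= tol

steady   = [[1.0, 2.0, 3.0], [1.01, 1.99, 3.02], [0.99, 2.01, 2.98]]
shifting = [[1.0, 2.0, 3.0], [1.5, 2.6, 3.4], [0.3, 1.2, 2.5]]

print(is_stable(steady, 0.01))    # True: errors at an acceptable minimum
print(is_stable(shifting, 0.01))  # False: the pattern moves between gazes
```

The mean of repeated measurements is exactly the value that minimizes the sum of squared errors, which is why this little test is the same principle Gauss used in astronomy.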

Lenoir’s piece also made it clear that, in the mid-1800s, there was significant overlap in the methods and the instruments developed by physiologists and astronomers. Gauss introduced the use of least squares in astronomy. Helmholtz invented the ophthalmometer, an instrument that measures how the eye accommodates changing optical circumstances, which makes the prescription of eyeglasses possible. He described the ophthalmometer as a telescope modified for observations at short distances.

In an article for the *Stanford Encyclopedia of Philosophy*, Lydia Patton also addresses Helmholtz’s work in mathematics.

Even when he was writing about physiology, Helmholtz’s vocation as a mathematical physicist was apparent. Helmholtz used mathematical reasoning to support his arguments for the sign theory, rather than exclusively philosophical or empirical evidence. Throughout his career, Helmholtz’s work is marked by two preoccupations: concrete examples and mathematical reasoning. Helmholtz’s early work in physiology of perception gave him concrete examples of how humans perceive spatial relations between objects. These examples would prove useful to illustrate the relationship between metric geometry and the spatial relations between objects of perception. Later, Helmholtz used his experience with the concrete science of human perception to pose a problem for the Riemannian approach to geometry.

As I read, I felt like I was enjoying just a little sip of the rich confluence of physics, psychology and mathematics. We keep trying to unravel the tight weave that binds the nature of the world, the nature of our perception and experience, and how we pull it all together.

In an article that appeared in a 2006 issue of the Bulletin of Mathematical Biology, immunologist Irun Cohen argues that meaning is not an intrinsic property of an entity but rather emerges from dynamic systems. Cohen’s article was used in last month’s post to explore the idea that information in biological systems feeds back on itself in such a way that modified copies of old information enrich the system with new information, assuming that these modified copies are not destructive to the old information or to the system in general. For Cohen:

Meaning, in contrast to information, is not an intrinsic property of an entity (a word or a molecule, for example); the meaning of an entity emerges from the interactions of the test entity (the word or molecule) with other entities (for example, words move people, and molecules interact with their receptors, ligands, enzymes, etc.). Interactions mark all manifestations of biological structure—molecular through social. Meaning can thus be viewed as the impact of information—what a word means is how people use it; what a molecule means is how the organism uses it; what an organism means is what it does to others and how others respond to it; and so on over the scales life—meaning is generated by interaction.

As last month’s post explained, the ideas expressed in this article are linked to the work of biophysicist and philosopher Henri Atlan. Much of Atlan’s work is directed at understanding the mechanisms of self-organization in systems that are not goal oriented from the outside. Instead, these systems organize themselves in such a way that the meaning of information emerges from the dynamics of the system.

These ideas brought Douglas Hofstadter’s famous text, *Gödel, Escher, Bach,* to mind again. Hofstadter spends a significant amount of time asking questions about the *location* of meaning in order to establish “the universality of at least some messages,” or some information. Meaning, he argues, is an inherent property of a message, if the context that gives it meaning is so natural that it is part of the universal scheme of things. Or, it’s so natural, that it’s everywhere.

It turns out that locating meaning is not a simple matter. In the case of a vinyl recording, for example, we can ask whether the meaning is in the grooves of the record, or in the sound waves produced by the needle on the grooves, in the brain of the listener, or in what the listener has learned about music? In mathematics, is the meaning of symbols coming from chains of human experiences – like collecting and sorting – that are linked by metaphors? Or is it coming from the way relations among abstract objects mirror cognitive processes? Or is it coming from our immersion in the universal properties that they express? Mathematics could look like a game, where the rules are made to establish relations among symbols. To an untrained eye, it all looks fairly arbitrary. But it’s not. And locating its meaning is, perhaps, the way we understand how it is not a game. This is important to our understanding ourselves.

In the preface to the 20th anniversary edition of *Gödel, Escher, Bach*, Hofstadter argues that patterns bring about our self-reflective consciousness – the very thing that is at the heart of mathematical systems:

…the key is not the stuff out of which brains are made, but the patterns that can come to exist inside the stuff of a brain. This is a liberating shift, because it allows one to move to a different level of considering what brains are: as media that support complex patterns that mirror, albeit far from perfectly, the world, of which, needless to say, those brains are themselves denizens – and it is in the inevitable self-mirroring that arises, however impartial or imperfect it may be, that the strange loops of consciousness start to swirl.

It is a particular kind of pattern that Hofstadter has in mind, something he calls a strange loop – patterns that refer back to themselves. While Atlan and Hofstadter are not actually saying the same thing, there is certainly some overlap between Atlan’s focus on self-organizing systems and Hofstadter’s use of self-referencing systems. And so there is no surprise, perhaps, when the Fluid Analogies Research Group (which Hofstadter heads) describes ‘thinking’ as

…a kind of churning, swarming activity in which thousands (if not millions) of microscopic and myopic entities carry out tiny “subcognitive” acts all at the same time, not knowing of each other’s existence, and often contradicting each other and working at cross-purposes. Out of such a random hubbub comes a kind of collective behavior in which connections are made at many levels of sophistication, and larger and larger perceptual structures are gradually built up under the guidance of “pressures” that have been evoked by the situation.

In *Gödel, Escher, Bach*, Hofstadter argues that Euclid actually obscured the paths that geometric ideas could open by allowing the real world meaning of words like *point, line,* and *circle* to persist in his formal system of deductive reasoning. As a result, he explains, “some of the images conjured up by those words crept into the proofs which he created.” It’s a subtle effect, but that’s what’s interesting about it. And hence all of the proofs that attempted to confirm Euclid’s facts about parallel lines were inevitably contaminated by the interplay of everyday intuition and the formal properties of an abstract system. The meaning of objects and propositions in this system did not actually reside in experience. According to Hofstadter:

By treating words such as “POINT” and “LINE” as if they had only the meaning instilled in them by the propositions in which they occur, we take a step toward the complete formalization of geometry.

This opens many doors to understanding, not the least of which are the non-Euclidean geometries.

*Gödel, Escher, Bach* uses mathematics to address the emergence of the self-reflective ‘I’ in our experience, and Gödel’s theorems are at the heart of the matter. The fact that Gödel’s numbering made it possible for mathematics to make a statement about itself was Hofstadter’s inspiration for *Gödel, Escher, Bach* and the research efforts that followed. It always looked to me like mathematics was alive. I just find more and more reasons to think I was right.

A Quanta article by Erica Klarreich was written in 2014, when Mirzakhani won the Fields Medal. There Klarreich tells us that when Mirzakhani began her graduate career at Harvard, she became fascinated with hyperbolic surfaces and, it seems, this fascination lit the road she would journey. These are surfaces with a hyperbolic geometry rather than a Euclidean one. They can only be explored in the abstract. They cannot be constructed in ordinary space.

I find it worth noting that the ancestry of these objects can be traced back to the 19th century when, while investigating the necessity of Euclid’s postulate about parallel lines, mathematicians brought forth a new world, a new geometry, known today as hyperbolic geometry. This new geometry is sometimes identified with the names of mathematicians János Bolyai and Nikolai Ivanovich Lobachevsky. Bolyai and Lobachevsky independently confirmed its existence when they allowed Euclid’s postulate about parallel lines to be replaced by another. In hyperbolic geometry, given a line and a point not on it, there are many lines going through the given point that are parallel to the given line. In Euclidean geometry there is only one. With this change, Bolyai and Lobachevsky developed a consistent and meaningful non-Euclidean geometry axiomatically. Extensive work on the ideas is also attributed to Carl Friedrich Gauss. One of the consequences of the change is that the sum of the angles of a hyperbolic triangle is strictly less than 180 degrees. The depth of this newly discovered world was ultimately investigated analytically. And Riemann’s famous lecture in 1854 brought definitive clarity to the notion of geometry itself.
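The defect in the angle sum can be checked numerically. The hyperbolic law of cosines, cos(A) = (cosh(b)·cosh(c) − cosh(a)) / (sinh(b)·sinh(c)), gives the interior angles of a hyperbolic triangle from its side lengths; this short sketch (my own illustration) confirms that they sum to strictly less than 180 degrees:

```python
import math

def hyperbolic_angles(a, b, c):
    """Interior angles of a hyperbolic triangle with side lengths a, b, c,
    via the hyperbolic law of cosines."""
    def angle(opposite, s1, s2):
        cos_val = (math.cosh(s1) * math.cosh(s2) - math.cosh(opposite)) / (
            math.sinh(s1) * math.sinh(s2))
        return math.acos(cos_val)
    return angle(a, b, c), angle(b, a, c), angle(c, a, b)

total = math.degrees(sum(hyperbolic_angles(1.0, 1.0, 1.0)))  # equilateral
print(total)          # strictly less than 180
print(180 - total)    # the angular defect, proportional to the area
```

Shrinking the side lengths pushes the sum back toward 180 degrees, which reflects the fact that the hyperbolic plane looks Euclidean at small scales.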

With her doctoral thesis in 2004, Mirzakhani was able to answer some fundamental questions about hyperbolic surfaces and, at the same time, build a connection to another major research effort concerning what is called moduli space. The value of moduli space is the other thing that captured my attention in these articles.

In his more extended piece for Quanta, Kevin Hartnett provides a very accessible description of moduli space that is reproduced here:

In mathematics, it’s often beneficial to study classes of objects rather than specific objects — to make statements about all squares rather than individual squares, or to corral an infinitude of curves into one single object that represents them all.

“This is one of the key ideas of the last 50 years, that it is very convenient to not study objects individually, but to try to see them as a member of some continuous family of objects,” said Anton Zorich, a mathematician at the Institute of Mathematics of Jussieu in Paris and a leading figure in dynamics.

Moduli space is a tidy way of doing just this, of tallying all objects of a given kind, so that all objects can be studied in relation to one another.

Imagine, for instance, that you wanted to study the family of lines on a plane that pass through a single point. That’s a lot of lines to keep track of, but you might realize that each line pierces a circle drawn around that point in two opposite places. The points on the circle serve as a kind of catalog of all possible lines passing through the original point. Instead of trying to work with more lines than you can hold in your hands, you can instead study points on a ring that fits around your finger.

“It’s often not so complicated to see this family as a geometric object, which has its own existence and own geometry. It’s not so abstract,” Zorich said.
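Hartnett’s circle of lines is simple enough to write down. In this sketch of mine, a line through the origin is named by its direction angle in [0, π), and either of its two antipodal intersection points with the unit circle recovers that same angle, so the circle really does catalog the family:

```python
import math

def line_to_circle_points(theta):
    """The two antipodal points where the line at angle theta
    meets the unit circle centered at the origin."""
    p = (math.cos(theta), math.sin(theta))
    return p, (-p[0], -p[1])

def circle_point_to_line(x, y):
    """Recover the line's angle in [0, pi) from a point on the circle."""
    return math.atan2(y, x) % math.pi

theta = 2.5                         # one line through the origin
p, q = line_to_circle_points(theta)
print(circle_point_to_line(*p))     # both antipodal points
print(circle_point_to_line(*q))     # name the same line
```

This is the smallest example of a moduli space: the infinite family of lines has itself become a geometric object, a circle, whose points can be studied in relation to one another.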

This way of collapsing one world into another is particularly interesting. And one of the results in Mirzakhani’s doctoral thesis concerned a formula for the *volume* of the moduli space created by the set of all possible hyperbolic structures on a given surface. Mirzakhani’s research has roots in all of these – hyperbolic geometry, Riemann’s manifold, and moduli space.

Her work, and the work of her colleagues, is often characterized as an analysis of the paths of imagined billiard balls inside a polygon. This is not for the sake of understanding the game of pool better; it’s just one of the ways to see the task at hand. Their strategies are interesting and, I might say, provocative. With this in mind, Hartnett provides a simple statement of process:

Start with billiards in a polygon, reflect that polygon to create a translation surface, and encode that translation surface as a point in the moduli space of all translation surfaces.

The miracle of the whole operation is that the point in moduli space remembers something of the original polygon — so by studying how that point fits among all the other points in moduli space, mathematicians can deduce properties of billiard paths in the original polygon. (emphasis added)

The ‘translation surface’ is just a series of reflections of the original polygon over its edges.

These are beautiful conceptual leaps and they have answered many questions that inevitably concern both mathematics and physics. In 2014, Klarreich’s article captured some of Mirzakhani’s thoughtfulness:

In a way, she said, mathematics research feels like writing a novel. “There are different characters, and you are getting to know them better,” she said. “Things evolve, and then you look back at a character, and it’s completely different from your first impression.”

The Iranian mathematician follows her characters wherever they take her, along story lines that often take years to unfold.

In the article she was described as someone with a daring imagination. Reading about how she experienced mathematics made the nature of these efforts even more striking. There is a mysterious reality in these abstract worlds that grow out of *measuring the earth*. The two and three dimensional worlds of our experience become represented by ideals which then, almost like an Alice-in-Wonderland rabbit hole, lead the way to unimaginable depths. We find purely abstract *spaces* that have *volume*. We get there by looking further and looking longer. I feel a happy and eager inquisitiveness when I ask myself the question: “What are we looking at?” And I would like to find a new way to begin an answer. It seems to me that Mirzakhani loved looking. A last little bit from Klarreich:

Unlike some mathematicians who solve problems with quicksilver brilliance, she gravitates toward deep problems that she can chew on for years. “Months or years later, you see very different aspects” of a problem, she said. There are problems she has been thinking about for more than a decade. “And still there’s not much I can do about them,” she said.
