Mathematical billiards describe the motion of a mass point in a domain with elastic reflections from the boundary. Billiards is not a single mathematical theory… it is rather a mathematician’s playground where various methods and approaches are tested and honed. Billiards is indeed a very popular subject…

In her public lecture Rom-Kedar started at the beginning. She described the familiar motion of billiard balls as they hit the sides of a billiards table. Once set in motion, a billiard ball will move along a straight line with a constant speed until it hits the side of the table. Its path, after it hits the side, is subject to a familiar law about the reflection of light, specifically, that the angle of incidence equals the angle of reflection. The billiard ball obeys the same law. If it happens to hit the other side head-on it will return to the first side along the same path.

Rom-Kedar then asked the first scientific question: what would happen if the ball just kept moving, traveling in straight lines and hitting side after side for an infinitely long time, each time obeying that law of reflection? Would the ball eventually pass through every point on the table? As it turns out, the answer is yes for some trajectories, and no for others. When the ball’s initial direction makes an angle that stands in a rational relationship to the dimensions of the table, the ball locks into a periodic orbit, and the periodic repetition of paths will never allow it to cover all the points on the table. But when the relation between that angle and the dimensions of the table is irrational, the paths are ergodic, i.e., they come arbitrarily close to every point of the given surface or table. These ideal billiards are frictionless point masses, but in every other way their behavior is the same as that of an ordinary billiard ball. I would suggest that there is already something interesting about the correspondence between the rationality of a geometric measure and the action of the ball. Why would there be such a correspondence? It’s like seeing something about numbers through the back of a mirror.
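This rational-versus-irrational split can be watched in a few lines of code. The sketch below is my own illustration, not from the lecture; it uses the standard “unfolding” trick, in which a reflection at a wall is treated as straight-line motion into a mirrored copy of the table.

```python
import math

def billiard_position(x0, y0, vx, vy, t):
    """Position at time t of an ideal billiard in the unit square.
    Unfolding trick: a reflection at a wall is the same as passing
    straight into a mirrored copy of the table, so each coordinate
    is just a triangle wave of uniform straight-line motion."""
    def fold(u):
        u = u % 2.0                         # travel through mirrored copies
        return u if u <= 1.0 else 2.0 - u   # fold back into [0, 1]
    return fold(x0 + vx * t), fold(y0 + vy * t)

# Rational slope (vy/vx = 1): the orbit is periodic -- after time 2.0
# the ball is exactly back where it started.
print(billiard_position(0.25, 0.5, 1.0, 1.0, 2.0))   # (0.25, 0.5)

# Irrational slope (vy/vx = sqrt(2)): sample the orbit and count how
# many cells of a 20x20 grid it has entered -- the count keeps growing
# toward covering the whole table, a hint that the trajectory is dense.
visited = set()
for k in range(20000):
    px, py = billiard_position(0.25, 0.5, 1.0, math.sqrt(2), 0.01 * k)
    visited.add((min(int(px * 20), 19), min(int(py * 20), 19)))
print(len(visited))  # most of the 400 cells
```

With slope 1 the orbit closes up after time 2; with slope √2 the sampled positions keep landing in new cells of the grid, the numerical shadow of a dense trajectory.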

As it turns out, periodic behavior is fairly rare; ergodic behavior is far more common. There’s a nice narrative about various approaches to this specialization in a 2014 Plus Magazine article by Marianne Freiberger.

In the 1980s mathematicians proved that for the vast majority of initial directions the trajectory will be much wilder: not only will it not retrace its steps, but it will eventually explore the whole of the table, getting arbitrarily close to every point on it. What is more, a typical trajectory will visit each part of the table in equal measure: if you take two regions of the table whose areas are equal, then the trajectory will spend an equal amount of time in both. This behaviour is a consequence of billiards being ergodic. By “vast majority” mathematicians mean that if you pick a direction at random, it will almost certainly behave in this ergodic way.

The absence of a pattern in ergodic behavior makes it very difficult to predict where the ball, or point, might be after some specified amount of time. A computer program could run all the paths fast enough to see what happens, but in true chaotic fashion a very slight change in the direction of the initial trajectory will dramatically change the ball’s later positions. But, as Freiberger explains, because many dynamic physical systems are ergodic, ergodicity does give us a handle on something other than the position of a particular point over time. Rather than tracing the path of a point, you can accurately predict what proportion of its time it spends in a certain region of the table. If it’s a gas you are looking at, then you might not be able to say exactly where its many constituent molecules are at any given moment, but you can predict things like its temperature or pressure. So, as chaotic systems go, ergodicity is actually a good thing.

As always happens, mathematicians hunt for all of the generalities associated with all of the imagined, ideal possibilities, and changing the shape of the table introduces a lot of them. Instead of a rectangle, the table could be triangular, hexagonal or L-shaped. It could be round or elliptical. Rom-Kedar said that with these variations, questions about what will happen become “more delicate.” There are many more periodic trajectories in curved figures like circles and ellipses, and most of these trajectories do not explore all of the table.

It is remarkable that billiards models effectively address many phenomena in the physical sciences that are already described by alternative mathematical models, as well as open questions in mathematics, even number theory. Physicists use them as close approximations of particle forces and movement, and they are relevant to any system exhibiting chaotic behavior. And, to be clear, billiard models are not restricted to objects on the plane; they have been developed on various surfaces, including Riemann surfaces.

There is something beautiful about all of this. An observation of a very specific and pretty limited physical event (a billiard ball on a table) inspires the thoughtful exploration of imagined ideals that involve infinite times and are not limited by physicality. These abstractions are a product of looking through the physical situation to the endless possibilities captured by ideals. Then these thorough investigations of purely idealized possibilities become a way to look at a surprising number of unrelated physical (and mathematical) phenomena. How does the human intellect manage this? And what motivates us to do things like this? It’s beautiful and fascinating.

The authors argue, convincingly, that while individuals diagnosed with dyslexia may have difficulty with the symbolic representation of words, they seem to excel at 3-dimensional spatial reasoning. I thought of my daughter, whose dyslexia was not diagnosed till her freshman year in high school, but who, even as a 5-year-old, seemed to have a remarkable ability to know which way to go when we were driving. She could locate herself pretty easily.

The authors describe the coordinate-like action of grid cells in order to point to one of the neurological components of how we all negotiate 3-dimensional space. I wrote about the action of grid cells in a 2014 post, to suggest that cognitive processes themselves have a mathematical nature. There I made this observation:

Spatial relations are translated into what look like purely temporal ones (the timing of neuron firing). The non-sensory system then stores a coded representation of a sensory one. Here again we see, not the mathematical modeling of brain processes but more their mathematical nature.

Drs. Brock and Fernette Eide cite studies done, from various perspectives, which suggest that the presence of strong spatial reasoning in a dyslexic individual is not developed in order to compensate for verbal difficulties but, rather, it is a strength with which dyslexic individuals are born. As a result, many such individuals have chosen careers in areas that include art, architecture, building, engineering, and computer graphics.

I found one of their observations particularly interesting because it contradicted the perfectly reasonable expectation that strong spatial reasoning skills are accompanied by vivid, mental visual images. But, as it is with mathematics, so it is with the brain. Specifically, it seems that it is possible to separate *knowledge of space* from *spatial images* in an individual’s experience. The authors describe a particular case-study where the individual involved lost his ability to create clear visual mental images, but his spatial reasoning abilities were unaffected. In other words, he could manage spatial relations without visualizing them.

MX was a retired building surveyor living in Scotland who’d always enjoyed a remarkably vivid and lifelike visual imagery system, or “mind’s eye.” Unfortunately, four days after undergoing a cardiac procedure MX awoke to discover that though his vision was normal, when he closed his eyes he could no longer voluntarily call to mind any visual image at all.

MX was tested using a whole series of spatial reasoning and visual memory tasks. As a control, a group of high-visualizing architects performed the same tasks. Surprisingly, it was found that although MX could no longer create any mental visual images while performing these tasks, he scored just as well as the architects did. As he performed the tasks, MX’s brain was also scanned with fMRI technology. In contrast to the architects, who heavily activated the visual centers of their brains while solving these tasks, MX used none of his brain’s visual processing regions.

These studies suggested that while MX had lost his ability to *perceive visual images* when engaging in spatial reasoning, he could still *access spatial information* from his spatial database and apply it to spatial reasoning tasks with no detectable loss of skill.

It is commonplace in mathematics to separate spatial information from the visual images with which it can be associated. Analytic or coordinate geometry, for example, studies geometric figures (or figures in space) using their algebraic representations (i.e. only numbers). So there we have the numerical approach to figures and the visual one. A discipline like abstract algebra creates other kinds of spaces and objects by abstracting away not just the particular numbers (as the variables we learn about in high school algebra do) but also the previous meaning of things like addition. The plus sign comes to stand for anything that obeys the same rules that addition obeys, like a + b = b + a and a + 0 = a. But a, b and 0 are not necessarily numbers. The point is that mathematics manages to keep finding other places where information exists. Mathematics explores structure as fully and deeply as possible.
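That abstraction can be made concrete in a few lines. In this sketch of my own (the function name and sample values are invented for illustration), the addition laws a + b = b + a and a + 0 = a are checked for several very different choices of “plus” and “zero”:

```python
def obeys_addition_laws(op, zero, samples):
    """True if op is commutative on the samples and zero is an identity:
    a + b == b + a  and  a + 0 == a."""
    return all(op(a, b) == op(b, a) and op(a, zero) == a
               for a in samples for b in samples)

# Ordinary addition of numbers
print(obeys_addition_laws(lambda a, b: a + b, 0, [1, 2, 7]))           # True
# Union of sets: the 'zero' is the empty set
print(obeys_addition_laws(lambda a, b: a | b, frozenset(),
                          [frozenset({1}), frozenset({1, 2})]))        # True
# Maximum of numbers: the 'zero' is negative infinity
print(obeys_addition_laws(max, float("-inf"), [3.0, 5.0]))             # True
# String concatenation has an identity ('') but is NOT commutative
print(obeys_addition_laws(lambda a, b: a + b, "", ["x", "y"]))         # False
```

The a, b and 0 in the laws really aren’t necessarily numbers: sets with union and numbers with maximum pass the same test that ordinary addition does.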

The relationship between vision and structure that MX’s experience brought me back to also reminded me of the 2002 AMS article called The World of Blind Mathematicians. The article is full of interesting and unexpected observations of the accomplished blind geometer, Bernard Morin. This is just one of them:

…blind people often have an affinity for the imaginative, Platonic realm of mathematics. For example, Morin remarked that sighted students are usually taught in such a way that, when they think about two intersecting planes, they see the planes as two-dimensional pictures drawn on a sheet of paper. “For them, the geometry is these pictures,” he said. “They have no idea of the planes existing in their natural space.”

Physics theoretician Nima Arkani-Hamed, at the Institute for Advanced Study in Princeton, has recently suggested that maybe space and time are not what we think they are. In a recent interview with Natalie Wolchover, for a New Yorker article, he expressed renewed interest in a point made by Richard Feynman in 1964. [1] Feynman took note of the fact that when considering physical systems, it is very often the case that different explanations will produce equally good predictions of events. For example, when predicting the movements of two objects that are gravitationally attracted to each other, three different approaches will produce the same correct prediction. These are Newton’s law of gravity (which says that objects exert a pull on each other); Einstein’s spacetime (which describes how space bends around massive objects); and the mathematics of what is known as the principle of least action, which holds that moving objects follow the path that uses the least energy and is accomplished in the least time. The fact that it is possible to predict physical behavior with more than one mathematical idea could suggest that physics research is actually testing the mathematics more than the physical world. And so Arkani-Hamed has proposed that the universe is actually the answer to a mathematical question we have not yet discovered. From Wolchover’s article:

“The miraculous shape-shifting property of the laws is the single most amazing thing I know about them,” he told me, this past fall. It “must be a huge clue to the nature of the ultimate truth.”

Arkani-Hamed was encouraged in this when he and his colleagues succeeded in predicting the outcomes of subatomic particle interactions using only the volume of their newly discovered, purely abstract geometric object called the amplituhedron.[2] These volume calculations are made without any reference to physical space or time. If we don’t need space and time to calculate particle interactions then, perhaps, the space and time in which we seem to live are not a fundamental aspect of our reality. Arkani-Hamed is considering that space and time emerge from some deeper structure. If the volume of the amplituhedron encodes the outcomes of particle collisions, maybe mathematical principles, like those that define the intersections of lines and planes, are the real fundamental thing. Arkani-Hamed has also found that celestial patterns that describe the history of the universe can be represented as geometric volumes. Like Max Tegmark, author of Our Mathematical Universe, Arkani-Hamed is inspired by the possibility that ultimately, it will be a “spectacular mathematical structure” out of which the past, present, and future of everything emerge.[3] Tegmark’s argument goes more like this:

Remember that two mathematical structures are equivalent if you can pair up their entities in a way that preserves all relations. If you can thus pair up every entity in our external physical reality with a corresponding one in a mathematical structure…then our external physical reality meets the definition of being a mathematical structure – indeed, the same mathematical structure.

Given these kinds of considerations, I would argue that addressing the question of whether or not mathematics exists independently of us is far more complicated than we ever thought. And the possibility that space and time emerge from something more fundamental (like a spectacular mathematical structure) is one of the more extreme attempts to consolidate the physical with the abstract. Again from Wolchover’s article:

“The ascension to the tenth level of intellectual heaven,” he told me, “would be if we find the question to which the universe is the answer, and the nature of that question in and of itself explains why it was possible to describe it in so many different ways.”

[1] Natalie Wolchover, New Yorker Magazine, February 10, 2019

[2] Nima Arkani-Hamed, Jacob L. Bourjaily, Freddy Cachazo, Alexander B. Goncharov, Alexander Postnikov, Jaroslav Trnka, Scattering Amplitudes and the Positive Grassmannian, arXiv:1212.5605

[3] Natalie Wolchover, Visions of Future Physics, Quanta Magazine, September 22, 2015

Building a convincing account of autopoiesis is a book-length enterprise. And the free energy principle is a mathematically complex idea. So I didn’t do justice to either of them in that post. But they are both important to a sense I’ve had, for a number of years now, that mathematics does have a biological nature. Today, I want to make the argument again, but a little differently. I’ll begin with some references to studies that have caught my attention because they concern *mathematical behavior* in insects. A very recent Science article reported on a study that suggests that bees are capable of responding to symbolic representations of addition and subtraction operations. It was a small study (just 14 bees), but the bees were trained to associate addition with the color blue and subtraction with the color yellow.

Over the course of 100 appetitive-aversive (reward-punishment) reinforced choices, honeybees were trained to add or subtract one element based on the color of a sample stimulus.

The bees were placed at the entrance of a Y-shaped maze, where they were shown several shapes in either yellow or blue. If the shapes were blue, bees got a reward for choosing the set of objects at the end of the maze that was equal to the first number of shapes plus one. If the first set was yellow, they got a reward for choosing the set of shapes at the end of the maze that was equal to the first number of shapes minus one. The alternative, incorrect choice could have more than one shape added, or it could have a smaller number of shapes than the initial set. The bees made the right choice 63% to 72% of the time, much better than random choice would allow.
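How much better than random is that? As a rough, illustrative check (my own arithmetic, not the study’s statistics), assume each trial is an independent choice between the two arms of the maze; a guessing bee’s successes in 100 trials then follow a Binomial(100, 0.5) distribution:

```python
from math import comb

def p_at_least(k, n=100, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance that pure guessing
    produces at least k correct choices out of n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Even the lowest reported success rate, 63 out of 100, is very
# unlikely under guessing.
print(p_at_least(63))  # under 1%
```

A bee at the studies’ low end of performance would match or beat its own score by luck less than one time in a hundred.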

The full study can be found here. In their introduction they explain why these bees are worth looking at.

Honeybees are a model for insect cognition and vision. Bees have demonstrated the ability to learn a number of rules and concepts to solve problems such as “left/right,” “above/below,” “same/different,” and “larger/smaller.” Honeybees have also shown some capacity for counting and number discrimination when trained using an appetitive (reward-only) differential conditioning framework. Recent advances in training protocols reveal that bees perform significantly better on perceptually difficult tasks when trained with an appetitive-aversive (reward-punishment) differential conditioning framework. This improved learning capacity is linked to attention in bees, and attention is a key aspect of advanced numerosity and spatial processing abilities in the human brain. Using this conditioning protocol, honeybees were recently shown to acquire the numerical rules of “greater than” and “less than” and subsequently apply these rules to demonstrate an understanding that an empty set, zero, lies at the lower end of the numerical continuum.

I understand being impressed with the fact that bees can acquire this kind of discriminating ability. But it is not so clear why they are structurally capable of such things. I would argue that it’s because their living relies on the presence of structure that allows these kinds of responses. This is the level that interests me.

Here are a few clips from past posts:

Ants were seen to communicate some kind of numerical information about the location of food.

In the described experiments scouting ants actively manipulated with quantities, as they had to transfer to foragers in a laboratory nest the information about which branch of a ‘counting maze’ they had to go to in order to obtain syrup…

The likely explanation of the results concerning ants’ ability to find the ‘right’ branch is that they can evaluate the number of the branch in the sequence of branches in the maze and transmit this information to each other. Presumably, a scout could pass messages not about the number of the branch but about the distance to it or about the number of steps and so on. What is important is that even if ants operate with distance or with the number of steps, this shows that they are able to use quantitative values and pass on exact information about them.

Zhanna Reznikova and Boris Ryabko, Numerical competence in animals, with an insight from ants. Behaviour, Volume 148, Number 4, pp. 405-434, 2011

*The abstract of a paper published in Nature in 2001 includes this:*

…honeybees can interpolate visual information, exhibit associative recall, categorize visual information, and learn contextual information. Here we show that honeybees can form ‘sameness’ and ‘difference’ concepts. They learn to solve ‘delayed matching-to-sample’ tasks, in which they are required to respond to a matching stimulus, and ‘delayed non-matching-to-sample’ tasks, in which they are required to respond to a different stimulus; they can also transfer the learned rules to new stimuli of the same or a different sensory modality. Thus, not only can bees learn specific objects and their physical parameters, but they can also master abstract inter-relationships, such as sameness and difference.

**Mathematical behavior without a brain?**

But here’s something interesting about the slime mold – the abstract of a paper published in Nature in September of 2000 reads:

The plasmodium of the slime mould Physarum polycephalum is a large amoeba-like cell consisting of a dendritic network of tube-like structures (pseudopodia). It changes its shape as it crawls over a plain agar gel and, if food is placed at two different points, it will put out pseudopodia that connect the two food sources. Here we show that this simple organism has the ability to find *the minimum-length solution between two points in a labyrinth*. (emphasis added)
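What the slime mold converges on is exactly what a breadth-first search computes. A sketch of my own (the little maze is invented for illustration, with # for walls, S for start, G for goal):

```python
from collections import deque

MAZE = ["S..#.",
        ".#.#.",
        ".#...",
        ".##..",
        "...#G"]

def shortest_path_length(maze):
    """Length of the minimum-length route from S to G, by breadth-first
    search: explore the maze in rings of increasing distance, so the
    first time G is reached is along a shortest path."""
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if maze[r][c] == "S")
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if maze[r][c] == "G":
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # goal unreachable

print(shortest_path_length(MAZE))  # 8
```

The organism, of course, runs nothing like this algorithm; the point is only that “minimum-length solution in a labyrinth” is a well-defined mathematical quantity that the mold’s pseudopodia reliably find.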

And here’s another strategy used by researchers that was reported by Tim Wogan in 2010 in Science.

They placed oat flakes (a slime mold favorite) on agar plates in a pattern that mimicked the locations of cities around Tokyo and impregnated the plates with P. polycephalum at the point representing Tokyo itself. They then watched the slime mold grow for 26 hours, creating tendrils that interconnected the food supplies.

Different plates exhibited a range of solutions, but the visual similarity to the Tokyo rail system was striking in many of them… Where the slime mold had chosen a different solution, its alternative was just as efficient.

Autopoietic systems are ones which, through their interactions and transformations, continuously produce or realize the network of processes that defines them. They continuously create themselves. Maturana and Varela proposed that every living system is autopoietic, from individual cells, to the nested autopoietic systems in organs, organisms, and even social organizations. In my last post I connected this interpretation of life to Karl Friston’s free energy principle. But it was pretty sketchy, so I would like to fill it in a little here.

I find it important that the free energy principle has the same circular model of living processes as autopoiesis. But for Friston, the key to a system’s continuously regenerating itself relies on how it manages to maximize expectations and minimize surprise. Minimizing surprise is essentially the same as maintaining a low entropy state, which is synonymous with maintaining one’s structure. (The mathematics of entropy in information theory is easily applied to entropy in thermodynamics.) And so minimizing surprise is the same as minimizing entropy. But the thing that holds it all together, the thing that can formalize the analysis, is the use of Bayesian inference, or statistical models, because this is a way to quantify uncertainty. With all of this, living systems maintain themselves by keeping themselves within a set of expectations (through sensory information, statistical inference, and their own action). If they stray too far from having those expectations met (like a fish out of water), they will no longer exist.
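A toy numerical reading of this, my own illustration rather than Friston’s actual formalism: take “surprise” to be the negative log-probability of a sensed state under the system’s model, so that entropy is just average surprise.

```python
import math

def surprise(p):
    """Surprise (information content) of a state with probability p."""
    return -math.log(p)

def entropy(model):
    """Average surprise over a discrete probability model."""
    return sum(p * surprise(p) for p in model.values() if p > 0)

# A fish's model of where it expects to find itself
fish_model = {"water": 0.999, "air": 0.001}

print(surprise(fish_model["water"]))  # tiny: the expected state
print(surprise(fish_model["air"]))    # huge: a fish out of water
print(entropy(fish_model))            # low entropy overall
```

A fish whose model puts nearly all its probability on “water” experiences almost no surprise there, enormous surprise in “air,” and very low entropy overall; straying into the improbable state is exactly what a persisting system must avoid.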

When I consider the different ways that mathematics is present in bees, ants, and slime molds from the perspective of autopoiesis or free energy, mathematics looks like it’s right in the middle of all the action – in the thick of the organism’s living. It will show up in the interactions and transformations that contribute to life itself (where life is the maintenance of the structural and functional integrity of oneself) because it is part of the regular flow of its living. According to the free energy principle, living systems live by either adjusting their expectations to match the flow of sensations, or adjusting the flow of sensations to match their expectations. It must be true that mathematics is as much a part of this as any biological process.

From my perspective, the notion of structural coupling, which developed out of this framework, has the potential to contribute something important to a philosophy of mathematics. Two or more unities are structurally coupled when they enter into a relatedness that accomplishes their autopoiesis by virtue of ‘a history of recurrent interactions’ that leads to their ‘structural congruence.’ Also true is that every autopoietic system is closed, meaning that it lives only with respect to itself. Whether interactions happen within the internal components of a system, or with the medium in which the system exists, the system is only involved in its own continuous regeneration. The view of cognition proposed by Maturana requires that the nervous system is just such a closed, autopoietic system, which also functions as a component of the organism that contains it.

Mathematician Yehuda Rav used these ideas to propose a philosophy of mathematics (which I referenced in a 2012 post). In an essay with the title *Philosophical Problems of Mathematics in the Light of Evolutionary Epistemology*, Rav writes:

Thus, Maturana (1980, p. 13) writes: “Living systems are cognitive systems, and living as a process is a process of cognition”. What I wish to stress here is that there is a continuum of cognitive mechanisms, from molecular cognition to cognitive acts of organisms, and that some of these fittings have become genetically fixed and are transmitted from generation to generation. Cognition is not a passive act on the part of an organism, but a dynamic process realized in and through action.

When we form a representation for possible action, the nervous system apparently treats this representation as if it were a sensory input, hence processes it by the same logico-operational schemes as when dealing with an environmental situation. From a different perspective, Maturana and Varela (1980, p. 131) express it this way: “all states of the nervous system are internal states, and the nervous system cannot make a distinction in its process of transformations between its internally and externally generated changes.”

Thus, the logical schemes in hypothetical representations are the same as the logical schemes in coordination of actions, schemes which have been tested through eons of evolution and which by now are genetically fixed.

As it is a fundamental property of the nervous system to function through recursive loops, any hypothetical representation which we form is dealt with by the same ‘logic’ of coordination as in dealing with real life situations. Starting from the elementary logico-mathematical schemes, a hierarchy is established. Under the impetus of socio-cultural factors, new mathematical concepts are progressively introduced, and each new layer fuses with the previous layers. In structuring new layers, the same cognitive mechanisms operate with respect to the previous layers as they operate with respect to an environmental input. … The sense of reality which one experiences in dealing with mathematical concepts stems in part from the fact that in all our hypothetical reasonings, the object of our reasoning is treated by the nervous system by means of cognitive mechanisms which have evolved through interactions with external reality.

Mathematics is a singularly rich cognition pool of mankind from which schemes can be drawn for formulating theories which deal with phenomena which lie outside the range of daily experience, and hence for which ordinary language is inadequate.

Rav is imagining the development of mathematics as a feature of human cognition. But the perspective proposed by Maturana includes a theory of language. For Maturana, language is not a thing, and the essence of what we call language is not in the words or the grammar. Language happens as we live in the units that our coupling defines – through living systems, interlocked by structural congruences, that build unities. We are languaging beings the way we are breathing beings.

My experience with mathematics has suggested to me that, like words and grammar, the symbolic representation of mathematics is secondary to what mathematics is. Mathematics also seems to happen. And Maturana’s emphasis on autopoiesis and structural coupling has suggested to me that mathematics, like language, happens through the recursive coordination of behaviors. But perhaps unlike language, the relational dynamics that bring it about are somehow fed by the more fundamental structures in the physical world (both living and non-living), to which we are coupled, rather than by the features of the day-to-day experience that we share.

Conceptually, the view of biology proposed by Maturana is significantly different from mainstream thinking in the biological sciences. One of the most important differences is the way living systems are each bounded by their individual autopoietic processes and, at the same time, nested within each other, infinitely extending living possibilities. In my opinion, this particular aspect of their thinking is the most promising, in the sense that it has the greatest potential to produce something new.

A recent article in Wired about the work of Karl Friston suggested to me that I might be right. Friston, a neuroscientist who has made important contributions to neuroimaging technology, is the author of an idea called the free energy principle. Free energy is the difference between the states a living system expects to be in, and the states that its sensors determine it to be in. Another way of saying it is that when free energy is minimized, surprise is minimized. For Friston, a biological system (Maturana’s unity) that resists disorder and dissolution (is autopoietic) will adhere to the free energy principle – “whether it’s a protozoan or a pro basketball team.”

Friston’s unities are separated by what are called Markov blankets.

Markov is the eponym of a concept called a Markov blanket, which in machine learning is essentially a shield that separates one set of variables from others in a layered, hierarchical system. The psychologist Christopher Frith—who has an h-index on par with Friston’s—once described a Markov blanket as “a cognitive version of a cell membrane, shielding states inside the blanket from states outside.”

In Friston’s mind, the universe is made up of Markov blankets inside of Markov blankets. Each of us has a Markov blanket that keeps us apart from what is not us. And within us are blankets separating organs, which contain blankets separating cells, which contain blankets separating their organelles. The blankets define how biological things exist over time and behave distinctly from one another. Without them, we’re just hot gas dissipating into the ether.

The free energy principle is mathematical, grounded in physics, Bayesian statistics, and biology. It involves action, the living system’s response to surprise, in addition to the system’s predictive abilities. This is one reason the theory has far-reaching potential for application. The audience that the free energy principle attracts is consistently expanding.

Today I looked back at a post that I wrote in 2010 with the title, *Archetypes, Image Schemas, Numbers and the Season*. The subject of the post is a chapter from the book *Recasting Reality: Wolfgang Pauli’s Philosophical Ideas and Contemporary Science*. The chapter was written by cognitive scientist Raphael Nuñez, who uses Pauli’s collaboration with Jung to address Pauli’s philosophy of mathematics. Jung understood ‘number’ in terms of archetypes, primitive mental images that are part of our collective unconscious. But Nuñez seems most interested in addressing Platonism. Pauli’s interest in Jung doesn’t address Platonism directly, but it is nonetheless implied in many of the things he says. As a cognitive scientist, Nuñez rejects Platonism. Despite the complexity of mathematical abstractions, he argues that the discipline is heavily driven by human experience. While his observations of Pauli’s interest in Jung’s psychology are nicely laid out, and some parallels to his own theory are highlighted, Pauli’s ideas don’t really contribute to the non-mystical position that Nuñez has staked out.

But today I looked at the entire text to which Nuñez contributed, and could see that there are a number of things in Pauli’s view that address my own preoccupation with the nature of mathematics, and more deeply than does the question of whether mathematics exists independent of human experience or not. Pauli was preoccupied with reconciling opposites, finding unity, making things whole, and was strongly motivated to think about the problem of how scientific knowledge, and what he called redemptive knowledge, are related to each other. I find it fairly plausible that mathematics could help with this since it exists in the world of ideal images as well as the world of physical measurement and logical reasoning.

Today I found a really nice essay by physicist Hans von Baeyer with the title *Wolfgang Pauli's Journey Inward*. It tells a more intimate story of Pauli's ardent search for what's true, and is well worth the read.

In time Pauli came to feel that the irrational component of his personality, represented by the black, female yin, was every bit as significant as its rational counterpart. Pauli called it his shadow and struggled to come to terms with it. What he yearned for was a harmonious balance of yin and yang, of female and male elements, of the irrational and the rational, of soul and body, of religion and science.

During his lifetime, Pauli’s fervent quest for spiritual wholeness was unknown to the public and ignored by his colleagues. Today, with the debate between science and religion once more at high tide, Pauli’s visionary pursuit speaks to us with renewed relevance.

I particularly enjoyed von Baeyer’s description of the famous Exclusion Principle for which Pauli received a Nobel Prize in 1945. It went like this:

The fundamental question had been why the six electrons in the carbon atom, say, don’t all carry the same amount of energy — “why their quantum numbers don’t have identical values… it should be expected that the electrons would all seek the same lowest possible energy configuration, the way water seeks the lowest level, and crowd into it.” If this rule applied to electrons in atoms, there would be very little difference between, say, carbon with its six and nitrogen with its seven electrons. There would be no chemistry.

Pauli answered the question by decree: the electrons in an atom, he claimed, don’t have the same quantum numbers because they can’t. If one electron is labeled with, say, the four quantum numbers (5, 2, 3, 0) the next electron you add must carry a different label, say (5, 2, 3, 1) or perhaps (6, 2, 3, 0). He proposed no new force between electrons, no mechanism, not even logic to support this injunction. It was simply a rule, imperious in its peremptoriness, and unlike anything else in the entire sweep of modern physics. Electrons avoid each other’s private quantum numbers for no reason other than, as one physicist put it, “for fear of Pauli.” …With the invention of the fourth quantum number and the exclusion principle Pauli opened the way for the systematic construction of Mendeleev’s entire periodic table.
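The bookkeeping this rule imposes is easy to sketch in code. The toy example below is my own illustration, not real atomic physics; the state ordering is a crude stand-in for actual energy levels. It fills an atom electron by electron, each taking a quantum-number tuple that no other electron occupies:

```python
def fill_electrons(num_electrons):
    """Assign each electron the lowest unoccupied state, where states
    (n, l, m, s) are ordered by a crude energy proxy (n + l)."""
    states = []
    for n in range(1, 4):               # principal quantum number
        for l in range(n):              # orbital quantum number: 0..n-1
            for m in range(-l, l + 1):  # magnetic quantum number
                for s in (0, 1):        # spin, encoded 0/1 for -1/2, +1/2
                    states.append((n, l, m, s))
    states.sort(key=lambda q: q[0] + q[1])  # rough energy ordering
    occupied = states[:num_electrons]
    assert len(set(occupied)) == len(occupied)  # no two share a state
    return occupied

carbon = fill_electrons(6)    # six electrons, six distinct labels
nitrogen = fill_electrons(7)  # one more electron, one more label
```

With six electrons and seven electrons you get six and seven distinct labels, the difference between carbon and nitrogen that the quoted passage describes.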

What struck me from reading von Baeyer's account was the depth of Pauli's concern. And the boldness of his Exclusion Principle somehow makes him seem particularly trustworthy. The reconciliation he sought was not one that just allowed for the accepted coexistence of different concerns, but rather one that changed both of them to accommodate something new. As von Baeyer points out, physics has become more and more dominated by "the manipulation of symbols that facilitate thinking but bear only an indirect relationship to observable facts." Pauli seemed to expect that the symbol was the link between the rational and the irrational. This would easily support my hunch. He seemed to expect that science would be able to deal with the soul, and that the soul would in turn inform science.

Eventually, he hoped, science and religion, which he believed with Einstein to have common roots, will again be one single endeavor, with a common language, common symbols, and a common purpose.

This is what I expect. And, at the moment, mathematics seems to be my most trustworthy guide. It lives on the boundary that we think we see between pure thought and the material, between mind and matter. Pauli's conviction is particularly reassuring.

My affinity for this branch of mathematics may have been helped along by the fact that my favorite teacher in graduate school was a topologist. Sylvain Cappell, still at the Courant Institute of Mathematical Sciences at NYU, introduced me to topological ideas. I've saved a Discover Magazine article from 1993 that landed in my mailbox not long after I left Courant. In it Cappell discusses the motivation and effectiveness of a topological approach to problems. The late Fields Medalist William Thurston also contributed to that article. Thurston suggested that our difficulty with perceiving the higher dimensions that are a fundamental consequence of topological ideas is primarily psychological. He believed that the mind's eye is divided between linear, analytic thinking and geometric visualization.

Algebraic equations, for example, are like sentences. The formula that gives you the volume of a cube, x times x times x, can easily be communicated in words. But the shape of the cube is another matter. You have to see it.

When we talk about higher-dimensional spaces, Thurston says, we’re learning to think in and plug into this other spatial processing system. The going back and forth is difficult because it involves two really foreign parts of the brain.

Emphasizing the value of "seeing it," Cappell makes the argument that even with a 2-dimensional graph relating something like interest to consumer spending, where neither has anything to do with geometry, the shape of the line that represents their relationship gives you a better grasp of the situation.

The same holds true in five or even ten-dimensional models. Logically, it may seem like the geometry is lost, that it’s just numbers, says Cappell. But the geometry can tell you things that the numbers alone can’t: how a curve reaches a maximum, how you get from there to here. You can see hills and valleys, sharp turns and smooth transitions; holes in a doughnut-shaped nine-dimensional model might indicate realms where no solutions lie.

A few days ago, an article in Forbes told us that topology can help us see something about how to reduce slum conditions in cities. Researchers, it explains, opt for a "shape-based" understanding of cities.

According to the team's research, when two or more city sections have the same number of blocks, they're topologically equivalent and can be deformed into each other. Using that approach, sections of Mumbai can be deformed into Las Vegas suburbs or even areas of Manhattan.

How are slums and planned cities topologically different? The difference emerges essentially from better or worse access to the infrastructure, and the researchers claim that once cities are understood as topological spaces, the access issue can be resolved mathematically.

Their approach uses an algorithm that can be applied to any city block, they note in their paper. It applies tools from topology and graph theory — the branch of mathematics concerned with networks of points connected by lines — to neighborhood maps to diagnose and “solve critical problems of development,” they wrote.
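A minimal sketch of this way of seeing a neighborhood (my own toy version, not the researchers' actual algorithm) treats parcels as nodes of a graph and shared boundaries as edges, and measures access as the number of parcels one must cross to reach the street network:

```python
from collections import deque

def hops_to_street(graph, street_nodes):
    """Breadth-first search from every street node at once; returns the
    minimum number of parcels each node must cross to reach a street."""
    dist = {node: 0 for node in street_nodes}
    queue = deque(street_nodes)
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

# A toy block: parcels A through D, where only A touches the street "S".
block = {
    "S": ["A"],
    "A": ["S", "B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C"],
}
access = hops_to_street(block, ["S"])
# Interior parcels like D have poor access; adding an edge ("D", "S")
# would model cutting a new path to the street.
```

On this toy picture, "resolving the access issue" means adding the fewest edges that bring every parcel's hop count below some threshold, which is the flavor of reorganization the article describes.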

This is an interesting peek at what can happen in or with mathematics. Topology itself requires a willingness to look differently. Using topological ideas to analyze or to address the development of slum conditions in sprawling cities is unexpected. It's a geometry being applied to a space, but not directly, not because it resembles the space. It says more about how abstractions can give us greater access to the real world. Or how the mind's eye can see.

Both the Discover and Forbes articles are worth a look.

I want to let subscribers know that I am making some hosting changes. I will be posting a blog tomorrow. If you don’t receive notice of the post, I encourage you to resubscribe.

Thanks for your interest in the site.

Joselle

Researches starting from general notions…can only be useful in preventing this work from being hampered by too narrow views, and progress in knowledge of the interdependence of things from being checked by traditional prejudices.

Siegfried seems not so much interested in talking about mathematics itself as he is in illustrating the significance of a change of perspective within mathematics. Most young students of mathematics would never imagine that there could be more than one mathematical way to think, or even that within the discipline there is mathematical thinking that’s not just problem solving. Referring to Riemann’s famous lecture, given in 1854, that essentially redefined what we mean by geometry, Siegfried says this:

In that lecture, Riemann cut to the core of Euclidean geometry, pointing out that its foundation consisted of presuppositions about points, lines and space that lacked any logical basis. As those presuppositions are based on experience, and “within the limits of observation,” the probability of their correctness seems high. But it is necessary, Riemann asserted, to “inquire about the justice of their extension beyond the limits of observation, on the side both of the infinitely great and of the infinitely small.” (emphasis added)

Physics gets us beyond the limits of observation with extraordinarily imaginative instruments, detectors of all sorts. But how is it that mathematics can get us beyond those limits on its own? How is it possible for Riemann to see more without getting outside of himself? I don’t think this is the usual way the question is posed, but I have become a bit preoccupied with understanding how it is that purely abstract formal structures, that we seem to build in our minds, with our intellect, can get us beyond what we are able to observe. Again from Siegfried:

Riemann’s insights stemmed from his belief that in math, it was important to grasp the ideas behind the calculations, not merely accept the rules and follow standard procedures. Euclidean geometry seemed sensible at distance scales commonly experienced, but could differ under conditions not yet investigated (which is just what Einstein eventually showed)…

…Riemann’s geometrical conceptions extended to the possible existence of dimensions of space beyond the three commonly noticed. By developing the math describing such multidimensional spaces, Riemann provided an essential tool for physicists exploring the possibility of extra dimensions today.

Riemann appears in a number of my posts. I’ve taken particular interest in the significance of his work in part because in his famous lecture on geometry he cited the philosopher Johann Friedrich Herbart as one of his influences. Herbart pioneered early studies of perception and learning, and his work played an important role in 19th century debates about how the mind brings structure to sensation. In his book *Labyrinth of Thought*, Jose Ferreiros takes up Riemann’s introduction of the notion of manifold and says this:

Herbart thought that mathematics is, among the scientific disciplines, the closest to philosophy. Treated philosophically, i.e., conceptually, mathematics can become a part of philosophy. According to Scholz, Riemann’s mathematics cannot be better characterized than as a “philosophical study of mathematics” in the Herbartian spirit, since he always searched for the elaboration of central concepts with which to reorganize and restructure the discipline and its different branches, as Herbart recommended [Scholz 1982a, 428; 1990a].

I think the way I first grappled with the depth of Riemann’s insights was to consider that he was somehow guided by the cognitive processes that govern perception, despite the fact that they operate outside our awareness. Some blend of experience, psychology, and rigor worked to establish the clarity of his view. I wrote a piece for Plus magazine on this very topic.

Herbart’s thinking foreshadows what studies in cognitive science now show us about how we perceive space and magnitude — it may be that Riemann’s mathematical insights reflect them.

More recently I’ve become focused on asking a related question, but maybe from a different angle, and that is what is actually happening when we explore mathematical territories? How is this internal investigation accomplished? What does the mind think it’s doing? These questions are relevant because it is clear that there is significantly more going on in mathematics than calculation and problem solving. Riemann’s groundbreaking observations make that clear. The questions I ask may sound like impossible questions to answer, but even just organizing an approach to them is likely to involve, at the very least, cognitive science and neuroscience, mathematics, and epistemology, which makes them clearly worthwhile.

About Atiyah’s breakthrough, an NBC News article said this:

“Atiyah is a wizard of a mathematician, but there’s a lot of skepticism among mathematicians that his wizardry has been sufficient to crack the Riemann Hypothesis,” John Allen Paulos, a professor of mathematics at Temple University in Philadelphia and the author of several popular books on mathematical topics, told NBC News MACH in an email.

This skepticism is present in almost every article I read, but Atiyah remains confident and is promising to publish a full version of the proof.

I read today about Constantinos Daskalakis, who was awarded the Rolf Nevanlinna Prize at the International Congress of Mathematicians 2018 for his outstanding contributions to the mathematical aspects of information sciences. In particular, Daskalakis made some new observations about some older ideas – namely game theory and what is called Nash equilibrium. Marianne Freiberger explains Nash equilibrium in Plus Magazine:

When you throw together a collection of agents (people, cars, etc.) in a strategic environment, they will probably start by trying out all sorts of different ways of behaving — all sorts of different strategies. Eventually, though, they all might settle on the single strategy that suits them best in the sense that no other strategy can serve them better. This situation, when nobody has an incentive to change, is called a Nash equilibrium.
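For small games the definition can be checked directly. The sketch below, a toy illustration rather than anything from the article, brute-forces the pure-strategy Nash equilibria of a two-by-two game by testing whether either player could do better by unilaterally switching:

```python
def pure_nash(payoffs_a, payoffs_b):
    """payoffs_x[i][j]: payoff to player x when A plays i and B plays j.
    A profile (i, j) is an equilibrium when neither player can improve
    by switching strategies on their own."""
    equilibria = []
    for i in range(2):
        for j in range(2):
            a_happy = payoffs_a[i][j] >= max(payoffs_a[k][j] for k in range(2))
            b_happy = payoffs_b[i][j] >= max(payoffs_b[i][k] for k in range(2))
            if a_happy and b_happy:  # nobody has an incentive to change
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
a = [[3, 0], [5, 1]]
b = [[3, 5], [0, 1]]
found = pure_nash(a, b)  # mutual defection (1, 1) is the only equilibrium
```

In the prisoner's dilemma, mutual cooperation pays both players more, yet only mutual defection survives the "nobody has an incentive to change" test, which is exactly the sense in which an equilibrium is stable without being positive.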

A Nash equilibrium is not necessarily positive, it’s just stable. Nash proved in 1950 that no matter how complex a system is, it is always possible to arrive at an equilibrium. But a question remained – knowing that a system can stabilize doesn’t tell us whether it will. And nothing in Nash’s proof tells us how these states of equilibrium are constructed, or how they happen. People have searched for algorithms that could find the Nash equilibrium of a system, and they found some, but the time it would take to do the computations, or to complete the task, wasn’t clear. Daskalakis explains in Freiberger’s article:

“My work is a critique of Nash’s theorem coming from a computational perspective,” he explains. “What we showed is that [while] an equilibrium may exist, it may not be attainable. The best supercomputers may not be able to find it. This theorem applies to games that we play, it applies to road networks, it applies to markets. In many complex systems it may be computationally intractable for the system to find a stable operational mode. The system could be wandering around the equilibrium, or be far away from the equilibrium, without ever being drawn to a stable state.”

Daskalakis’ work alerts people working in relevant industries that a Nash equilibrium, while it exists, may be essentially unattainable because the algorithms don’t exist, or because the complexity of the problem is just too difficult. These considerations are relevant to people who design things like road systems, or online products like dating sites or taxi apps.

When designing such a system, you want to optimise some objective: you want to make sure that traffic flows consistently, that potential dates are matched up efficiently, or that taxi drivers and riders are happy.

“If you are counting on an equilibrium to deliver this happy state of affairs, then you better make sure the equilibrium can actually be reached. You better be careful that the rules that you set inside your system do not lead to a situation where our theorem applies,” says Daskalakis. “Your system should be clean enough and have the right mathematical structure so that equilibria can arise easily from the interaction of agents. [You need to make sure] that agents are able to get to equilibrium and that in equilibrium the objectives are promoted.”

Another option is to forget about the equilibrium and try to guarantee that your objective is promoted even [with] dynamically changing behaviour of people in your system.
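Daskalakis's point, that an equilibrium can exist without the system ever being drawn to it, can be illustrated with matching pennies. In this toy sketch (my own, not from the article), naive best-response play never settles even though a mixed equilibrium exists:

```python
def best_response_cycle(steps=8):
    """Players alternate moves; each switches to the best response
    against the other's current choice. Returns the visited profiles."""
    # Matching pennies: A wins on a match, B wins on a mismatch.
    a, b = 0, 0
    visited = []
    for t in range(steps):
        if t % 2 == 0:      # B moves and wants a mismatch
            b = 1 - a
        else:               # A moves and wants a match
            a = b
        visited.append((a, b))
    return visited

profile_path = best_response_cycle()
# The play revisits the same four profiles forever: no pure profile is
# ever stable, even though the mixed equilibrium (1/2, 1/2) exists.
```

The cycle is the cartoon version of a system "wandering around the equilibrium" without being drawn to a stable state; Daskalakis's results show that in large games even sophisticated algorithms, not just this naive dynamic, can fail to reach equilibrium in any feasible time.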

This confluence of game theory, complexity theory and information science has made it possible to see the abstract more clearly, or has made a mathematical notion somehow measurable. The work includes a look at how hard the solution to a problem can be, and whether or not the ideal can be actualized. What struck me about the discussion in *Plus* was the fact that Daskalakis’ work was thought to address the difference between the mathematical existence demonstrated by Nash and its real world counterparts, maybe even whether or how they are related. These things touch on my questions. Nash’s proof is a non-constructive existence proof. It doesn’t build anything, it just finds something to be true. Daskalakis is a computer scientist and an engineer. He expects to build things. But the problem is attacked with mathematics. His effort spans game theory in mathematics, complexity theory (a branch of mathematics that classifies problems according to how hard they are) and information sciences. There is an interesting confluence of things here. And it didn’t answer any of the questions I have. But it encouraged me. I also like this quote from a recent Quanta Magazine article about Daskalakis:

The decisions the 37-year-old Daskalakis has made over the course of his career — such as forgoing a lucrative job right out of college and pursuing the hardest problems in his field — have all been in the service of uncovering distant truths. “It all originates from a very deep need to understand something,” he said. “You’re just not going to stop unless you understand; your brain cannot stay still unless you understand.”