Hello everyone,
I want to let subscribers know that I am making some hosting changes. I will be posting a blog tomorrow. If you don’t receive notice of the post, I encourage you to resubscribe.
Thanks for your interest in the site.
Joselle


I was struck today by the title of an article in Science News: “Before his early death, Riemann freed geometry from Euclidean prejudices.” The piece, by science writer Tom Siegfried, was no doubt inspired by the recent claim from award-winning mathematician Michael Atiyah that he has proved the long-standing Riemann hypothesis, one of the most famous unsolved problems in mathematics for close to 160 years. But Siegfried’s article was more about Riemann’s extraordinary insights than it was about Atiyah’s claim (which I’ll get to before I’m done here). First, let me say this: by using the term ‘Euclidean prejudices,’ Siegfried is telling us that we can be misled by prejudices even in mathematics. And what prejudices usually do is conceal the truth. The word actually appears in Riemann’s lecture itself. This is from a translation of the lecture by William Kingdon Clifford:
Siegfried seems not so much interested in talking about mathematics itself as he is in illustrating the significance of a change of perspective within mathematics. Most young students of mathematics would never imagine that there could be more than one mathematical way to think, or even that within the discipline there is mathematical thinking that is not just problem solving. Referring to Riemann’s famous lecture of 1854, which essentially redefined what we mean by geometry, Siegfried says this:
Physics gets us beyond the limits of observation with extraordinarily imaginative instruments, detectors of all sorts. But how is it that mathematics can get us beyond those limits on its own? How is it possible for Riemann to see more without getting outside of himself? I don’t think this is the usual way the question is posed, but I have become a bit preoccupied with understanding how purely abstract formal structures, which we seem to build in our minds with our intellect, can get us beyond what we are able to observe. Again from Siegfried:
Riemann appears in a number of my posts. I’ve taken particular interest in the significance of his work, in part because in his famous lecture on geometry he cited the philosopher Johann Friedrich Herbart as one of his influences. Herbart pioneered early studies of perception and learning, and his work played an important role in 19th-century debates about how the mind brings structure to sensation. In his book Labyrinth of Thought, José Ferreirós takes up Riemann’s introduction of the notion of a manifold and says this:
I think the way I first grappled with the depth of Riemann’s insights was to consider that he was somehow guided by the cognitive processes that govern perception, despite the fact that they operate outside our awareness. Some blend of experience, psychology, and rigor worked to establish the clarity of his view. I wrote a piece for Plus magazine on this very topic.
More recently I’ve become focused on asking a related question, but maybe from a different angle, and that is: what is actually happening when we explore mathematical territories? How is this internal investigation accomplished? What does the mind think it’s doing? These questions are relevant because there is significantly more going on in mathematics than calculation and problem solving. Riemann’s groundbreaking observations make that clear. The questions I ask may sound like impossible questions to answer, but even just organizing an approach to them is likely to involve, at the very least, cognitive science and neuroscience, mathematics, and epistemology, which makes them clearly worthwhile. About Atiyah’s claimed breakthrough, an NBC News article said this:
This skepticism is present in almost every article I read, but Atiyah remains confident and is promising to publish a full version of the proof.

I’m not completely sure I understand where my desire to grasp the value of abstractions is taking me, but as I think about mathematics, and more recent trends in the sciences, I keep wanting to get further and further behind what our symbolic reasoning is actually doing, and how it’s doing it. I have this idea that if I can manage to somehow see around, or inside, the products of our minds, I’ll see something new. Maybe I can simplify the question that my own mind keeps asking, but that won’t help answer it. Here’s the thing I’m stuck on. Everything that we do (in the arts and the sciences) is based on a continuous flow of thoughts that amass into threads of thought, threads that have now moved through the hands and minds of countless individuals, over thousands of years, creating giant fabrics of meaningful information. What are these threads made from? How do they develop? How are they related to everything else in nature? And what might the giant fabrics of human culture have to do with everything else?

While we generally distinguish symbols from reality, a very large part of our reality is now built more from ideas than from concrete or wood, using the symbols that make the sharing of these ideas possible. Even language captures relations, among sentiments and experiences, in symbols. These relations, together with a kind of reasoning or logic, build our social and political systems. We’re so immersed in our languages and our symbols that we don’t even see them. And I don’t know if we can see them in the way that could address my curiosities. But I’m convinced that we can see more than we do. And the steady growth of information theory (and its more immediate relatives, like algorithmic information theory and quantum information theory) seems to shed new light on the reality of abstract relations.

Mathematics is distinguished as the discipline that explores purely abstract relations. The fruits of many of these explorations now serve the parts of our world that we try to get our hands on – things like astronomy, engineering, physics, computer science, biology, medicine, and so on. I’m beginning to consider that the information sciences may yet uncover something about why mathematics has been so fruitful. I read today about Constantinos Daskalakis, who was awarded the Rolf Nevanlinna Prize at the International Congress of Mathematicians 2018 for his outstanding contributions to the mathematical aspects of the information sciences. In particular, Daskalakis made some new observations about some older ideas – namely game theory and what is called a Nash equilibrium. Marianne Freiberger explains Nash equilibrium in Plus Magazine:
A Nash equilibrium is not necessarily positive; it’s just stable. Nash proved in 1950 that no matter how complex a system is, it is always possible to arrive at an equilibrium. But a question remained – knowing that a system can stabilize doesn’t tell us whether it will. And nothing in Nash’s proof tells us how these states of equilibrium are constructed, or how they happen. People have searched for algorithms that could find the Nash equilibrium of a system, and they found some, but how long the computations would take wasn’t clear. Daskalakis explains in Freiberger’s article:
Daskalakis’ work alerts people working in relevant industries that a Nash equilibrium, while it exists, may be essentially unattainable – efficient algorithms for finding it may not exist, or the complexity of the problem may simply be too great. These considerations are relevant to people who design things like road systems, or online products like dating sites or taxi apps.
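To make “finding a Nash equilibrium” concrete, here is a minimal sketch (my own illustration, not anything from Daskalakis’ papers) that brute-forces the pure-strategy equilibria of a two-player game. Nash’s theorem only guarantees an equilibrium in mixed strategies, and it is the computation of those that Daskalakis and his colleagues showed to be intractable in general.

```python
from itertools import product

def pure_nash_equilibria(payoff_a, payoff_b):
    """Find all pure-strategy Nash equilibria of a two-player game.

    payoff_a[i][j] and payoff_b[i][j] are the payoffs to players A and B
    when A plays row i and B plays column j.
    """
    rows, cols = range(len(payoff_a)), range(len(payoff_a[0]))
    equilibria = []
    for i, j in product(rows, cols):
        # (i, j) is an equilibrium when neither player can do better
        # by deviating unilaterally.
        a_stays = all(payoff_a[i][j] >= payoff_a[k][j] for k in rows)
        b_stays = all(payoff_b[i][j] >= payoff_b[i][k] for k in cols)
        if a_stays and b_stays:
            equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
A = [[-1, -3], [0, -2]]
B = [[-1, 0], [-3, -2]]
print(pure_nash_equilibria(A, B))  # [(1, 1)]: mutual defection, stable but not good
```

Even this toy example illustrates the point above: the one equilibrium it finds is stable, but it is the worst joint outcome for the two players.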
This confluence of game theory, complexity theory, and information science has made it possible to see the abstract more clearly, or has made a mathematical notion somehow measurable. The work includes a look at how hard the solution to a problem can be, and whether or not the ideal can be actualized. What struck me about the discussion in Plus was the fact that Daskalakis’ work was thought to address the difference between the mathematical existence demonstrated by Nash and its real-world counterparts, maybe even whether or how they are related. These things touch on my questions. Nash’s proof is a nonconstructive existence proof. It doesn’t build anything; it just establishes that something is true. Daskalakis is a computer scientist and an engineer. He expects to build things. But the problem is attacked with mathematics. His effort spans game theory in mathematics, complexity theory (a branch of mathematics that classifies problems according to how hard they are), and the information sciences. There is an interesting confluence of things here. It didn’t answer any of the questions I have, but it encouraged me. I also like this quote from a recent Quanta Magazine article about Daskalakis:
We all generally know the meaning of abstraction. We all have some opinion, for example, about the value of abstract painting. And I’ve heard from many that mathematics is too abstract to be understood or even interesting. (But I must admit, it is exactly this about mathematics that keeps me so captivated.) An abstraction is usually thought of as the general idea as opposed to the particular circumstance. I thought today of bringing a few topics back into focus, all of which I’ve written about before, to highlight something about knowledge – what it is, or how we seem to collect it. This particular story centers around the idea of entropy. First, here’s as brief a description of the history of the mathematics of this idea as I can manage at the moment.

In the history of science and mathematics, two kinds of entropy were defined – one in physics and one in information theory. Mathematical physicist Rudolf Clausius first introduced the concept of entropy in 1850, and he defined it as a measure of a system’s thermal (or heat) energy that was not available to do work. It was a fairly specific idea that provided a mathematical way to pin down the variations in physical possibilities. I’ve read that Clausius chose the word entropy because of its Greek ties to the word transformation, and because it sounded like energy. The mathematical statement of entropy provided a clear account, for example, of how a gas confined in a cylinder will freely expand if released by a valve, but can also be made to push a piston in response to the pressure of something that confines it, as in our cars. The piston event is reversible, while the free expansion is not. In the piston event, however, some amount of heat or energy is always lost to entropy. And so Clausius’ version of the second law of thermodynamics says that spontaneous change, for irreversible processes in isolated systems, always moves in the direction of increasing entropy.

In 1948 Claude Shannon initiated the development of what is now known as information theory when he formalized a mathematics of information based on the observation that transmitted messages could be encoded with just two bursts of energy – on and off. In this light he defined an information entropy, still referred to as Shannon’s entropy, which is understood as a measure of the randomness in a message, or a measure of the absence of information. Shannon’s formula was based on the probabilities of symbols (or letters of the alphabet) showing up in the message.

In the late 1800s, James Clerk Maxwell developed the statistical mechanical description of entropy in thermodynamics, in which macroscopic phenomena (like temperature and volume) were understood in terms of the microscopic behavior of molecules. Soon after, physicist and philosopher Ludwig Boltzmann generalized Maxwell’s statistical understanding of the action and formalized the logarithmic expression of entropy that is grounded in probabilities. In that version, entropy is proportional to the logarithm of the number of microscopic arrangements (the ways we can’t see) consistent with a given macroscopic state (the thing we do see). In other words, we can come to a statistical conclusion about how the behavior of an immense number of molecules that we don’t see will affect the events we do see. It is Boltzmann’s logarithmic equation (which appears on his gravestone) that resembles Shannon’s equation, allowing both entropies to be understood, essentially, as probabilities related to the arrangement of things.
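For readers who want to see the resemblance directly, here are the two formulas in their standard textbook forms (added here for reference):

$$S = k_B \ln W \qquad\qquad H = -\sum_i p_i \log_2 p_i$$

Boltzmann’s entropy $S$ is proportional to the logarithm of $W$, the number of microscopic arrangements consistent with a given macroscopic state. Shannon’s entropy $H$ averages the logarithms of the symbol probabilities $p_i$. Both measure, in effect, how many ways things could be arranged.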
It is certainly true that the mathematics that defined entropy at each stage of its development is an abstraction of the phenomenon. However, reducing both the thermodynamic definition and the information theory definition to probabilities related purely to the arrangement of things is another (and fairly significant) level of abstraction. You are likely familiar with the notion that entropy always increases or, as it is often understood, that things always tend to disorder. But, perhaps unexpectedly, it is this ‘arrangement of things’ idea that seems to best explain why eggs don’t uncrack and ice doesn’t unmelt. The number of possible arrangements of atoms in an uncracked egg is far, far smaller than the number of possible arrangements of atoms in a cracked egg, and so far less likely. Aatish Bhatia does a really nice job of explaining this way of understanding things here.

Next, the relationship between information and thermodynamics has had the attention of physicists since James Clerk Maxwell introduced a hypothetical little creature, now known as Maxwell’s demon, who seemed to challenge the second law of thermodynamics. Some discussion of the demon can be found in a post I wrote in 2016. In 2017, a Quanta Magazine article by Philip Ball reviewed the work of physicists, mathematicians, computer scientists, and biologists who explore the computational (or information-processing) aspect of entropy as it relates to biology.
In 1944, physicist Erwin Schrödinger proposed that living systems take energy from their surroundings to maintain nonequilibrium (or to stay organized) by capturing and storing information. He called it “negative entropy.”
Now, physicist Jeremy England is considering pulling biology into physics (or at least some aspect of it) with the suggestion that the organization that takes place in living things is just one of the more extreme possibilities of a phenomenon exhibited by all matter. From an essay written by England:
Finally (not in any true sense, just for the scope of this post), physicist Chiara Marletto has a theory of life based on a new fundamental theory of physics called constructor theory. I wrote a guest blog for Scientific American on constructor theory in 2013. In her essay, also published by Aeon, Marletto explains:
But the constructor itself, the thing that causes a transformation, is abstracted away in constructor theory, leaving only the input/output states. ‘Information’ is the only thing that remains unchanged in each of these transformations, and this is the focus of constructor theory. In constructor theory, this underlying independence of information involves a more fundamental level of physics than particles, waves, and spacetime. And the expectation is that this ‘more fundamental level’ may be shared by all physical systems (another generality). The input/output states of constructor theory are expressed as “ordered pairs of states” and are called construction tasks. The idea is no doubt a distant cousin of the ordered pairs of numbers we learned about in high school, along with the one-to-oneness and the compositions taught in precalculus! And constructor theory is an algebra – a new one, certainly, but an algebra nonetheless. This algebra is not designed to systematize current theories, but rather to find their foundation and then open a window onto things that we have not yet seen. According to Marletto:

“The early history of evolution is, in constructor-theoretic terms, a lengthy, highly inaccurate, non-purposive construction that eventually produced knowledge-bearing recipes out of elementary things containing none. These elementary things are simple chemicals such as short RNA strands…Thus the constructor theory of life shows explicitly that natural selection does not need to assume the existence of any initial recipe, containing knowledge, to get started.”

Marletto has also written on the constructor theory of thermodynamics, in which she argues that constructor theory highlights a relationship between information and the first law of thermodynamics, not just the second.

This story about information, thermodynamics, and life certainly suggests something about the value of abstraction. As a writer, I’m not only interested in the progression of scientific ideas, but also in the power of generalities that seem to produce new vision, as well as amplify the details of what we already see. It seems to me that there is a particular character to the knowledge that is produced when communities of thinkers move through abstractions that bring them from measuring temperature and volume to information-driven theories of a science that could contain both physics and biology. It’s all about relations. I haven’t written this to answer my question about what’s happening here, but mostly to ask it.
A recent post on scientificamerican.com got my attention – no surprise given its title, Could Multiple Personality Disorder Explain Life, the Universe, and Everything? It was co-authored by three individuals: computer scientist Bernardo Kastrup, psychotherapist Adam Crabtree, and cognitive scientist Edward F. Kelly. The article’s major source is a paper written by Kastrup, published this year in the Journal of Consciousness Studies, with the title The Universe in Consciousness. I’ll try to outline here the gist of the argument. It begins with a very convincing narrative substantiating the presence of multiple personalities in individuals who experience this. One of the most remarkable was the case (reported in Germany in 2015) of a woman who had dissociated personalities, some of whom were blind.
The history of cases of dissociated personalities goes back to the late 1800s, and the authors tell us that the literature provides significant evidence that “the human psyche is constantly active in producing personal units of perception” – what we would call selves. While it remains unclear how this happens, they argue that the development of selves, or personal units of perception, should play a role in how we understand “what is and is not possible in nature.” The case they make requires working through several philosophical perspectives – specifically physicalism, constitutive panpsychism, and cosmopsychism.

Proponents of physicalism believe that we should be able to understand mental states through a thorough analysis of brain processes. The ongoing problem with this expectation is that there is still no way to connect feelings to different arrangements of physical stuff. Constitutive panpsychism is the idea that what we call experience is inherent in every physical thing, even fundamental particles. Human consciousness would somehow be built “by a combination of the subjective inner lives of the countless physical particles that make up our nervous system.” But, the authors argue, the articulation of this perspective does not provide a way to understand how lower-level points of view (atoms and molecules) would combine to produce higher-level points of view (human experience).

The alternative would be that consciousness is fundamental in nature but not fragmented. This is cosmopsychism which, the authors say, is essentially classic idealism, where the objects of our experience depend on something more fundamental than particles, and that fundamental thing is more like mind or thought than matter. The difficulty with this view is understanding how various private conscious centers (like you and everyone around you) emerge from a ‘universal consciousness.’ Keying on this question is what makes the presence of multiple personalities in one individual a useful indicator of how to think about this larger question.

Kastrup’s paper, on which this very readable Scientific American article is based, is steeped in the language of philosophy. He works to unpack the mainstream physicalist perspective and why it doesn’t work, and then he examines a number of panpsychist views and their weaknesses. For his own argument, he relies most heavily on a proposal from philosopher Itay Shani.
Kastrup’s thinking is in line with Shani’s, but he goes to great lengths to examine the weaknesses in Shani’s view. For the remainder of the paper, Kastrup focuses on addressing the following questions: how do fleeting experiential qualities arise out of “one enduring cosmic consciousness”; what causes individual experiences to be private; how can the physical world we measure be explained in terms of a concealed, thoughtful order; why does brain function correlate so well with our awareness if it doesn’t generate it; and finally, why are we all imagining the same world, outside the control of our personal volition? Kastrup’s analysis of these questions is thorough and precise. He uses the phenomenon of dissociated personalities (which he calls alters) to address the privacy of individual experiences (since the alters within one individual are nonetheless private from each other), and the functional brain scans that distinguish actual alters from ones that are just acted out, to imagine how each of us is the result of a “cosmic-level dissociative process.”

These are difficult ideas to accept given what we have come to expect from the sciences. But I will point out that aspects of these proposals run parallel to ones proposed by contemporary neuroscientists and physicists. The intimate connection between physics and mathematics always raises questions about the relatedness of mind and matter. For 17th-century mathematician and philosopher Gottfried Wilhelm Leibniz, the fundamental substance of the universe could not be material. It had to be something undividable, something resembling a mathematical point more than a speck of dust. The material in our experience is then somehow a consequence of the relations among these nonmaterial substances, which actually resemble ‘mind’ more than ‘matter.’

For physicist and author David Deutsch, information and knowledge are the fundamentals of physical life. In his book The Beginning of Infinity, Deutsch compares and contrasts human brains and DNA molecules. “Among other things,” he says, they are each “general-purpose information-storage media….” And so Deutsch sees biological information and explanatory information each as instances of knowledge which, he says, “is very unlikely to come into existence other than through the error-correcting process of evolution or thought.”

The Integrated Information Theory of consciousness, proposed by neuroscientist Giulio Tononi and defended by neuroscientist Christof Koch, suggests that some degree of consciousness is an intrinsic, fundamental property of every physical system. Also, cosmologist and author Max Tegmark is of the opinion that if we want to understand all of nature, we have to consider all of it together. For Tegmark there are three pieces to every puzzle – the thing being observed; the environment of the thing being observed (where there may be some interaction); and the observer. He identifies three realities in his book Our Mathematical Universe – external reality, consensus reality, and internal reality. External reality is the physical world, which we believe would exist even if we didn’t (and which physics describes mathematically). Consensus reality is the shared description of the physical world that self-aware observers agree on (and it includes classical physics). Internal reality is the way you subjectively perceive the external reality.
As with many ideas in physics, the universe is understood in terms of information, and Tegmark has said that he thinks consciousness is the way information ‘feels’ when processed in complex ways. It seems to me that a similar insight into what we have been overlooking, about ourselves and our world, is being approached from several directions and in languages specific to individual disciplines. The proposals from physicists and neuroscientists are held together with mathematics. But they all bring to mind, again, something I thought when I watched my mother’s mind change with the development of a tumor in the right frontal lobe of her brain. Among the many things I questioned was how it is that the cells in her body could produce her experience if something like consciousness or thought did not already exist in the world that created her.
Earlier this month, Nature reported on artificial intelligence (AI) research in which deep learning networks (an AI strategy) spontaneously generated patterns of computation that bore a striking resemblance to the activity generated by our own grey matter – namely by the neurons called grid cells in the mammalian brain. The patterned firing of grid cells enables mammals to create cognitive maps of their environment. The artificial network that unexpectedly produced something similar was developed by neuroscientists at University College London, together with AI researchers at the London-based Google company DeepMind. A computer-simulated rat was trained to track its movement in a virtual environment. The Nature article by Alison Abbott tells us that the grid-cell-like coding was so good, the virtual rat was even able to learn shortcuts in its virtual world. And here’s an interesting response to the work from neuroscientist Edvard Moser, a co-discoverer of biological grid cells:
There is something provocative about measuring the brain’s version of grid-cell navigation against this emergent but simulated grid-cell action. In Nature’s News and Views, Francesco Savelli and James J. Knierim tell us a bit more about the study.

First, for the sake of clarity: what researchers call deep learning is a kind of machine learning characterized by layers of computations, structured in such a way that the output from one computation becomes the input of another. Inputs and outputs are defined by a transformation of the data, or information, being received by each layer. The data is translated into “compact representations” that promote the success of the task at hand – like translating pixel data into a face that can be recognized. A system like this can learn to process inputs so as to achieve particular outputs. The extent to which each of the computations, in each of the layers, affects the final outcome is determined by how they are weighted. With optimization algorithms, these weights are adjusted to optimize results. Deep learning networks have been successful with computer vision, speech recognition, and games, among other things. But navigating oneself through the space of one’s environment is a fairly complex task.

The research that led to Moser’s Nobel Prize in 2014 was the discovery of a kind of family of neurons that produces the cognitive maps we develop of our environments. There are place cells, neurons that fire when an organism is in a particular position in an environment, often one with landmarks. There are head-direction neurons that signal where the animal seems to be headed. There are also neurons that respond to the presence of an edge to the environment. And, most relevant here, there are grid cells. Grid cells fire when an animal is at any of a set of points that define a hexagonal grid pattern across its environment. The neuron’s firing maps to a point on the ground. Grid cells contribute to the animal’s sense of position, corresponding to the direction and distance covered by some number of steps taken. Banino and colleagues wanted to create a mechanism for self-locating in a deep-learning network. Such a mechanism is referred to as path integration.
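For concreteness, here is the computation that “path integration” names, stripped of any neural machinery. This is a sketch of the task itself, not of Banino and colleagues’ network: position is just the running sum of self-motion signals.

```python
import numpy as np

def integrate_path(start, velocities, dt=1.0):
    """Dead reckoning: accumulate velocity signals to track position.

    This is the task the network was trained on, reduced to its essence.
    The interesting result is that grid-like codes emerged while a deep
    network learned to do this, not that the arithmetic is hard.
    """
    position = np.asarray(start, dtype=float)
    trajectory = [position.copy()]
    for v in velocities:
        position += np.asarray(v, dtype=float) * dt  # displacement this step
        trajectory.append(position.copy())
    return np.array(trajectory)

# A simulated agent steps east four times, then north three times.
steps = [(1, 0)] * 4 + [(0, 1)] * 3
print(integrate_path((0, 0), steps)[-1])  # [4. 3.]
```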
And this is what happened:
These grid-like units allowed the network to keep track of position, but whether they would function in the network’s navigation to a goal was still a question. The researchers addressed this by adding a reinforcement-learning component. The network learned to assign values to particular actions at particular locations, and higher values were assigned to actions that brought the simulated animal closer to a goal.
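The value-assignment idea can be sketched in a few lines. What follows is generic tabular Q-learning, my illustration of the principle only; the actual study used deep reinforcement learning rather than a table.

```python
import numpy as np

n_locations, n_actions = 25, 4          # a 5x5 grid world; move N/S/E/W
alpha, gamma = 0.1, 0.9                 # learning rate, discount factor
Q = np.zeros((n_locations, n_actions))  # value of each action at each location

def q_update(loc, action, reward, next_loc):
    """Nudge the value of (location, action) toward the reward received
    plus the discounted value of the best action available next."""
    target = reward + gamma * Q[next_loc].max()
    Q[loc, action] += alpha * (target - Q[loc, action])

# After many episodes, actions that lead toward the goal accumulate higher
# values, and following the highest-valued action at each location traces
# a path to the goal.
```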
Unlike in the navigation systems developed by the brain, in this artificial network the place-cell layer is not changed during the training that affects the grid cells. But the way that grid and place cells influence each other in the brain is not well understood, and further development of the artificial network might help unravel their interaction.
There is clear pragmatic promise in this research, involving both AI and its many applications, as well as cognitive neuroscience. But I find it striking for a different reason: it seems to provide something new, and provocative, about mathematics’ ubiquitous presence. When I first learned about the action of grid cells, I was impressed with the way this fully biological, unconscious, cognitive mechanism resembled the abstract coordinate systems of mathematics. But here there is an interesting reversal. Here we see the biological one emerging, without our direction, from a system that owes its existence entirely to mathematics. It puts mathematics somewhere in between everything, in a way that we haven’t quite grasped. It’s intelligence we can’t locate.

My thoughts started jumping around today, trying to land on what it was that I found so fascinating about a recent article in Quanta Magazine. This is one of the statements that got me going:
It was in the early nineties that the surprise first occurred – an alert that there is a mirror symmetry between two different mathematical structures – and mathematicians have been investigating it for almost three decades now. The Quanta Magazine article reports that they seem to be close to being able to explain the source of the mirroring. Kevin Hartnett, author of the Quanta article, characterizes their effort as one that could produce “a form of geometric DNA – a shared code that explains how two radically different geometric worlds could possibly hold traits in common.” (I like this biologically-themed analogy.)

The whole mirroring phenomenon rests largely on the development of string theory in physics, where theorists found that the strings they hoped were the fundamental building blocks of the universe required six dimensions more than are contained in Einstein’s four-dimensional spacetime. String theorists answered the demand by finding two ways to account for the missing six dimensions – one from symplectic geometry and the other from complex geometry. These are the two distinct arrangements of geometric ideas that mathematicians are now examining.

The nature of a symplectic geometric space is grounded in the idea of phase space, where each point actually represents the state of a system at a given time. A phase space is defined by patterns in data, not by the spatial arrangement of objects. It is a multidimensional space in which each axis corresponds to a coordinate that specifies one aspect of the physical system. When all the coordinates are represented, a point in the space corresponds to a state of the system. The nature of complex geometry, on the other hand, has its roots in algebraic geometry, where the objects of study are the graphed solutions to polynomial equations. Here the ordered pairs represent exactly positions on a grid (like those x,y pairs we learn about in high school), or complex numbers in a complex space, where those numbers are solutions to equations. The beauty of this arrangement is that the properties possessed by the geometric representation of these solutions (or the objects they produce) tell us more about the equations than we would know without these representations. But wherever they are, these solutions are rigid geometric objects. The phase space is more flexible. Hartnett tells us that:
Robert Dijkgraaf, Director and Leon Levy Professor at the Institute for Advanced Study, tells an interesting story. Around 1990, a group of string theorists asked geometers to calculate a number related to the number of curves of a particular degree that could be wrapped around the kind of space, or manifold, that is heavily used in string theory (a Calabi–Yau space). A result from the nineteenth century established that the number of lines, or degree-one curves, is equal to 2,875. The number of degree-two curves, computed around 1980, is 609,250. The number of curves of degree three had not been computed. This was the one the geometers were asked to compute.
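For the record – this detail comes from the published accounts of the episode rather than from Dijkgraaf’s telling here – the first three counts for the quintic Calabi–Yau are:

$$n_1 = 2{,}875, \qquad n_2 = 609{,}250, \qquad n_3 = 317{,}206{,}375.$$

The degree-three number is the one the string theorists’ mirror-symmetry calculation predicted; when the geometers’ computer count initially disagreed, the error turned out to be in the geometers’ program.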
The duality appeared to run deep, and mathematicians and physicists alike began to try to understand the underlying feature that would account for the mirroring phenomenon. A proposed strategy is to deconstruct a shape in the symplectic world in such a way that it can be reconstructed as a complex shape. The deconstruction can make a multidimensional symplectic manifold easier to visualize, and it can also reduce one of the mirror spaces into building blocks that can be used to construct the other. This would likely lead to a better understanding of what connects them. Again from Dijkgraaf:
This is a remarkable story, and there are many in mathematics. I’ve always been captivated by how the spatial ideas of this discipline, once charged with measuring the earth, became the abstract ideals described by Euclid, were then stretched to accommodate spaces with non-Euclidean shapes (including our spacetime), and were further developed to create spaces defined by patterned data of any kind – the symplectic kind. In this story, mathematicians, like experimentalists, become charged with the need to find a reason for an unexpected observation. But it is an observation of the fully abstract world that mathematics built. What are these abstract worlds made of? How do they become more than we can see? I’m well aware of the lack of precision in these questions, but there is value in stopping to consider them. To what extent are these abstract spaces objective? Where are these investigations happening? There is no doubt that we have yet to understand what we realize when we find mathematics.

A recent article in Quanta Magazine anticipates the publication of the 6th edition of Proofs from THE BOOK, collected by Martin Aigner and Günter Ziegler. The original volume was inspired by the well-known and prolific mathematician Paul Erdős, who traveled the world, participating in countless collaborative efforts, and who would say of proofs that he judged to be of sublime beauty, “This one is from The Book.” This Book was imagined as the heavenly collection of mathematics’ perfect proofs. Aigner suggested the possibility of actually making The Book in 1994. Aigner, along with fellow mathematician Günter Ziegler, and with contributions from Erdős himself, published the first volume in 1998. Unfortunately, Erdős died in 1996, at the age of 83, and never saw the volume in print. The book received the 2018 Steele Prize for Mathematical Exposition. One of the nice things that the article points out is that there are theorems that have a number of different proofs, each one telling you something different about the theorem or the structures involved in its proof.
This kind of discussion highlights how mathematical ideas can be multi-aspected – the very thing that makes a mathematical idea powerful and difficult to categorize in our experience. But in the lower right margin of the article were links to related articles, and it was there that I found Michael Atiyah’s Imaginative State of Mind. This piece was written about a year ago, when Michael Atiyah hosted a conference at the Royal Society of Edinburgh on The Science of Beauty. There is a video of his introductory remarks on YouTube worth a listen. The article was built around Atiyah’s responses to some questions that the authors were able to ask him on the occasion of the conference.
I felt encouraged by the refreshingly sensory ways Atiyah characterized his experience as a mathematician. Like here:
In response to being asked if he had always had mathematical dreams he said this:
And when asked about the two works for which he is best known (the index theorem and K-theory), he suggested this very visual way of describing K-theory:
I found a nice description of how the index theorem can connect the curvature of a space to its topology (or the number of holes it has). The classic precursor of this kind of connection is the Gauss–Bonnet theorem, which says that the total curvature of a closed surface equals 2π times its Euler characteristic, a number determined entirely by the surface’s holes.

One of the things Atiyah is committed to at the moment is reversing the mistake of ignoring the small effect of gravity on an electron or proton. He says he’s going back to Einstein and Dirac and looking at them again, and he thinks he sees things that people have missed. “If I’m wrong,” he says, “I made a mistake. But I don’t think so.” At the end of the introductory remarks he made at The Science of Beauty conference, he said that he found himself closer to the mystical views of Pythagoras than to those who completely rejected mysticism: “A little bit of mysticism is important in all forms of life.”

When asked if he thought a computer could be made to recognize beauty, his response led to his characterizing the mind as a parallel universe. More than just logic, the mind has aspects that recognize states. These are not verbal or pictorial states, but conceptual states. And beauty lives somewhere in the mind. This is the kind of insight that doing mathematics can produce. And it will, I believe, lead us to completely new ideas about who we are and what it is that our minds may be producing. A last thought on mathematics:
My attention was just recently brought to the work of philosopher and poet Emily Grosholz. It’s rare to find an individual so steeped in the ways of both poetry and mathematics, and in the desire to explore how and what they express about us. What I would like to consider here, in this particular post, is really a detail of the extensive thought and research that Grosholz brings to the discussion of how mathematics grows. But I think it’s a powerful idea that can have a good deal to say about how we work, and how we, as a species, produce the bountiful and variegated products of human culture. Grosholz is the author of many books, including works on the philosophy of mathematics as well as works of poetry. Her latest is Starry Reckoning: Reference and Analysis in Mathematics and Cosmology. What follows is based on a piece that she contributed to a book she edited with Herbert Breger, called The Growth of Mathematical Knowledge. Her piece is given the title The Partial Unification of Domains, Hybrids, and the Growth of Mathematical Knowledge.

Here Grosholz argues that, unlike what has been considered before, different branches of mathematics do not reduce to other branches. Philosophers of mathematics have discussed the possibility that geometry can be reduced to arithmetic, arithmetic to predicate logic, and arithmetic and geometry to set theory. This is understood in much the same way that one might claim that biology can be reduced to chemistry, and chemistry to physics. The vocabulary of the reduced theory is redefined in terms of the reducing theory. In the sciences, the reducing theory has been thought to play an explanatory role, suggesting an inherent unity among the various scientific disciplines. But in mathematics the so-called reducing theory is not used so much as an explanation of the reduced ideas as a foundation for them. And mathematicians have long had difficulty with foundational questions.

Grosholz, on the other hand, proposes that mathematics is a collection of rationally related but autonomous domains, and then highlights the potent role of what she calls mathematical hybrids. She explains that in Greek mathematics the autonomy of domains is clear. Geometry is about points, lines, planes, and figures, and geometric problems involve relations between parts of the whole of spatial figures. Arithmetic is about numbers, and problems in arithmetic involve monotonic, discrete succession. The vocabulary of logic is one of terms, propositions, and arguments, and problems in logic involve ideas of inclusion, exclusion, consistency, and inconsistency. While these separate domains may seem to resist assimilation, 17th-century mathematics introduced some unifications. Among these are Descartes’ application of algebraic techniques to geometric constructions, and Leibniz’s application of combinatorics to the analysis of curves. Grosholz spends some time on each of these. She points out that Leibniz was fascinated with formal languages and number theory, and that he believed the art of combinations was central to the art of discovery.
She argues that Leibniz’s investigation of algebraic forms in the calculus is grounded in “an imperfect but suggestive analogy between numbers and figures.” The infinite summing of infinitesimal differences that becomes the integral emerges from his ability to bridge geometric ideas about a curve (like tangent, arc length, and area) with algebraic equations; and through the notion of an infinite-sided polygon approximating the curve, patterns of integers were also connected. Here the mathematical hybrid emerges: an abstract structure that rationally relates different domains in the service of problem solving. On a deeper level, objects in each domain must actually exhibit features of both domains, despite the instability created by their differences. But, Grosholz argues, this instability does not mean that hybrids are defective. They are held together by the clarity of the domains from which they emerge and the abstract structures that link them. “Logical gaps are to be found at the heart of many hybrids,” Grosholz explains, but imaginative analogies inspire the kind of revision and invention that promotes the growth of mathematical knowledge. I was always impressed by the fact that these intuitive leaps of Leibniz’s, while prompting subsequent generations to feel the need to bring acceptable rigor to the notions, were nonetheless substantiated. Grosholz lends some important detail to the picture Richard Courant paints of 17th-century pioneers of mathematics in his classic text, What is Mathematics?
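To make the hybrid concrete in modern notation (my rendering, not Grosholz’s): treating the curve as a polygon with infinitely many infinitesimal sides turns its geometry into arithmetic with differences, so that, for example,

$$\int_a^b dy = y(b) - y(a), \qquad s = \sum \sqrt{(\Delta x)^2 + (\Delta y)^2} \;\longrightarrow\; \int_a^b \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\, dx,$$

where the geometric notion of arc length on the left becomes, on the right, an algebraic expression built from the equation of the curve.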
This talk of hybrids reminded me of the interdisciplinarity that Virginia Chaitin writes about. I wrote this in an earlier post about one of her papers:
In her own words:
Grosholz’s identification of the hybrid is an important insight, and I would argue that it has implications beyond mathematics. It may be that because the objects of mathematics are so clean, or unambiguous, the value of the hybrid is more easily observed. But my hunch is that productive analogies likely belong to the stuff of life itself. I have been particularly focused on whether mathematics can tell us something about the nature of thought – something we have not yet understood about what thought is made from, how it happens, and how it is connected to everything else in the universe. These questions inevitably point me in the direction of research in cognitive science and neuroscience, philosophical debates about the viability of the objectivity on which science relies, and discussions of what we even mean by ‘knowledge.’ Mathematics shows up everywhere – in the abstractions and probabilities involved in how the brain learns, for example, or how the brain constructs what we see, or how the brain navigates the space around us. One of the avenues I’ve followed has led me through the science of self-organizing systems and the application of information theory to biology in particular, some of which was discussed in a recent post. In this context, we see biologists exploiting the value of mathematical ideas. And the modeling that happens in these research efforts doesn’t just predict outcomes; it often characterizes the action. The behavior of swarms, flocks, insect colonies, and even cells is mathematical. It happens in the other direction as well. Mathematician and computer scientist Gregory Chaitin has approached biology mathematically – not in the sense of modeling behavior, but more in the way of expressing the creativity of evolution using the creativity of mathematics. Here’s a little piece of a post from about six years ago:
Chaitin is also one of the mathematicians who developed what is known as algorithmic information theory. And I recently happened upon a paper by Giulio Ruffini at Starlab Barcelona with the title An Algorithmic Information Theory of Consciousness, published near the end of 2017. Ruffini’s research is motivated, to some extent, by the value of being able to provide a metric of conscious states. But the course he’s chosen is described in the abstract:
Ruffini argues that characterizing consciousness is “a profound scientific problem,” and progress in this area will have important practical implications with respect to any one of a number of disorders of consciousness. While the paper is mostly aimed at justifying the fit of algorithmic information theory (which he refers to as AIT) to this endeavor, one can also see some of the deeper philosophical convictions that motivate his approach. He says the following, for example, in his introduction:
But I found the conviction that seems to be driving his perspective clearly laid out in his 2007 paper Information, complexity, brains and reality (Kolmogorov Manifesto). There he says that information theory gives us the conceptual framework we need to comprehend how brains and the universe are related. That seems like the really big picture. He also says:
Compression is one of the key ideas. Relations expressed in equations, or events captured by programs, have been compressed, and the simplification is productive. The Kolmogorov complexity of a data set, in algorithmic information theory, is defined as the length of the shortest program able to generate it. Experience is a consequence of the brain’s compression (and hence simplification) of an ongoing flood of sensory data. And so one of Ruffini’s ideas is that science is what brains do – and this, he says, is to be taken as a definition of science. Here are a few of the ideas his paper means to address, some more provocative than others:
This sketchy survey of the paper does not do it justice. But I bring it to your attention as yet another indication that the blend of information theory and biology is running deep.
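One footnote to the compression idea above: Kolmogorov complexity itself is uncomputable, but off-the-shelf compressors give a computable upper bound, which is how AIT-inspired metrics are typically estimated in practice. A minimal sketch (my illustration, not Ruffini’s code):

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Compressed length: a computable stand-in (an upper bound) for the
    uncomputable Kolmogorov complexity of the data."""
    return len(zlib.compress(data, 9))

patterned = b"ab" * 500   # generated by a very short program
noisy = os.urandom(1000)  # (pseudo)random bytes resist compression

print(compressed_size(patterned))  # small: the regularity is 'seen'
print(compressed_size(noisy))      # close to 1000: nearly incompressible
```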

