Spaces upon spaces – topology and slum conditions

I didn’t know anything about topology before I entered graduate school, but I continue to see it as one of the more provocative specialties in mathematics and an important transition of thought. Most definitions of the subject describe it as the study of the properties of objects that are preserved through deformations like stretching and twisting. Cutting, tearing, and gluing are not allowed. A circle is topologically equivalent to an ellipse because the circle can simply be stretched into the ellipse. Removing one point from either the circle or the ellipse, however, produces something else: we now have, topologically, a line segment. A sphere is topologically equivalent to an ellipsoid, again because one can be squeezed or stretched into the other. But a doughnut, because of the hole it has, is not topologically equivalent to either. Holes, in fact, become key to creating equivalence classes of things. A well-known and lighthearted example is that a coffee cup is topologically equivalent to a doughnut. Even without any training, one can see that these equivalences depend on another level of abstraction, one that challenges intuitive notions. While topology often considers shapes and spaces, it is not concerned with distance or size.
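To make the first of these equivalences concrete, the deformation can be written down explicitly. With $a$ and $b$ the semi-axes of the ellipse, the standard map from the unit circle is

\[
f(\cos t, \sin t) = (a\cos t,\ b\sin t), \qquad 0 \le t < 2\pi ,
\]

which is continuous with continuous inverse $(x, y) \mapsto (x/a,\ y/b)$, so the circle and the ellipse are homeomorphic, which is to say topologically equivalent.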

My affinity for this branch of mathematics may have been helped along by the fact that my favorite teacher in graduate school was a topologist. Sylvain Cappell, still at the Courant Institute of Mathematical Sciences at NYU, introduced me to topological ideas. I’ve saved a Discover Magazine article from 1993 that landed in my mailbox not long after I left Courant. In it, Cappell discusses the motivation and effectiveness of a topological approach to problems. The late Fields Medalist William Thurston also contributed to that article. Thurston suggested that our difficulty with perceiving the higher dimensions that are a fundamental consequence of topological ideas is primarily psychological. He believed that the mind’s eye is divided between linear, analytic thinking and geometric visualization.

Algebraic equations, for example, are like sentences. The formula that gives you the volume of a cube, x times x times x, can easily be communicated in words. But the shape of the cube is another matter. You have to see it.

When we talk about higher-dimensional spaces, Thurston says, we’re learning to think in and plug into this other spatial processing system. The going back and forth is difficult because it involves two really foreign parts of the brain.

Emphasizing the value of “seeing it,” Cappell argues that even with a 2-dimensional graph relating something like interest rates to consumer spending, where neither has anything to do with geometry, the shape of the line that represents their relationship gives you a better grasp of the situation.

The same holds true in five or even ten-dimensional models. Logically, it may seem like the geometry is lost, that it’s just numbers, says Cappell. But the geometry can tell you things that the numbers alone can’t: how a curve reaches a maximum, how you get from there to here. You can see hills and valleys, sharp turns and smooth transitions; holes in a doughnut-shaped nine-dimensional model might indicate realms where no solutions lie.

A few days ago, an article in Forbes told us that topology can help us see something about how to reduce slum conditions in cities. The researchers, it explains, opt for a “shape-based” understanding of cities.

According to the team’s research, when two or more city sections have the same number of blocks, they’re topologically equivalent and can be deformed into each other. Using that approach, sections of Mumbai can be deformed into Las Vegas suburbs or even areas of Manhattan.

How are slums and planned cities topologically different? The difference emerges essentially from better or worse access to infrastructure, and the researchers claim that once cities are understood as topological spaces, the access issue can be resolved mathematically.

Their approach uses an algorithm that can be applied to any city block, they note in their paper. It applies tools from topology and graph theory — the branch of mathematics concerned with networks of points connected by lines — to neighborhood maps to diagnose and “solve critical problems of development,” they wrote.

This is an interesting peek at what can happen in or with mathematics. Topology itself requires a willingness to look differently. Using topological ideas to analyze or to address the development of slum conditions in sprawling cities is unexpected. It’s a geometry being applied to a space, but not directly, not because it resembles the space. It says more about how abstractions can give us greater access to the real world. Or how the mind’s eye can see.

Both the Discover and Forbes articles are worth a look.

Update on site

Hello everyone,

I want to let subscribers know that I am making some hosting changes.  I will be posting a blog tomorrow.  If you don’t receive notice of the post, I encourage you to resubscribe.

Thanks for your interest in the site.

Joselle

Prejudice in an abstract world

I was struck today by the title of an article in Science News that read, Before his early death, Riemann freed geometry from Euclidean prejudices. The piece, by science writer Tom Siegfried, was no doubt inspired by the recent claim from award-winning mathematician Michael Atiyah that he has proved the long-standing Riemann hypothesis, one of the most famous unsolved problems in mathematics for close to 160 years. But Siegfried’s article was more about Riemann’s extraordinary insights than it was about Atiyah’s claim (which I’ll get to before I’m done here). First, let me say this: by using the term ‘Euclidean prejudices,’ Siegfried is telling us that we could be misled by prejudices even in mathematics. And what prejudices usually do is conceal the truth. The word actually appears in Riemann’s famous lecture itself. This is from a translation of the lecture by William Kingdon Clifford:

Researches starting from general notions…can only be useful in preventing this work from being hampered by too narrow views, and progress in knowledge of the interdependence of things from being checked by traditional prejudices.

Siegfried seems not so much interested in talking about mathematics itself as he is in illustrating the significance of a change of perspective within mathematics. Most young students of mathematics would never imagine that there could be more than one mathematical way to think, or even that within the discipline there is mathematical thinking that’s not just problem solving. Referring to Riemann’s famous lecture, given in 1854, that essentially redefined what we mean by geometry, Siegfried says this:

In that lecture, Riemann cut to the core of Euclidean geometry, pointing out that its foundation consisted of presuppositions about points, lines and space that lacked any logical basis. As those presuppositions are based on experience, and “within the limits of observation,” the probability of their correctness seems high. But it is necessary, Riemann asserted, to “inquire about the justice of their extension beyond the limits of observation, on the side both of the infinitely great and of the infinitely small.” (emphasis added)

Physics gets us beyond the limits of observation with extraordinarily imaginative instruments, detectors of all sorts. But how is it that mathematics can get us beyond those limits on its own? How is it possible for Riemann to see more without getting outside of himself? I don’t think this is the usual way the question is posed, but I have become a bit preoccupied with understanding how it is that purely abstract formal structures, which we seem to build in our minds with our intellect, can get us beyond what we are able to observe. Again from Siegfried:

Riemann’s insights stemmed from his belief that in math, it was important to grasp the ideas behind the calculations, not merely accept the rules and follow standard procedures. Euclidean geometry seemed sensible at distance scales commonly experienced, but could differ under conditions not yet investigated (which is just what Einstein eventually showed)…

…Riemann’s geometrical conceptions extended to the possible existence of dimensions of space beyond the three commonly noticed. By developing the math describing such multidimensional spaces, Riemann provided an essential tool for physicists exploring the possibility of extra dimensions today.

Riemann appears in a number of my posts. I’ve taken particular interest in the significance of his work in part because, in his famous lecture on geometry, he cited the philosopher Johann Friedrich Herbart as one of his influences. Herbart pioneered early studies of perception and learning, and his work played an important role in 19th century debates about how the mind brings structure to sensation. In his book Labyrinth of Thought, José Ferreirós takes up Riemann’s introduction of the notion of a manifold and says this:

Herbart thought that mathematics is, among the scientific disciplines, the closest to philosophy. Treated philosophically, i.e., conceptually, mathematics can become a part of philosophy… According to Scholz, Riemann’s mathematics cannot be better characterized than as a “philosophical study of mathematics” in the Herbartian spirit, since he always searched for the elaboration of central concepts with which to reorganize and restructure the discipline and its different branches, as Herbart recommended [Scholz 1982a, 428; 1990a].

I think the way I first grappled with the depth of Riemann’s insights was to consider that he was somehow guided by the cognitive processes that govern perception, despite the fact that they operate outside our awareness. Some blend of experience, psychology, and rigor worked to establish the clarity of his view. I wrote a piece for Plus magazine on this very topic.

Herbart’s thinking foreshadows what studies in cognitive science now show us about how we perceive space and magnitude — it may be that Riemann’s mathematical insights reflect them.

More recently I’ve become focused on a related question, though maybe from a different angle, and that is: what is actually happening when we explore mathematical territories? How is this internal investigation accomplished? What does the mind think it’s doing? These questions are relevant because it is clear that there is significantly more going on in mathematics than calculation and problem solving. Riemann’s groundbreaking observations make that clear. They may sound like impossible questions to answer, but even just organizing an approach to them is likely to involve, at the very least, cognitive science and neuroscience, mathematics, and epistemology, which makes them clearly worthwhile.

About Atiyah’s claimed breakthrough, an NBC News article said this:

“Atiyah is a wizard of a mathematician, but there’s a lot of skepticism among mathematicians that his wizardry has been sufficient to crack the Riemann Hypothesis,” John Allen Paulos, a professor of mathematics at Temple University in Philadelphia and the author of several popular books on mathematical topics, told NBC News MACH in an email.

This skepticism is present in almost every article I read, but Atiyah remains confident and has promised to publish a full version of the proof.

To be or not to be abstract

I’m not completely sure I understand where my desire to grasp the value of abstractions is taking me, but as I think about mathematics, and more recent trends in the sciences, I keep wanting to get further and further behind what our symbolic reasoning is actually doing, and how it’s doing it. I have this idea that if I can manage to somehow see around, or inside, the products of our minds, I’ll see something new. Maybe I can simplify the question that my own mind keeps asking, but that won’t help answer it. Here’s the thing I’m stuck on. Everything that we do (in the arts and the sciences) is based on a continuous flow of thoughts that amass into threads of thought, threads that have now moved through the hands and minds of countless individuals, over thousands of years, creating giant fabrics of meaningful information. What are these threads made from? How do they develop? How are they related to everything else in nature? And what might the giant fabrics of human culture have to do with everything else? While we generally distinguish symbols from reality, a very large part of our reality is now built more from ideas than from concrete or wood, using the symbols that make the sharing of these ideas possible. Even language captures relations, among sentiments and experience, in symbol. These relations, together with a kind of reasoning or logic, build our social and political systems. We’re so immersed in our languages and our symbols that we don’t even see them. And I don’t know if we can see them in the way that could address my curiosities. But I’m convinced that we can see more than we do. And the steady growth of information theory (and its more immediate relatives, like algorithmic information theory and quantum information theory) seems to shed new light on the reality of abstract relations. Mathematics is distinguished as the discipline that explores purely abstract relations. The fruits of many of these explorations now serve the parts of our world that we can get our hands on – things like astronomy, engineering, physics, computer science, biology, medicine, and so on. I’m beginning to consider that the information sciences may yet uncover something about why mathematics has been so fruitful.

I read today about Constantinos Daskalakis, who was awarded the Rolf Nevanlinna Prize at the International Congress of Mathematicians 2018 for his outstanding contributions to the mathematical aspects of the information sciences. In particular, Daskalakis made some new observations about some older ideas – namely game theory and what is called a Nash equilibrium. Marianne Freiberger explains the Nash equilibrium in Plus Magazine:

When you throw together a collection of agents (people, cars, etc) in a strategic environment, they will probably start by trying out all sorts of different ways of behaving — all sorts of different strategies. Eventually, though, they all might settle on the single strategy that suits them best in the sense that no other strategy can serve them better. This situation, when nobody has an incentive to change, is called a Nash equilibrium.
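To make the definition concrete, here is a minimal sketch in Python. The payoff numbers are hypothetical (a standard prisoner’s-dilemma setup, nothing from Daskalakis’ own work); the code simply checks every pure-strategy profile of a two-player game and keeps those from which neither player can gain by unilaterally switching:

```python
# Hypothetical 2x2 game: strategy 0 = cooperate, strategy 1 = defect.
# Entry [r][c] is a player's payoff when row plays r and column plays c.
row_payoffs = [[3, 0],
               [5, 1]]
col_payoffs = [[3, 5],
               [0, 1]]

def is_nash(r, c):
    # Nash condition: neither player improves by a unilateral switch.
    row_ok = all(row_payoffs[r][c] >= row_payoffs[r2][c] for r2 in range(2))
    col_ok = all(col_payoffs[r][c] >= col_payoffs[r][c2] for c2 in range(2))
    return row_ok and col_ok

equilibria = [(r, c) for r in range(2) for c in range(2) if is_nash(r, c)]
print(equilibria)  # [(1, 1)] -- mutual defection: stable, but not positive
```

Brute force works here because the game is tiny; Daskalakis’ result concerns how this kind of search becomes computationally intractable as systems grow.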

A Nash equilibrium is not necessarily positive; it’s just stable. Nash proved in 1950 that no matter how complex a system is, it is always possible to arrive at an equilibrium. But a question remained – knowing that a system can stabilize doesn’t tell us whether it will. And nothing in Nash’s proof tells us how these states of equilibrium are constructed, or how they happen. People have searched for algorithms that could find the Nash equilibrium of a system, and they found some, but the time it would take to do the computations, or to complete the task, wasn’t clear. Daskalakis explains in Freiberger’s article:

“My work is a critique of Nash’s theorem coming from a computational perspective,” he explains. “What we showed is that [while] an equilibrium may exist, it may not be attainable. The best supercomputers may not be able to find it. This theorem applies to games that we play, it applies to road networks, it applies to markets. In many complex systems it may be computationally intractable for the system to find a stable operational mode. The system could be wandering around the equilibrium, or be far away from the equilibrium, without ever being drawn to a stable state.”

Daskalakis’ work alerts people working in relevant industries that a Nash equilibrium, while it exists, may be essentially unattainable, because the algorithms don’t exist or because the problem is just too complex. These considerations are relevant to people who design things like road systems, or online products like dating sites or taxi apps.

When designing such a system, you want to optimise some objective: you want to make sure that traffic flows consistently, that potential dates are matched up efficiently, or that taxi drivers and riders are happy.

“If you are counting on an equilibrium to deliver this happy state of affairs, then you better make sure the equilibrium can actually be reached. You better be careful that the rules that you set inside your system do not lead to a situation where our theorem applies,” says Daskalakis. “Your system should be clean enough and have the right mathematical structure so that equilibria can arise easily from the interaction of agents. [You need to make sure] that agents are able to get to equilibrium and that in equilibrium the objectives are promoted.

Another option is to forget about the equilibrium and try to guarantee that your objective is promoted even [with] dynamically changing behaviour of people in your system.

This confluence of game theory, complexity theory and information science has made it possible to see the abstract more clearly, or has made a mathematical notion somehow measurable. The work includes a look at how hard the solution to a problem can be, and whether or not the ideal can be actualized. What struck me about the discussion in Plus was the fact that Daskalakis’ work was thought to address the difference between the mathematical existence demonstrated by Nash and its real world counterparts, maybe even whether or how they are related. These things touch on my questions. Nash’s proof is a non-constructive existence proof. It doesn’t build anything; it just finds something to be true. Daskalakis is a computer scientist and an engineer. He expects to build things. But the problem is attacked with mathematics. His effort spans game theory in mathematics, complexity theory (a branch of mathematics that classifies problems according to how hard they are), and the information sciences. There is an interesting confluence of things here. And it didn’t answer any of the questions I have. But it encouraged me. I also like this quote from a recent Quanta Magazine article about Daskalakis:

The decisions the 37-year-old Daskalakis has made over the course of his career — such as forgoing a lucrative job right out of college and pursuing the hardest problems in his field — have all been in the service of uncovering distant truths. “It all originates from a very deep need to understand something,” he said. “You’re just not going to stop unless you understand; your brain cannot stay still unless you understand.”

Abstractions: What’s happening with them?

We all generally know the meaning of abstraction.  We all have some opinion, for example, about the value of abstract painting.  And I’ve heard from many that mathematics is too abstract to be understood or even interesting.  (But I must admit, it is exactly this about mathematics that keeps me so captivated).  An abstraction is usually thought of as the general idea as opposed to the particular circumstance.  I thought today of bringing a few topics back into focus, all of which I’ve written about before, to highlight something about knowledge – what it is, or how we seem to collect it.   This particular story centers around the idea of entropy.

First, here’s as brief a description of the history of the mathematics of this idea as I can manage at the moment:

In the history of science and mathematics, two kinds of entropy were defined – one in physics and one in information theory. Mathematical physicist Rudolf Clausius first introduced the concept of entropy in 1850, and he defined it as a measure of a system’s thermal (or heat) energy that was not available to do work. It was a fairly specific idea that provided a mathematical way to pin down the variations in physical possibilities. I’ve read that Clausius chose the word entropy because of its Greek ties to the word transformation and the fact that it sounded like energy. The mathematical statement of entropy provided a clear account, for example, of how a gas, confined in a cylinder, would freely expand if released by a valve, but could also be made to push a piston in response to the pressure of something that confined it, as in our cars. The piston event is reversible, while the free expansion is not. In the piston event, however, some amount of heat or energy is always lost to entropy. And so Clausius’ version of the second law of thermodynamics says that spontaneous change, for irreversible processes in isolated systems, always moves in the direction of increasing entropy.
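In symbols, and in the standard textbook form rather than anything particular to the sources here, Clausius’ definition and his version of the second law read

\[
dS = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad \Delta S \ge 0 \ \ \text{(isolated system, spontaneous change)},
\]

where $\delta Q_{\mathrm{rev}}$ is the heat exchanged along a reversible path and $T$ is the absolute temperature.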

In 1948 Claude Shannon initiated the development of what is now known as information theory when he formalized a mathematics of information based on the observation that transmitted messages could be encoded with just two bursts of energy – on and off.  In this light he defined an information entropy, still referred to as Shannon’s entropy, which is understood as the measure of the randomness in a message, or a measure of the absence of information.  Shannon’s formula was based on the probability of symbols (or letters in the alphabet) showing up in the message.
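Shannon’s definition is easy to compute. Here is a minimal sketch in Python of the standard formula, $H = -\sum_i p_i \log_2 p_i$, applied to the symbol frequencies of a message:

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average information per symbol, in bits."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

print(shannon_entropy("aaaa"))  # 0.0 -- perfectly predictable, no information
print(shannon_entropy("abab"))  # 1.0 bit per symbol
print(shannon_entropy("abcd"))  # 2.0 bits per symbol
```

The more evenly the symbols are spread, the higher the entropy, which is exactly the sense in which it measures randomness, or the absence of information.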

In the second half of the 1800s, James Clerk Maxwell developed the statistical mechanical description of entropy in thermodynamics, where macroscopic phenomena (like temperature and volume) were understood in terms of the microscopic behavior of molecules. Soon after, physicist and philosopher Ludwig Boltzmann generalized Maxwell’s statistical understanding and formalized the logarithmic expression of entropy that is grounded in probabilities. In that version entropy is proportional to the logarithm of the number of microscopic ways (hard-to-see ways) that the system could acquire different macroscopic states (the things we see). In other words, we can come to a statistical conclusion about how the behavior of an immense number of molecules, which we don’t see, will affect the events we do see. It is Boltzmann’s logarithmic equation (which appears on his gravestone) that resembles Shannon’s equation, allowing both entropies to be understood, essentially, as probabilities related to the arrangement of things.
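Setting the two equations side by side makes the resemblance plain. In their standard forms, with $k_B$ Boltzmann’s constant, $W$ the number of microstates, and $p_i$ the probability of the $i$-th symbol,

\[
S = k_B \ln W \qquad \longleftrightarrow \qquad H = -\sum_i p_i \log_2 p_i ,
\]

and when all $W$ microstates are equally likely, $p_i = 1/W$ and the Shannon sum reduces to $\log_2 W$ – the same logarithm, up to a constant factor.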

It is certainly true that the mathematics that defined entropy at each stage of its development is an abstraction of the phenomenon. However, reducing both the thermodynamic definition and the information theory definition to probabilities related purely to the arrangement of things is another (and fairly significant) level of abstraction.

You are likely familiar with the notion that entropy always increases, or as it is often understood, that things always tend to disorder. But, unlike what one might expect, it is this ‘arrangement of things’ idea that seems to best explain why eggs don’t un-crack and ice doesn’t un-melt. The number of possible arrangements of atoms in an un-cracked egg is far, far smaller than the number of possible arrangements of atoms in a cracked egg, and so far less likely. Aatish Bhatia does a really nice job of explaining this way of understanding things here.
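A toy version of the counting argument can be run in a few lines. The numbers below are entirely hypothetical; the point is only that spreading the same material over more configurations multiplies the count of arrangements beyond comprehension:

```python
from math import comb

# Hypothetical toy: distribute 100 indistinguishable energy quanta among
# 10 sites (a "confined" state) versus 1,000 sites (a "spread out" state).
# The stars-and-bars formula counts the possible arrangements of each.
confined = comb(100 + 10 - 1, 100)    # about 4 x 10**12 arrangements
spread = comb(100 + 1000 - 1, 100)    # vastly more

print(spread // confined)  # the spread-out state wins by over a hundred orders of magnitude
```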

Next, the relationship between information and thermodynamics has had the attention of physicists since James Clerk Maxwell introduced a hypothetical little creature who seemed to challenge the second law of thermodynamics and has come to be known as Maxwell’s demon. Some discussion of the demon can be found in a post I wrote in 2016. In 2017, a Quanta Magazine article by Philip Ball reviewed the work of physicists, mathematicians, computer scientists, and biologists who explore the computational (or information processing) aspect of entropy as it relates to biology.

Living organisms seem rather like Maxwell’s demon. Whereas a beaker full of reacting chemicals will eventually expend its energy and fall into boring stasis and equilibrium, living systems have collectively been avoiding the lifeless equilibrium state since the origin of life about three and a half billion years ago. They harvest energy from their surroundings to sustain this nonequilibrium state, and they do it with “intention.”

In 1944, physicist Erwin Schrödinger proposed that living systems take energy from their surroundings to maintain non-equilibrium (or to stay organized) by capturing and storing information. He called it “negative entropy.”

Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information.

Now, physicist Jeremy England is considering pulling biology into physics (or at least some aspect of it) with the suggestion that the organization that takes place in living things is just one of the more extreme possibilities of a phenomenon exhibited by all matter. From an essay written by England:

The theoretical research I do with my colleagues tries to comprehend a new aspect of life’s evolution by thinking of it in thermodynamic terms. When we conceive of an organism as just a bunch of molecules, which energy flows into, through and out of, we can use this information to build a probabilistic model of its behaviour. From this perspective, the extraordinary abilities of living things might turn out to be extreme outcomes of a much more widespread process going on all over the place, from turbulent fluids to vibrating crystals – a process by which dynamic, energy-consuming structures become fine-tuned or adapted to their environments. Far from being a freak event, finding something akin to evolving lifeforms might be quite likely in the kind of universe we inhabit – especially if we know how to look for it.

Living things manage not to fall apart as fast as they form because they constantly increase the entropy around them. They do this because their molecular structure lets them absorb energy as work and release it as heat. Under certain conditions, this ability to absorb work lets organisms (and other systems) refine their structure so as to absorb more work, and in the process, release more heat. It all adds up to a positive feedback loop that makes us appear to move forward in time, in accordance with the extended second law. (emphasis added)

Finally (not in any true sense, just for the scope of this post), physicist Chiara Marletto has a theory of life based on a new fundamental theory of physics called Constructor Theory. I wrote a guest blog for Scientific American on Constructor Theory in 2013. In her essay, also published by Aeon, Marletto explains:

In constructor theory, physical laws are formulated only in terms of which tasks are possible (with arbitrarily high accuracy, reliability, and repeatability), and which are impossible, and why – as opposed to what happens, and what does not happen, given dynamical laws and initial conditions. A task is impossible if there is a law of physics that forbids it. Otherwise, it is possible – which means that a constructor for that task – an object that causes the task to occur and retains the ability to cause it again – can be approximated arbitrarily well in reality. Car factories, robots and living cells are all accurate approximations to constructors.

But the constructor itself, the thing that causes a transformation, is abstracted away in constructor theory, leaving only the input/output states.  ‘Information’ is the only thing that remains unchanged in each of these transformations, and this is the focus of constructor theory.  With Constructor Theory, this underlying independence of information involves a more fundamental level of physics than particles, waves and space-time. And the expectation is that this ‘more fundamental level’ may be shared by all physical systems (another generality).

The input/output states of Constructor Theory are expressed as “ordered pairs of states” and are called construction tasks. The idea is no doubt a distant cousin of the ordered pairs of numbers we learned about in high school, along with the one-to-oneness and the compositions taught in pre-calculus! And Constructor Theory is an algebra, a new one certainly, but an algebra nonetheless. This algebra is not designed to systematize current theories, but rather to find their foundation and then open a window onto things that we have not yet seen.
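A toy rendering of the ordered-pair idea, mine and not Marletto’s formalism, with entirely made-up states, might treat a task as a set of input → output pairs and define serial composition whenever one task’s outputs are another’s inputs:

```python
# Hypothetical construction tasks as mappings of input -> output states.
heat = {"cold water": "hot water"}
brew = {"hot water": "tea", "hot milk": "cocoa"}

def compose(first, second):
    # Serial composition: perform `first`, then `second` on its output,
    # keeping only the pairs for which the second task is defined.
    return {x: second[y] for x, y in first.items() if y in second}

print(compose(heat, brew))  # {'cold water': 'tea'}
```

The algebra alluded to above lives in operations like this one – composing tasks and asking which compositions are possible – rather than in dynamical equations.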

According to Marletto:

The early history of evolution is, in constructor-theoretic terms, a lengthy, highly inaccurate, non-purposive construction that eventually produced knowledge-bearing recipes out of elementary things containing none. These elementary things are simple chemicals such as short RNA strands…Thus the constructor theory of life shows explicitly that natural selection does not need to assume the existence of any initial recipe, containing knowledge, to get started.

Marletto has also written on the constructor theory of thermodynamics, in which she argues that constructor theory highlights a relationship between information and the first law of thermodynamics, not just the second.

This story about information, thermodynamics, and life certainly suggests something about the value of abstraction. As a writer, I’m not only interested in the progression of scientific ideas, but also in the power of generalities that seem to produce new vision, as well as amplify the details of what we already see. It seems to me that there is a particular character to the knowledge that is produced when communities of thinkers move through abstractions that bring them from measuring temperature and volume to information-driven theories of a science that could contain both physics and biology. It’s all about relations. I haven’t written this to answer my question about what’s happening here, but mostly to ask it.

Multiple personality disorder – a glimpse into the cosmos?

A recent post on scientificamerican.com got my attention – no surprise given its title, Could Multiple Personality Disorder Explain Life, the Universe, and Everything? It was coauthored by three individuals: computer scientist Bernardo Kastrup, psychotherapist Adam Crabtree, and cognitive scientist Edward F. Kelly. The article’s major source is a paper written by Kastrup, published this year in the Journal of Consciousness Studies with the title The Universe in Consciousness. I’ll try to outline here the gist of the argument.

It begins with a very convincing narrative substantiating the presence of multiple personalities in individuals who experience this.  One of the most remarkable was the case (reported in Germany in 2015) of a woman who had dissociated personalities, some of whom were blind.

The woman exhibited a variety of dissociated personalities (“alters”), some of which claimed to be blind. Using EEGs, the doctors were able to ascertain that the brain activity normally associated with sight wasn’t present while a blind alter was in control of the woman’s body, even though her eyes were open. Remarkably, when a sighted alter assumed control, the usual brain activity returned.

This was a compelling demonstration of the literally blinding power of extreme forms of dissociation, a condition in which the psyche gives rise to multiple, operationally separate centers of consciousness, each with its own private inner life.  (emphasis added)

The history of cases of dissociated personalities goes back to the late 1800s and the authors tell us that the literature provides significant evidence that “the human psyche is constantly active in producing personal units of perception” – what we would call selves.  While it continues to be unclear how this happens, they argue that the development of selves, or personal units of perception, should play a role in how we understand “what is and is not possible in nature.”

The case they make requires an appeal to alternative philosophical perspectives, specifically physicalism, constitutive panpsychism, and cosmopsychism. Proponents of physicalism believe that we should be able to understand mental states through a thorough analysis of brain processes. The ongoing problem with this expectation is that there is still no way to connect feelings to different arrangements of physical stuff. Constitutive panpsychism is the idea that what we call experience is inherent in every physical thing, even fundamental particles. Human consciousness would somehow be built “by a combination of the subjective inner lives of the countless physical particles that make up our nervous system.” But, the authors argue, the articulation of this perspective does not provide a way to understand how lower level points of view (atoms and molecules) would combine to produce higher level points of view (human experience). The alternative would be that consciousness is fundamental in nature but not fragmented. This is cosmopsychism which, the authors say, is essentially classic idealism, where the objects of our experience depend on something more fundamental than particles, and that fundamental thing is more like mind or thought than matter.

The difficulty with this view is understanding how various private conscious centers (like you and everyone around you) emerge from a ‘universal consciousness.’   Keying on this question is what makes the presence of multiple personalities in one individual a useful indicator of how to think about this larger question.

Kastrup’s paper, on which this very readable Scientific American article is based, is steeped in the language of philosophy. He works to unpack the mainstream physicalist perspective and why it doesn’t work, and then he examines a number of panpsychist views and their weaknesses. For his own argument, he relies most heavily on a proposal from philosopher Itay Shani.

Shani does still postulate a duality in cosmic consciousness to account for the clear qualitative differences between the outer world we, as relative subjects, perceive and measure and the inner world of our thoughts and feelings. He calls it the ‘lateral duality principle’ (Shani 2015, p412) and describes it thus:

[Cosmic consciousness] exemplifies a dual nature: it has a concealed (or enfolded, or implicit) side to its being, as well as a revealed (or unfolded, or explicit) side; the former is an intrinsic dynamic domain of creative activity, while the latter is identified as the outer, observable expression of that activity. (ibid., original emphasis)

Kastrup’s thinking is in line with Shani’s, but he goes to great lengths to examine the weaknesses in Shani’s view. For the remainder of the paper, Kastrup focuses on addressing the following questions: how do fleeting experiential qualities arise out of “one enduring cosmic consciousness,” what causes individual experiences to be private, how can the physical world we measure be explained in terms of a concealed, thoughtful order, why does brain function correlate so well with our awareness if it doesn’t generate it, and finally, why are we all imagining the same world outside the control of our personal volition.

Kastrup’s analysis of these questions is thorough and precise. He uses the phenomenon of dissociated personalities (which he calls alters) to address the privacy of individual experiences (since the alters within one individual are nonetheless private from each other), and he uses the functional brain scans that distinguish actual alters from ones that are just acted out to imagine how each of us could be the result of a “cosmic-level dissociative process.”

These are difficult ideas to accept given what we have come to expect from the sciences. But I will point out that aspects of these proposals run parallel to ones proposed by contemporary neuroscientists and physicists. The intimate connection between physics and mathematics always raises questions about the relatedness of mind and matter. For 17th century mathematician and philosopher Gottfried Wilhelm Leibniz, the fundamental substance of the universe could not be material. It had to be something indivisible, something resembling a mathematical point more than a speck of dust. The material in our experience is then somehow a consequence of the relations among these non-material substances that actually resemble ‘mind’ more than ‘matter.’

For physicist and author David Deutsch, information and knowledge are the fundamentals of physical life. In his book The Beginning of Infinity, Deutsch compares and contrasts human brains and DNA molecules. “Among other things,” he says, they are each “general-purpose information-storage media….” And so Deutsch sees biological information and explanatory information each as instances of knowledge which, he says, “is very unlikely to come into existence other than through the error-correcting process of evolution or thought.”

The Integrated Information Theory of Consciousness proposed by neuroscientist Giulio Tononi, and defended by neuroscientist Christof Koch, suggests that some degree of consciousness is an intrinsic fundamental property of every physical system. Also, cosmologist and author Max Tegmark is of the opinion that if we want to understand all of nature we have to consider all of it together. For Tegmark there are three pieces to every puzzle – the thing being observed; the environment of the thing being observed (where there may be some interaction); and the observer. He identifies three realities in his book Our Mathematical Universe – external reality, consensus reality, and internal reality. External reality is the physical world, which we believe would exist even if we didn’t (and is described in physics mathematically). Consensus reality is the shared description of the physical world that self-aware observers agree on (and it includes classical physics). Internal reality is the way you subjectively perceive the external reality. As with many ideas in physics, the universe is understood in terms of information, and Tegmark has said that he thinks consciousness is the way information ‘feels’ when processed in complex ways.

It seems to me that a similar insight into what we have been overlooking, about ourselves and our world, is being approached from several directions and in languages specific to individual disciplines. The ones proposed by physicists and neuroscientists are held together with mathematics. But they all bring to mind again something I thought when I watched my mother’s mind change with the development of a tumor in the right frontal lobe of her brain. Among the many things I questioned was how the cells in her body could produce her experience if something like consciousness or thought did not already exist in the world that created her.

Intelligence, artificial and otherwise

Earlier this month, Nature reported on artificial intelligence (AI) research in which deep learning networks (an AI strategy) spontaneously generated patterns of computations that bore a striking resemblance to the activity generated by our own grey matter – namely by the neurons called grid cells in the mammalian brain. The patterned firing of grid cells enables mammals to create cognitive maps of their environment. The artificial network that unexpectedly produced something similar was developed by neuroscientists at University College London together with AI researchers at DeepMind, the London-based Google company. A computer-simulated rat was trained to track its movement in a virtual environment.

The Nature article by Alison Abbott tells us that the grid-cell-like coding was so good that the virtual rat was even able to learn short-cuts in its virtual world. And here’s an interesting response to the work from neuroscientist Edvard Moser, a co-discoverer of biological grid cells:

“This paper came out of the blue, like a shot, and it’s very exciting,” says neuroscientist Edvard Moser at the Kavli Institute for Systems Neuroscience in Trondheim, Norway. Moser shared the 2014 Nobel Prize in Physiology or Medicine for his co-discovery of grid cells and the brain’s other navigation-related neurons, including place cells and head-direction cells, which are found in and around the hippocampus region.

“It is striking that the computer model, coming from a totally different perspective, ended up with the grid pattern we know from biology,” says Moser. The work is a welcome confirmation that the mammalian brain has developed an optimal way of arranging at least this type of spatial code, he adds.

There is something provocative about measuring the brain’s version of grid cell navigation against this emergent but simulated grid cell action.

In Nature’s News and Views, Francesco Savelli and James J. Knierim tell us a bit more about the study. First, for the sake of clarity, what researchers call deep learning is a kind of machine learning characterized by layers of computations, structured in such a way that the output from one computation becomes the input of another. Inputs and outputs are defined by a transformation of the data, or information, being received by each layer. The data is translated into “compact representations” that promote the success of the task at hand – like translating pixel data into a face that can be recognized. A system like this can learn to process inputs so as to achieve particular outputs. The extent to which each of the computations, in each of the layers, affects the final outcome is determined by how they are weighted. With optimization algorithms, these weights are adjusted to optimize results. Deep learning networks have been successful with computer vision, speech recognition, and games, among other things. But navigating oneself through one’s environment is a fairly complex task.
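A minimal sketch of the layered idea, in ordinary NumPy and with hypothetical sizes (nothing from the paper’s actual architecture): each layer applies a weighted transformation to the previous layer’s output, and training would adjust the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    # One layer: a weighted sum of inputs followed by a nonlinearity (ReLU).
    return np.maximum(0.0, weights @ x + biases)

# Hypothetical sizes: 8 inputs -> 16 hidden units -> 4 outputs.
w1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
w2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

x = rng.normal(size=8)           # stand-in for input data (e.g., pixels)
hidden = layer(x, w1, b1)        # the output of one layer...
output = layer(hidden, w2, b2)   # ...becomes the input of the next
```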

The research that led to Moser’s Nobel Prize in 2014 was the discovery of a kind of family of neurons that produces the cognitive maps we develop of our environments. There are place cells, neurons that fire when an organism is in a particular position in an environment, often one with landmarks. There are head-direction neurons that signal where the animal seems to be headed. There are also neurons that respond to the presence of an edge to the environment. And, most relevant here, there are grid cells. Grid cells fire when an animal is at any of a set of points that define a hexagonal grid pattern across its environment. The neurons’ firing maps to points on the ground. They contribute to the animal’s sense of position, and correspond to the direction and distance covered by some number of steps taken.

Banino and colleagues wanted to create a mechanism for self-location in a deep-learning network. Such a mechanism is referred to as path integration.

Because path integration involves remembering the output from the previous processing step and using it as input for the next, the authors used a network involving feedback loops. They trained the network using simulations of pathways taken by foraging rodents. The system received information about the simulated rodent’s linear and angular velocity, and about the simulated activity of place and head-direction cells…
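The feedback-loop idea is just recurrence: at each time step the network’s internal state is updated from its previous state together with the new velocity signal. A bare-bones sketch, with hypothetical weights and sizes rather than the paper’s actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
W_state = rng.normal(scale=0.1, size=(32, 32))  # hypothetical recurrent weights
W_input = rng.normal(scale=0.1, size=(32, 3))   # e.g., linear + angular velocity

trajectory = rng.normal(size=(100, 3))  # stand-in velocity signals, one per step
state = np.zeros(32)
for velocity in trajectory:
    # The previous state feeds back in: this is the path-integration loop.
    state = np.tanh(W_state @ state + W_input @ velocity)
```

The study’s network was trained on simulated foraging trajectories along these lines; what emerged inside it is described next.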

And this is what happened:

The authors found that patterns of activity resembling grid cells spontaneously emerged in computational units in an intermediate layer of the network during training, even though nothing in the network or the training protocol explicitly imposed this type of pattern. The emergence of grid-like units is an impressive example of deep learning doing what it does best: inventing an original, often unpredicted internal representation to help solve a task.

These grid-like units allowed the network to keep track of position, but whether they would function in the network’s navigation to a goal was still a question. They addressed this question by adding a reinforcement-learning component. The network learned to assign values to particular actions at particular locations, and higher values were assigned to actions that brought the simulated animal closer to a goal.

The grid-like representation markedly improved the ability of the network to solve goal-directed tasks, compared to control simulations in which the start and goal locations were encoded instead by place and head-direction cells.

Unlike the navigation systems developed by the brain, in this artificial network, the place cell layer is not changed during the training that affects grid cells. But the way that grid and place cells influence each other in the brain is not well understood. Further development of the artificial network might help unravel their interaction.

From a broader perspective, it is interesting that the network, starting from very general computational assumptions that do not take into account specific biological mechanisms, found a solution to path integration that seems similar to the brain’s. That the network converged on such a solution is compelling evidence that there is something special about grid cells’ activity patterns that supports path integration. The black-box character of deep learning systems, however, means that it might be hard to determine what that something is.

There is clear pragmatic promise in this research, involving both AI and its many applications, as well as cognitive neuroscience. But I find it striking for a different reason. I find it striking because it seems to provide something new, and provocative, about mathematics’ ubiquitous presence. When I first learned about the action of grid cells I was impressed with the way this fully biological, unconscious, cognitive mechanism resembled the abstract coordinate systems of mathematics. But here there is an interesting reversal. Here we see the biological one emerging, without our direction, from a system that owes its existence entirely to mathematics. It puts mathematics somewhere in between everything, in a way that we haven’t quite grasped. It’s intelligence we can’t locate.

The fluency of geometry

My thoughts started jumping around today, trying to land on what it was that I found so fascinating about a recent article in Quanta Magazine.  This is one of the statements that got me going:

…Numbers emerging from one kind of geometric world matched exactly with very different kinds of numbers from a very different kind of geometric world.

To physicists, the correspondence was interesting. To mathematicians, it was preposterous.

It was in the early nineties that the surprise first occurred, like an alert that there is a mirror symmetry between two different mathematical structures, and mathematicians have been investigating it for almost three decades now. The Quanta Magazine article reports that they seem to be close to being able to explain the source of the mirroring. Kevin Hartnett, author of the Quanta article, characterizes their effort as one that could produce “a form of geometric DNA – a shared code that explains how two radically different geometric worlds could possibly hold traits in common.” (I like this biologically-themed analogy.)

The whole mirroring phenomenon rests largely on the development of string theory in physics, where theorists found that the strings they hoped were the fundamental building blocks of the universe required six dimensions more than are contained in Einstein’s 4-dimensional spacetime. String theorists answered the demand by finding two ways to account for the missing six dimensions – one from symplectic geometry and the other from complex geometry. These are the two distinct arrangements of geometric ideas that mathematicians are now examining.

The nature of a symplectic geometric space is grounded in the idea of phase space, where each point actually represents the state of a system at any given time.  A phase space is defined by patterns in data, not by the spatial arrangement of objects.  It is a multidimensional space in which each axis corresponds to a coordinate that specifies an aspect of the physical system.  When all the coordinates are represented, a point in the space corresponds to a state of the system.  The nature of complex geometry, on the other hand, has its roots in algebraic geometry, where the objects of study are the graphed solutions to polynomial equations.  Here the ordered pairs represent exactly positions on a grid (like those x,y pairs we learn about in high school), or complex numbers in a complex space, where those numbers are solutions to equations.  The beauty of this arrangement is that the properties possessed by the geometric representation of these solutions (or the objects they produce) provide us with more about the equations they represent than we would have without these representations.  But wherever they are, these solutions are rigid geometric objects.  The phase space is more flexible. Hartnett tells us that:
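The standard first example of a phase space makes this concrete: a single particle moving along a line has a 2-dimensional phase space with coordinates $(q, p)$, position and momentum, and the symplectic structure is the area form

\[
\omega = dp \wedge dq ,
\]

so a single point $(q, p)$ encodes the entire state of the system, and $n$ particles moving in three dimensions require a $6n$-dimensional phase space.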

In the late 1980s, string theorists came up with two ways to describe the missing six dimensions: one derived from symplectic geometry, the other from complex geometry. They demonstrated that either type of space was consistent with the four-dimensional world they were trying to explain. Such a pairing is called a duality: Either one works, and there’s no test you could use to distinguish between them.

Robert Dijkgraaf, Director and Leon Levy Professor at the Institute for Advanced Study, tells an interesting story. Around 1990, a group of string theorists asked geometers to calculate the number of curves of a particular degree that could be wrapped around the kind of space, or manifold, that is heavily used in string theory (a Calabi-Yau space). A result from the nineteenth century established that the number of lines, or degree-one curves, is equal to 2,875. The number of degree-two curves is 609,250; this was computed around 1980. The number of curves of degree three had not been computed, and this was the one the geometers were asked to compute.

The geometers devised a complicated computer program and came back with an answer. But the string theorists suspected it was erroneous, which suggested a mistake in the code. Upon checking, the geometers confirmed there was one. But how did the physicists know?

String theorists had already been working to translate this geometric problem into a physical one. In doing so, they had developed a way to calculate the number of curves of any degree all at once. It’s hard to overestimate the shock of this result in mathematical circles.

The duality appeared to run deep, and mathematicians and physicists alike began to try to understand the underlying feature that would account for the mirroring phenomenon. A proposed strategy is to deconstruct a shape in the symplectic world in such a way that it can be reconstructed as a complex shape. The deconstruction can make a multidimensional symplectic manifold easier to visualize, and it can also reduce one of the mirror spaces into building blocks that can be used to construct the other. This would likely lead to a better understanding of what connects them.

Again from Dijkgraaf:

Mathematics has the wonderful ability to connect different worlds. The most overlooked symbol in any equation is the humble equal sign. Mirror symmetry is a perfect example of the power of the equal sign. It is capable of connecting two different mathematical worlds. One is the realm of symplectic geometry, the branch of mathematics that underlies much of mechanics. On the other side is the realm of algebraic geometry, the world of complex numbers. Quantum physics allows ideas to flow freely from one field to the other and provides an unexpected “grand unification” of these two mathematical disciplines.

This is a remarkable story, and there are many in mathematics. I’ve always been captivated a bit by how the spatial ideas of this discipline, once charged with measuring the earth, became the abstract ideals described by Euclid, which were then stretched to accommodate spaces with non-Euclidean shapes, including our spacetime, and were further developed to create spaces defined by patterned data of any kind – the symplectic kind. In this story, mathematicians, like experimentalists, become charged with the need to find a reason for an unexpected observation. But it is an observation of the fully abstract world that mathematics built. What are these abstract worlds made of? How do they become more than we can see? I’m well aware of the lack of precision in these questions, but there is value in stopping to consider them. To what extent are these abstract spaces objective? Where are these investigations happening? There is no doubt that we have yet to understand what we realize when we find mathematics.

Proofs, the mind, and mathematics

A recent article in Quanta Magazine anticipates the publication of the 6th edition of Proofs from The Book, collected by Martin Aigner and Günter Ziegler. The original volume was inspired by the well-known and prolific mathematician Paul Erdős, who traveled the world, participating in countless collaborative efforts, and who would say of proofs that he judged to be of sublime beauty, “This one is from The Book.” This Book was imagined as the heavenly collection of mathematics’ perfect proofs. Aigner suggested the possibility of actually making The Book in 1994, and, along with fellow mathematician Günter Ziegler and with contributions from Erdős himself, published the first volume in 1998. Unfortunately, Erdős died in 1996, at the age of 83, and never saw the volume in print. The book received the 2018 Steele Prize for Mathematical Exposition.

One of the nice things that the article points out is that there are theorems that have a number of different proofs, each one telling you something different about the theorem or the structures involved in the proof of the theorem.

An example comes to mind — which is not in our book but is very fundamental — Steinitz’s theorem for polyhedra. This says that if you have a planar graph (a network of vertices and edges in the plane) that stays connected if you remove one or two vertices, then there is a convex polyhedron that has exactly the same connectivity pattern. This is a theorem that has three entirely different types of proof — the “Steinitz-type” proof, the “rubber band” proof and the “circle packing” proof. And each of these three has variations.

Any of the Steinitz-type proofs will tell you not only that there is a polyhedron but also that there’s a polyhedron with integers for the coordinates of the vertices. And the circle packing proof tells you that there’s a polyhedron that has all its edges tangent to a sphere. You don’t get that from the Steinitz-type proof, or the other way around — the circle packing proof will not prove that you can do it with integer coordinates. So, having several proofs leads you to several ways to understand the situation beyond the original basic theorem.

This kind of discussion highlights how mathematical ideas can be multi-aspected, the very thing that makes a mathematical idea powerful and difficult to categorize in our experience. But in the lower right margin of the article were links to related articles, and it was here that I found Michael Atiyah’s Imaginative State of Mind. This piece was written about a year ago, when Michael Atiyah hosted a conference at the Royal Society of Edinburgh on The Science of Beauty. There is a video of his introductory remarks on YouTube that is worth a listen. The article was built around Atiyah’s responses to some questions that the authors were able to ask him on the occasion of the conference.

Roughly speaking, he has spent the first half of his career connecting mathematics to mathematics, and the second half connecting mathematics to physics….
….Now, at age 86, Atiyah is hardly lowering the bar. He’s still tackling the big questions, still trying to orchestrate a union between the quantum and the gravitational forces. On this front, the ideas are arriving fast and furious, but as Atiyah himself describes, they are as yet intuitive, imaginative, vague and clumsy commodities.

I felt encouraged by the refreshingly sensory ways Atiyah characterized his experience as a mathematician, as in this response to being asked whether he has always had mathematical dreams:

The crazy part of mathematics is when an idea appears in your head. Usually when you’re asleep, because that’s when you have the fewest inhibitions. The idea floats in from heaven knows where. It floats around in the sky; you look at it, and admire its colors. It’s just there. And then at some stage, when you try to freeze it, put it into a solid frame, or make it face reality, then it vanishes, it’s gone. But it’s been replaced by a structure, capturing certain aspects, but it’s a clumsy interpretation.

And when asked about the two works for which he is best known (the index theorem and K-theory), he offered this very visual way of describing K-theory:

The index theorem and K-theory are actually two sides of the same coin. They started out different, but after a while they became so fused together that you can’t disentangle them. They are both related to physics, but in different ways.

K-theory is the study of flat space, and of flat space moving around. For example, let’s take a sphere, the Earth, and let’s take a big book and put it on the Earth and move it around. That’s a flat piece of geometry moving around on a curved piece of geometry. K-theory studies all aspects of that situation — the topology and the geometry. It has its roots in our navigation of the Earth.

The maps we used to explore the Earth can also be used to explore both the large-scale universe, going out into space with rockets, and the small-scale universe, studying atoms and molecules. What I’m doing now is trying to unify all that, and K-theory is the natural way to do it. We’ve been doing this kind of mapping for hundreds of years, and we’ll probably be doing it for thousands more.
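
For readers who want the formal object behind the metaphor, the “flat pieces of geometry” moving over a curved space are, roughly speaking, vector bundles; in one standard formulation (my gloss, not Atiyah’s wording), topological K-theory collects them into a group:

\[
K(X) \;=\; \text{the Grothendieck group of isomorphism classes of vector bundles over } X .
\]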

I found a nice description of how the index theorem can connect the curvature of a space to its topology (or the number of holes it has).
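
To give the flavor of that connection, here is its classical ancestor, the Gauss–Bonnet theorem, which the index theorem vastly generalizes: for a closed orientable surface $M$ of genus $g$ (the number of holes), the total curvature is fixed by the topology alone,

\[
\int_M K \, dA \;=\; 2\pi\,\chi(M) \;=\; 2\pi(2 - 2g),
\]

where $K$ is the Gaussian curvature and $\chi(M)$ is the Euler characteristic. However the surface is bent or stretched, the integral cannot change unless the number of holes does.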

One of the things Atiyah is committed to at the moment is reversing the mistake of ignoring the small effect of gravity on an electron or proton. He says he’s going back to Einstein and Dirac and looking at them again, and he thinks he sees things that people have missed. “If I’m wrong,” he says, “I made a mistake. But I don’t think so.”

At the end of the introductory remarks he made at The Science of Beauty conference, he said that he found himself closer to the mystical views of Pythagoras than to those who completely rejected mysticism. “A little bit of mysticism is important in all forms of life.”

When asked if he thought a computer could be made to recognize beauty, his response led to his characterizing the mind as a parallel universe. More than just logic, the mind has aspects that recognize states. These are not verbal or pictorial states, but conceptual states. And beauty lives somewhere in the mind. This is the kind of insight that doing mathematics can produce. And it will, I believe, lead us to completely new ideas about who we are and what it is that our minds may be producing.

A last thought on mathematics:

People think mathematics begins when you write down a theorem followed by a proof. That’s not the beginning, that’s the end. For me the creative place in mathematics comes before you start to put things down on paper, before you try to write a formula. You picture various things, you turn them over in your mind. You’re trying to create, just as a musician is trying to create music, or a poet. There are no rules laid down. You have to do it your own way. But at the end, just as a composer has to put it down on paper, you have to write things down.

Mathematical hybrids and the like

My attention was just recently brought to the work of philosopher and poet Emily Grosholz. It’s rare to find an individual so steeped in the ways of both poetry and mathematics, and in the desire to explore how and what they express about us. What I would like to consider here, in this particular post, is really a detail of the extensive thought and research that Grosholz brings to the discussion of how mathematics grows. But I think it’s a powerful idea, one that can have a good deal to say about how we work, and about how we, as a species, produce the bountiful and variegated products of human culture.

Grosholz is the author of many books, including works on the philosophy of mathematics as well as works of poetry. Her latest is Starry Reckoning: Reference and Analysis in Mathematics and Cosmology. What follows is based on a piece she contributed to a book she edited with Herbert Breger, The Growth of Mathematical Knowledge; her piece is titled The Partial Unification of Domains, Hybrids, and the Growth of Mathematical Knowledge. Here Grosholz argues that, contrary to what has often been proposed, different branches of mathematics do not reduce to other branches. Philosophers of mathematics have discussed the possibility that geometry can be reduced to arithmetic, arithmetic to predicate logic, and arithmetic and geometry to set theory. This is understood in much the same way that one might claim that biology can be reduced to chemistry, and chemistry to physics. The vocabulary of the reduced theory is redefined in terms of the reducing theory. In the sciences, the reducing theory has been thought to play an explanatory role, suggesting an inherent unity among the various scientific disciplines. But in mathematics the so-called reducing theory is used not so much as an explanation of the reduced ideas as a foundation for them. And mathematicians have long had difficulty with foundational questions. Grosholz, on the other hand, proposes that mathematics is a collection of rationally related but autonomous domains, and then highlights the potent role of what she calls mathematical hybrids.

She explains that in Greek mathematics the autonomy of domains is clear. Geometry is about points, lines, planes, and figures, and geometric problems involve relations between parts of the whole of spatial figures. Arithmetic is about numbers, and problems in arithmetic involve monotonic, discrete succession. The vocabulary of logic is one of terms, propositions, and arguments, and problems in logic involve ideas of inclusion, exclusion, consistency, and inconsistency. While these separate domains may seem to resist assimilation, 17th century mathematics introduced some unifications. Among these are Descartes’ application of algebraic techniques to geometric constructions, and Leibniz’s application of combinatorics to an analysis of curves. Grosholz spends some time on each of these. She points out that Leibniz was fascinated with formal languages and number theory, and that he believed the art of combinations was central to the art of discovery. She argues that Leibniz’s investigation of algebraic forms in the calculus is grounded in “an imperfect but suggestive analogy between numbers and figures.” The infinite summing of infinitesimal differences that becomes the integral emerges from his ability to bridge geometric ideas about a curve (like tangent, arc length, and area) with algebraic equations; and through the notion of an infinite-sided polygon approximating the curve, patterns of integers were also brought into the relationship. Here the mathematical hybrid emerges: an abstract structure that rationally relates different domains in the service of problem solving. On a deeper level, objects in each domain must actually exhibit features of both domains, despite the instability created by their differences. But, Grosholz argues, this instability does not mean that hybrids are defective. They are held together by the clarity of the domains from which they emerge, and by the abstract structures that link them. “Logical gaps are to be found at the heart of many hybrids,” Grosholz explains, but imaginative analogies inspire the kind of revision and invention that promotes the growth of mathematical knowledge.
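
A minimal sketch of that analogy in modern notation (my gloss, not Grosholz’s): sample a quantity $f$ at the vertices of a polygon approximating the curve, at points $a = x_0 < x_1 < \dots < x_n = b$. The sum of the successive differences telescopes,

\[
\sum_{i=0}^{n-1} \bigl( f(x_{i+1}) - f(x_i) \bigr) \;=\; f(b) - f(a),
\]

and Leibniz’s passage to infinitely many infinitesimal differences turns this integer-indexed pattern into $\int_a^b df = f(b) - f(a)$: the integral as an infinite sum of differences.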

I was always impressed by the fact that the intuitive leaps Leibniz took, while prompting subsequent generations to feel the need to bring acceptable rigor to his notions, were nonetheless substantiated. Grosholz lends some important detail to the picture Richard Courant paints of the 17th century pioneers of mathematics in his classic text, What is Mathematics?

In a veritable orgy of intuitive guesswork, of cogent reasoning interwoven with nonsensical mysticism, with a blind confidence in the superhuman power of formal procedure, they conquered a mathematical world of immense riches.

This talk of hybrids reminded me of the interdisciplinarity that Virginia Chaitin writes about.  I wrote this in an earlier post about one of her papers:

What she proposes is not the kind of interdisciplinary work that we’re accustomed to, where the results of different research efforts are shared or where studies are designed with more than one kind of question in mind. The kind of interdisciplinary work that Chaitin is describing, involves adopting a new conceptual framework, borrowing the very way that understanding is defined within a particular discipline, as well as the way it is explored and the way it is expressed. The results, as she says, are the “migrations of entire conceptual neighborhoods that create a new vocabulary.”

In her own words:

…an epistemically fertile interdisciplinary area of study is one in which the original frameworks, research methods and epistemic goals of individual disciplines are combined and recreated yielding novel and unexpected prospects for knowledge and understanding. This is where interdisciplinary research really proves its worth.

Grosholz’s identification of the hybrid is an important insight, and I would argue that it has implications beyond mathematics.  It may be that because the objects of mathematics are so clean, or unambiguous, the value of the hybrid is more easily observed.   But my hunch is that productive analogies likely belong to the stuff of life itself.