
Autopoiesis, free energy, and mathematics

I have long been interested in the notion of autopoiesis introduced by Humberto Maturana and Francisco Varela in 1972. In short, autopoiesis is a model of living systems that sees every living system (from single cells to multicellular organisms) as an individual unity whose living is the creation of itself. Through the interaction of their components, living systems continuously regenerate and realize the processes that produce them. They exist in a space determined by their structure. In this light, cognition comes to be defined as the action or behavior that accomplishes this continual production of the system itself.

From my perspective, the notion of structural coupling, which developed out of this framework, has the potential to contribute something important to a philosophy of mathematics. Two or more unities are structurally coupled when they enter into a relatedness that accomplishes their autopoiesis by virtue of 'a history of recurrent interactions' that leads to their 'structural congruence.' It is also true that every autopoietic system is closed, meaning that it lives only with respect to itself. Whether interactions happen among the internal components of a system, or with the medium in which the system exists, the system is only involved in its own continuous regeneration. The view of cognition proposed by Maturana requires that the nervous system is just such a closed, autopoietic system, one that also functions as a component of the organism that contains it.

Mathematician Yehuda Rav used these ideas to propose a philosophy of mathematics (which I referenced in a 2012 post). In an essay with the title Philosophical Problems of Mathematics in the Light of Evolutionary Epistemology, Rav writes:

Thus, Maturana (1980, p. 13) writes: “Living systems are cognitive systems, and living as a process is a process of cognition”. What I wish to stress here is that there is a continuum of cognitive mechanisms, from molecular cognition to cognitive acts of organisms, and that some of these fittings have become genetically fixed and are transmitted from generation to generation. Cognition is not a passive act on the part of an organism, but a dynamic process realized in and through action.

When we form a representation for possible action, the nervous system apparently treats this representation as if it were a sensory input, hence processes it by the same logico-operational schemes as when dealing with an environmental situation. From a different perspective, Maturana and Varela (1980, p. 131) express it this way: “all states of the nervous system are internal states, and the nervous system cannot make a distinction in its process of transformations between its internally and externally generated changes.”

Thus, the logical schemes in hypothetical representations are the same as the logical schemes in coordination of actions, schemes which have been tested through eons of evolution and which by now are genetically fixed.

As it is a fundamental property of the nervous system to function through recursive loops, any hypothetical representation which we form is dealt with by the same ‘logic’ of coordination as in dealing with real life situations. Starting from the elementary logico-mathematical schemes, a hierarchy is established. Under the impetus of socio-cultural factors, new mathematical concepts are progressively introduced, and each new layer fuses with the previous layers. In structuring new layers, the same cognitive mechanisms operate with respect to the previous layers as they operate with respect to an environmental input. … The sense of reality which one experiences in dealing with mathematical concepts stems in part from the fact that in all our hypothetical reasonings, the object of our reasoning is treated by the nervous system by means of cognitive mechanisms which have evolved through interactions with external reality.

Mathematics is a singularly rich cognition pool of mankind from which schemes can be drawn for formulating theories which deal with phenomena which lie outside the range of daily experience, and hence for which ordinary language is inadequate.

Rav is imagining the development of mathematics as a feature of human cognition. But the perspective proposed by Maturana includes a theory of language. For Maturana, language is not a thing, and the essence of what we call language is not in the words or the grammar. Language happens as we live in the units that our coupling defines – through living systems, interlocked by structural congruences, that build unities. We are languaging beings the way we are breathing beings.

My experience with mathematics has suggested to me that, like words and grammar, the symbolic representation of mathematics is secondary to what mathematics is. Mathematics also seems to happen. And Maturana’s emphasis on autopoiesis and structural coupling has suggested to me that mathematics, like language, happens through the recursive coordination of behaviors. But perhaps unlike language, the relational dynamics that bring it about are somehow fed by the more fundamental structures in the physical world (both living and non-living), to which we are coupled, rather than by the features of the day-to-day experience that we share.

Conceptually, the view of biology proposed by Maturana is significantly different from mainstream thinking in the biological sciences. One of the most important differences is the way living systems are each bounded by their individual autopoietic processes and, at the same time, nested within each other, infinitely extending living possibilities. In my opinion, this particular aspect of their thinking is the most promising, the one with the greatest potential to produce something new.

A recent article in Wired about the work of Karl Friston suggested to me that I might be right. Friston, a neuroscientist who has made important contributions to neuroimaging technology, is the author of an idea called the free energy principle. Free energy is the difference between the states a living system expects to be in, and the states that its sensors determine it to be in. Another way of saying it is that when free energy is minimized, surprise is minimized. For Friston, a biological system (Maturana’s unity) that resists disorder and dissolution (is autopoietic) will adhere to the free energy principle – “whether it’s a protozoan or a pro basketball team.”
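In symbols (a standard formulation from the free energy literature, not spelled out in the Wired article), free energy is an upper bound on surprise:

$$F = D_{\mathrm{KL}}\big[\,q(\psi)\,\|\,p(\psi \mid s)\,\big] \;-\; \ln p(s) \;\;\geq\;\; -\ln p(s)$$

Here $s$ stands for sensory states, $\psi$ for the hidden states of the world, $q$ for the organism’s internal model of those hidden states, and $-\ln p(s)$ for surprise. Because the divergence term is never negative, driving $F$ down can only push surprise down with it.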

Friston’s unities are separated by what are called Markov blankets.

Markov is the eponym of a concept called a Markov blanket, which in machine learning is essentially a shield that separates one set of variables from others in a layered, hierarchical system. The psychologist Christopher Frith—who has an h-index on par with Friston’s—once described a Markov blanket as “a cognitive version of a cell membrane, shielding states inside the blanket from states outside.”

In Friston’s mind, the universe is made up of Markov blankets inside of Markov blankets. Each of us has a Markov blanket that keeps us apart from what is not us. And within us are blankets separating organs, which contain blankets separating cells, which contain blankets separating their organelles. The blankets define how biological things exist over time and behave distinctly from one another. Without them, we’re just hot gas dissipating into the ether.
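In the machine-learning sense that the Frith quote refers to, the blanket of a node $x$ in a network of variables is its parents, its children, and its children’s other parents; conditioned on that set $b(x)$, the node is independent of every remaining variable $r$:

$$p\big(x \mid b(x), r\big) = p\big(x \mid b(x)\big)$$

Everything the node needs to know about the rest of the network passes through its blanket, which is what makes the cell-membrane analogy apt.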

The free energy principle is mathematical, and grounded in physics, Bayesian statistics, and biology. It involves action, or the living system’s response to surprise, in addition to the system’s predictive abilities. This is one reason the theory has far-reaching potential for application. The audience that the free energy principle attracts is consistently expanding.

Journeying in and out

When recently reminded of the images in Catholic texts and prayers, I considered, again, my hunch that mathematics could somehow help connect unrelated aspects of our experience, in particular counterintuitive religious images and familiar sensory experience. I am not suggesting that mathematics would explain these images, but more that it could be used to encourage, perhaps even contribute to, their exploration. This possibility would rely on a refreshed understanding of what mathematics is. There are numerous mathematical ideas or objects – necessary, productive and useful ideas – that do not correspond to anything familiar to the senses. Some of the most accessible are things like the infinite divisibility of the line, the point at infinity, bounded infinities, or the simple fact that the open interval from 0 to 1 is equivalent to the whole number line. The infinite divisibility of the line relies, to a large extent, on the fact that there are no spaces between ‘individual points.’ A similar construction happens in complex analysis, where one can consider layered complex planes with no space between them. My hope is that these mathematical possibilities become more widely known and considered.
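To make one of those examples concrete: the tangent function gives an explicit one-to-one correspondence between the open interval and the whole number line,

$$f(x) = \tan\!\left(\pi\left(x - \tfrac{1}{2}\right)\right), \qquad 0 < x < 1,$$

a map that is continuous in both directions, so the ‘small’ interval and the ‘infinite’ line are, point for point, the same size.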

Today I looked back at a post that I wrote in 2010 with the title, Archetypes, Image Schemas, Numbers and the Season. The subject of the post is a chapter from the book, Recasting Reality: Wolfgang Pauli’s Philosophical Ideas and Contemporary Science. The chapter was written by cognitive scientist Raphael Nuñez, who uses Pauli’s collaboration with Jung to address Pauli’s philosophy of mathematics. Jung understood ‘number’ in terms of archetypes, primitive mental images that are part of our collective unconscious. But Nuñez seems most interested in addressing Platonism. Pauli’s interest in Jung doesn’t address Platonism directly, but it is nonetheless implied in many of the things he says. As a cognitive scientist, Nuñez rejects Platonism. Despite the complexity of mathematical abstractions, he argues that the discipline is heavily driven by human experience. While his observations of Pauli’s interest in Jung’s psychology are nicely laid out, and some parallels to his own theory are highlighted, Pauli’s ideas don’t really contribute to the non-mystical position that Nuñez has staked out.

But today I looked at the entire text to which Nuñez contributed, and could see that there are a number of things in Pauli’s view that address my own preoccupation with the nature of mathematics, and more deeply than does the question of whether mathematics exists independent of human experience or not. Pauli was preoccupied with reconciling opposites, finding unity, making things whole, and was strongly motivated to think about the problem of how scientific knowledge, and what he called redemptive knowledge, are related to each other. I find it fairly plausible that mathematics could help with this since it exists in the world of ideal images as well as the world of physical measurement and logical reasoning.

Today I found a really nice essay by physicist Hans von Baeyer with the title Wolfgang Pauli’s Journey Inward.  It tells a more intimate story of Pauli’s ardent search for what’s true, and is well worth the read.

In time Pauli came to feel that the irrational component of his personality, represented by the black, female yin, was every bit as significant as its rational counterpart. Pauli called it his shadow and struggled to come to terms with it. What he yearned for was a harmonious balance of yin and yang, of female and male elements, of the irrational and the rational, of soul and body, of religion and science.

During his lifetime, Pauli’s fervent quest for spiritual wholeness was unknown to the public and ignored by his colleagues. Today, with the debate between science and religion once more at high tide, Pauli’s visionary pursuit speaks to us with renewed relevance.

I particularly enjoyed von Baeyer’s description of the famous Exclusion Principle for which Pauli received a Nobel Prize in 1945. It went like this:

The fundamental question had been why the six electrons in the carbon atom, say, don’t all carry the same amount of energy — “why their quantum numbers don’t have identical values… it should be expected that the electrons would all seek the same lowest possible energy configuration, the way water seeks the lowest level, and crowd into it.” If this rule applied to electrons in atoms, there would be very little difference between, say, carbon with its six and nitrogen with its seven electrons. There would be no chemistry.

Pauli answered the question by decree: the electrons in an atom, he claimed, don’t have the same quantum numbers because they can’t. If one electron is labeled with, say, the four quantum numbers (5, 2, 3, 0) the next electron you add must carry a different label, say (5, 2, 3, 1) or perhaps (6, 2, 3, 0). He proposed no new force between electrons, no mechanism, not even logic to support this injunction. It was simply a rule, imperious in its peremptoriness, and unlike anything else in the entire sweep of modern physics. Electrons avoid each other’s private quantum numbers for no reason other than, as one physicist put it, “for fear of Pauli.” …With the invention of the fourth quantum number and the exclusion principle Pauli opened the way for the systematic construction of Mendeleev’s entire periodic table.

What struck me from reading von Baeyer’s account was the depth of Pauli’s concern. And the boldness of his Exclusion Principle somehow makes him seem particularly trustworthy. The reconciliation he sought was not one that just allowed for the accepted coexistence of different concerns, but rather one that changed both of them to accommodate something new. As von Baeyer points out, physics has become more and more dominated by “the manipulation of symbols that facilitate thinking but bear only an indirect relationship to observable facts.” Pauli seemed to see the symbol as the link between the rational and the irrational. This would easily support my hunch. He seemed to expect that science would be able to deal with the soul, and that the soul would in turn inform science.

Eventually, he hoped, science and religion, which he believed with Einstein to have common roots, will again be one single endeavor, with a common language, common symbols, and a common purpose.

This is what I expect. And, at the moment, mathematics seems to be my most trustworthy guide. It lives on the boundary that we think we see between pure thought and the material, between mind and matter. Pauli’s conviction is particularly reassuring.

Spaces upon spaces – topology and slum conditions

I didn’t know anything about topology before I entered graduate school, but I continue to see it as one of the more provocative specialties in mathematics and an important transition of thought. Most definitions of the subject describe it as the study of the properties of objects that are preserved through deformations like stretching and twisting. Cutting, tearing, or gluing is not allowed. A circle is topologically equivalent to an ellipse because the circle can just be stretched into the ellipse. Removing one point, however, from either the circle or the ellipse produces something else; we now have, in effect, a line segment. A sphere is topologically equivalent to an ellipsoid, again because one can be squeezed or stretched into the other. But a doughnut, because of the hole it has, is not topologically equivalent to either. Holes, in fact, become key to creating equivalence classes of things. A well-known and lighthearted equivalence is the one where a coffee cup is topologically equivalent to a doughnut. Even without any training, one can see that these equivalences depend on another level of abstraction, one that challenges intuitive notions. While topology often considers shapes and spaces, it is not concerned with distance or size.
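The notion that makes ‘stretching without tearing’ precise is the homeomorphism: spaces $X$ and $Y$ are topologically equivalent when there is a map $f : X \to Y$ that is a continuous bijection with a continuous inverse. Distance never enters the definition, which is why the coffee cup and the doughnut can count as the same thing.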

My affinity for this branch of mathematics may have been helped along by the fact that my favorite teacher in graduate school was a topologist. Sylvain Cappell, still at the Courant Institute of Mathematical Sciences at NYU, introduced me to topological ideas. I’ve saved a Discover Magazine article from 1993 that landed in my mailbox not long after I left Courant. In it Cappell discusses the motivation and effectiveness of a topological approach to problems. The late Fields Medalist, William Thurston, also contributed to that article. Thurston suggested that our difficulty with perceiving the higher dimensions that are a fundamental consequence of topological ideas is primarily psychological. He believed that the mind’s eye is divided between linear, analytic thinking and geometric visualization.

Algebraic equations, for example, are like sentences. The formula that gives you the volume of a cube, x times x times x, can easily be communicated in words. But the shape of the cube is another matter. You have to see it.

When we talk about higher-dimensional spaces, Thurston says, we’re learning to think in and plug into this other spatial processing system. The going back and forth is difficult because it involves two really foreign parts of the brain.

Emphasizing the value of “see it,” Cappell makes the argument that even with a 2-dimensional graph, relating something like interest rates to consumer spending, where neither has anything to do with geometry, the shape of the line that represents their relationship gives you a better grasp of the situation.

The same holds true in five or even ten-dimensional models. Logically, it may seem like the geometry is lost, that it’s just numbers, says Cappell. But the geometry can tell you things that the numbers alone can’t: how a curve reaches a maximum, how you get from there to here. You can see hills and valleys, sharp turns and smooth transitions; holes in a doughnut-shaped nine-dimensional model might indicate realms where no solutions lie.

A few days ago, an article in Forbes told us that topology can help us see something about how to reduce slum conditions in cities. Researchers, it explains, opt for a “shape-based” understanding of cities.

According to the team’s research, when two or more city sections have the same number of blocks, they’re “topologically equivalent and can be deformed into each other.” Using that approach, sections of Mumbai can be deformed into Las Vegas suburbs or even areas of Manhattan.

How are slums and planned cities topologically different? The difference emerges essentially from better or worse access to the infrastructure, and the researchers claim that once cities are understood as topological spaces, the access issue can be resolved mathematically.

Their approach uses an algorithm that can be applied to any city block, they note in their paper. It applies tools from topology and graph theory — the branch of mathematics concerned with networks of points connected by lines — to neighborhood maps to diagnose and “solve critical problems of development,” they wrote.

This is an interesting peek at what can happen in or with mathematics. Topology itself requires a willingness to look differently. Using topological ideas to analyze or to address the development of slum conditions in sprawling cities is unexpected. It’s a geometry being applied to a space, but not directly, not because it resembles the space. It says more about how abstractions can give us greater access to the real world. Or how the mind’s eye can see.

Both the Discover and Forbes articles are worth a look.

Update on site

Hello everyone,

I want to let subscribers know that I am making some hosting changes.  I will be posting a blog tomorrow.  If you don’t receive notice of the post, I encourage you to resubscribe.

Thanks for your interest in the site.

Joselle

Prejudice in an abstract world

I was struck today by the title of an article in Science News that read, Before his early death, Riemann freed geometry from Euclidean prejudices. The piece, by science writer Tom Siegfried, was no doubt inspired by the recent claim from award-winning mathematician Michael Atiyah that he has proved the long-standing Riemann hypothesis, one of the most famous unsolved problems in mathematics for close to 160 years. But Siegfried’s article was more about Riemann’s extraordinary insights than it was about Atiyah’s claim (which I’ll get to before I’m done here). First, let me say this: by using the term ‘Euclidean prejudices,’ Siegfried is telling us that we could be misled by prejudices even in mathematics. And what prejudices usually do is conceal the truth. The word actually appears in Riemann’s famous lecture itself. This is from a translation of the lecture by William Kingdon Clifford:

Researches starting from general notions…can only be useful in preventing this work from being hampered by too narrow views, and progress in knowledge of the interdependence of things from being checked by traditional prejudices.

Siegfried seems not so much interested in talking about mathematics itself as he is in illustrating the significance of a change of perspective within mathematics. Most young students of mathematics would never imagine that there could be more than one mathematical way to think, or even that within the discipline there is mathematical thinking that’s not just problem solving. Referring to Riemann’s famous lecture, given in 1854, that essentially redefined what we mean by geometry, Siegfried says this:

In that lecture, Riemann cut to the core of Euclidean geometry, pointing out that its foundation consisted of presuppositions about points, lines and space that lacked any logical basis. As those presuppositions are based on experience, and “within the limits of observation,” the probability of their correctness seems high. But it is necessary, Riemann asserted, to “inquire about the justice of their extension beyond the limits of observation, on the side both of the infinitely great and of the infinitely small.” (emphasis added)

Physics gets us beyond the limits of observation with extraordinarily imaginative instruments, detectors of all sorts. But how is it that mathematics can get us beyond those limits on its own? How is it possible for Riemann to see more without getting outside of himself? I don’t think this is the usual way the question is posed, but I have become a bit preoccupied with understanding how it is that purely abstract formal structures, which we seem to build in our minds, with our intellect, can get us beyond what we are able to observe. Again from Siegfried:

Riemann’s insights stemmed from his belief that in math, it was important to grasp the ideas behind the calculations, not merely accept the rules and follow standard procedures. Euclidean geometry seemed sensible at distance scales commonly experienced, but could differ under conditions not yet investigated (which is just what Einstein eventually showed)…

…Riemann’s geometrical conceptions extended to the possible existence of dimensions of space beyond the three commonly noticed. By developing the math describing such multidimensional spaces, Riemann provided an essential tool for physicists exploring the possibility of extra dimensions today.
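The essential tool referred to here is what we now call a Riemannian metric, a rule, given point by point, for measuring lengths in a curved space of any dimension. In modern notation (not Riemann’s own):

$$ds^2 = \sum_{i,j} g_{ij}(x)\, dx^i\, dx^j$$

The functions $g_{ij}$ encode how the geometry varies from place to place; Euclidean geometry is just the special case in which coordinates can be chosen so that the $g_{ij}$ are constant.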

Riemann appears in a number of my posts. I’ve taken particular interest in the significance of his work in part because in his famous lecture on geometry he cited the philosopher Johann Friedrich Herbart as one of his influences. Herbart pioneered early studies of perception and learning, and his work played an important role in 19th century debates about how the mind brings structure to sensation. In his book Labyrinth of Thought, Jose Ferreiros takes up Riemann’s introduction of the notion of a manifold and says this:

Herbart thought that mathematics is, among the scientific disciplines, the closest to philosophy. Treated philosophically, i.e., conceptually, mathematics can become a part of philosophy. … According to Scholz, Riemann’s mathematics cannot be better characterized than as a “philosophical study of mathematics” in the Herbartian spirit, since he always searched for the elaboration of central concepts with which to reorganize and restructure the discipline and its different branches, as Herbart recommended [Scholz 1982a, 428; 1990a].

I think the way I first grappled with the depth of Riemann’s insights was to consider that he was somehow guided by the cognitive processes that govern perception, despite the fact that they operate outside our awareness. Some blend of experience, psychology, and rigor worked to establish the clarity of his view. I wrote a piece for Plus magazine on this very topic.

Herbart’s thinking foreshadows what studies in cognitive science now show us about how we perceive space and magnitude — it may be that Riemann’s mathematical insights reflect them.

More recently I’ve become focused on asking a related question, maybe from a different angle: what is actually happening when we explore mathematical territories? How is this internal investigation accomplished? What does the mind think it’s doing? These questions are relevant because it is clear that there is significantly more going on in mathematics than calculation and problem solving. Riemann’s groundbreaking observations make that clear. The questions I ask may sound like impossible questions to answer, but even just organizing an approach to them is likely to involve, at the very least, cognitive science and neuroscience, mathematics, and epistemology, which makes them clearly worthwhile.

About Atiyah’s breakthrough, an NBC news article said this:

“Atiyah is a wizard of a mathematician, but there’s a lot of skepticism among mathematicians that his wizardry has been sufficient to crack the Riemann Hypothesis,” John Allen Paulos, a professor of mathematics at Temple University in Philadelphia and the author of several popular books on mathematical topics, told NBC News MACH in an email.

This skepticism is present in almost every article I read, but Atiyah remains confident and is promising to publish a full version of the proof.

To be or not to be abstract

I’m not completely sure I understand where my desire to grasp the value of abstractions is taking me, but as I think about mathematics, and more recent trends in the sciences, I keep wanting to get further and further behind what our symbolic reasoning is actually doing, and how it’s doing it. I have this idea that if I can manage to somehow see around, or inside, the products of our minds, I’ll see something new. Maybe I can simplify the question that my own mind keeps asking, but that won’t help answer it. Here’s the thing I’m stuck on. Everything that we do (in the arts and the sciences) is based on a continuous flow of thoughts that amass into threads, threads that have now moved through the hands and minds of countless individuals, over thousands of years, creating giant fabrics of meaningful information. What are these threads made from? How do they develop? How are they related to everything else in nature? And what might the giant fabrics of human culture have to do with everything else? While we generally distinguish symbols from reality, a very large part of our reality is now built more from ideas than from concrete or wood, using the symbols that make the sharing of these ideas possible. Even language captures relations among sentiments and experiences in symbol. These relations, together with a kind of reasoning or logic, build our social and political systems. We’re so immersed in our languages and our symbols that we don’t even see them. And I don’t know if we can see them in the way that could address my curiosities. But I’m convinced that we can see more than we do. And the steady growth of information theory (and its more immediate relatives, like algorithmic information theory and quantum information theory) seems to shed new light on the reality of abstract relations. Mathematics is distinguished as the discipline that explores purely abstract relations. The fruits of many of these explorations now serve the parts of our world that we try to get our hands on – things like astronomy, engineering, physics, computer science, biology, medicine, and so on. I’m beginning to consider that information sciences may yet uncover something about why mathematics has been so fruitful.

I read today about Constantinos Daskalakis, who was awarded the Rolf Nevanlinna Prize at the International Congress of Mathematicians 2018 for his outstanding contributions to the mathematical aspects of information sciences. In particular, Daskalakis made some new observations about some older ideas – namely game theory and what is called Nash equilibrium. Marianne Freiberger explains Nash equilibrium in Plus Magazine:

When you throw together a collection of agents (people, cars, etc) in a strategic environment, they will probably start by trying out all sorts of different ways of behaving — all sorts of different strategies. Eventually, though, they all might settle on the single strategy that suits them best in the sense that no other strategy can serve them better. This situation, when nobody has an incentive to change, is called a Nash equilibrium.
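To make the idea concrete, here is a minimal sketch (my own illustration, not from the article) that brute-forces the pure-strategy Nash equilibria of a small two-player game, the prisoner’s dilemma:

```python
# Payoff tables for the prisoner's dilemma.
# Strategies: 0 = cooperate, 1 = defect.
payoff_row = [[3, 0],   # row player's payoff for each (row, col) strategy pair
              [5, 1]]
payoff_col = [[3, 5],   # column player's payoff for the same pairs
              [0, 1]]

def pure_nash_equilibria(a, b):
    """Strategy pairs (i, j) where neither player gains by deviating alone."""
    eq = []
    for i in range(2):
        for j in range(2):
            row_cant_improve = all(a[i][j] >= a[k][j] for k in range(2))
            col_cant_improve = all(b[i][j] >= b[i][k] for k in range(2))
            if row_cant_improve and col_cant_improve:
                eq.append((i, j))
    return eq

print(pure_nash_equilibria(payoff_row, payoff_col))  # [(1, 1)]: mutual defection
```

Checking every strategy pair is trivial here, but the search space explodes as games grow (and mixed strategies make things harder still), which is exactly where the computational question Daskalakis pursued begins.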

A Nash equilibrium is not necessarily positive; it’s just stable. Nash proved in 1950 that no matter how complex a system is, it is always possible to arrive at an equilibrium. But a question remained: knowing that a system can stabilize doesn’t tell us whether it will. And nothing in Nash’s proof tells us how these states of equilibrium are constructed, or how they happen. People have searched for algorithms that could find the Nash equilibrium of a system, and they found some, but the time it would take to do the computations, or to complete the task, wasn’t clear. Daskalakis explains in Freiberger’s article:

“My work is a critique of Nash’s theorem coming from a computational perspective,” he explains. “What we showed is that [while] an equilibrium may exist, it may not be attainable. The best supercomputers may not be able to find it. This theorem applies to games that we play, it applies to road networks, it applies to markets. In many complex systems it may be computationally intractable for the system to find a stable operational mode. The system could be wandering around the equilibrium, or be far away from the equilibrium, without ever being drawn to a stable state.”

Daskalakis’ work alerts people working in relevant industries that a Nash equilibrium, while it exists, may be essentially unattainable because the algorithms don’t exist, or because the complexity of the problem is just too difficult.  These considerations are relevant to people who design things like road systems, or online products like dating sites or taxi apps.

When designing such a system, you want to optimise some objective: you want to make sure that traffic flows consistently, that potential dates are matched up efficiently, or that taxi drivers and riders are happy.

“If you are counting on an equilibrium to deliver this happy state of affairs, then you better make sure the equilibrium can actually be reached. You better be careful that the rules that you set inside your system do not lead to a situation where our theorem applies,” says Daskalakis. “Your system should be clean enough and have the right mathematical structure so that equilibria can arise easily from the interaction of agents. [You need to make sure] that agents are able to get to equilibrium and that in equilibrium the objectives are promoted.”

Another option is to forget about the equilibrium and try to guarantee that your objective is promoted even [with] dynamically changing behaviour of people in your system.

This confluence of game theory, complexity theory and information science has made it possible to see the abstract more clearly, or has made a mathematical notion somehow measurable. The work includes a look at how hard the solution to a problem can be, and whether or not the ideal can be actualized. What struck me about the discussion in Plus was the fact that Daskalakis’ work was thought to address the difference between the mathematical existence demonstrated by Nash and its real world counterparts, maybe even whether or how they are related. These things touch on my questions. Nash’s proof is a non-constructive existence proof; it doesn’t build anything, it just finds something to be true. Daskalakis is a computer scientist and an engineer. He expects to build things. But the problem is attacked with mathematics. His effort spans game theory in mathematics, complexity theory (a branch of mathematics that classifies problems according to how hard they are), and information sciences. There is an interesting confluence of things here. And while it didn’t answer any of the questions I have, it encouraged me. I also like this quote from a recent Quanta Magazine article about Daskalakis:

The decisions the 37-year-old Daskalakis has made over the course of his career — such as forgoing a lucrative job right out of college and pursuing the hardest problems in his field — have all been in the service of uncovering distant truths. “It all originates from a very deep need to understand something,” he said. “You’re just not going to stop unless you understand; your brain cannot stay still unless you understand.”

Abstractions: What’s happening with them?

We all generally know the meaning of abstraction.  We all have some opinion, for example, about the value of abstract painting.  And I’ve heard from many that mathematics is too abstract to be understood or even interesting.  (But I must admit, it is exactly this about mathematics that keeps me so captivated).  An abstraction is usually thought of as the general idea as opposed to the particular circumstance.  I thought today of bringing a few topics back into focus, all of which I’ve written about before, to highlight something about knowledge – what it is, or how we seem to collect it.   This particular story centers around the idea of entropy.

First, here’s as brief a description of the history of the mathematics of this idea as I can manage at the moment:

In the history of science and mathematics, two kinds of entropy were defined – one in physics and one in information theory. Mathematical physicist Rudolf Clausius developed the concept of entropy in the 1850s (he gave it its name in 1865), defining it as a measure of a system’s thermal (or heat) energy that was not available to do work. It was a fairly specific idea that provided a mathematical way to pin down the variations in physical possibilities. I’ve read that Clausius chose the word entropy because of its Greek ties to the word transformation and the fact that it sounded like energy. The mathematical statement of entropy provided a clear account, for example, of how gas confined in a cylinder would freely expand if released by a valve, but could also be made to push a piston in response to the pressure of something that confined it, as in our cars. The piston event is reversible, while the free expansion is not. In the piston event, however, some amount of heat or energy is always lost to entropy. And so Clausius’ version of the second law of thermodynamics says that spontaneous change, for irreversible processes in isolated systems, always moves in the direction of increasing entropy.
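In modern notation, Clausius’ definition reads

$$dS = \frac{\delta Q_{\mathrm{rev}}}{T},$$

the change in entropy being the heat exchanged in a reversible process divided by the temperature at which the exchange happens, with the second law then stating that $\Delta S \geq 0$ for any isolated system.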

In 1948 Claude Shannon initiated the development of what is now known as information theory when he formalized a mathematics of information based on the observation that transmitted messages could be encoded with just two bursts of energy – on and off.  In this light he defined an information entropy, still referred to as Shannon’s entropy, which is understood as the measure of the randomness in a message, or a measure of the absence of information.  Shannon’s formula was based on the probability of symbols (or letters in the alphabet) showing up in the message.
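Shannon’s formula, for a message whose symbols appear with probabilities $p_i$, is

$$H = -\sum_i p_i \log_2 p_i,$$

measured in bits, and it is largest when every symbol is equally likely, that is, when the message is most random.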

In the second half of the 1800s, James Clerk Maxwell developed the statistical mechanical description of entropy in thermodynamics, where macroscopic phenomena (like temperature and volume) were understood in terms of the microscopic behavior of molecules. Soon after, physicist and philosopher Ludwig Boltzmann generalized Maxwell’s statistical understanding and formalized the logarithmic expression of entropy that is grounded in probabilities. In that version entropy is proportional to the logarithm of the number of microscopic ways (hard to see ways) that the system could acquire different macroscopic states (the things we see). In other words, we can come to a statistical conclusion about how the behavior of an immense number of molecules that we don’t see will affect the events we do see. It is Boltzmann’s logarithmic equation (which appears on his gravestone) that resembles Shannon’s equation, allowing both entropies to be understood, essentially, as probabilities related to the arrangement of things.
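Boltzmann’s gravestone equation, with $W$ the number of microscopic arrangements consistent with a given macroscopic state, is

$$S = k_B \ln W,$$

and its formal kinship with Shannon’s $H$ is what allows both entropies to be read as statements about the arrangement of things.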

It is certainly true that the mathematics that defined entropy at each stage of its development is an abstraction of the phenomenon. However, reducing both the thermodynamic definition and the information theory definition to probabilities related purely to the arrangement of things is another (and fairly significant) level of abstraction.

You are likely familiar with the notion that entropy always increases, or, as it is often understood, that things always tend to disorder. But, unlike what one might expect, it is this ‘arrangement of things’ idea that seems to best explain why eggs don’t un-crack or ice doesn’t un-melt. The number of possible arrangements of atoms in an un-cracked egg is far, far smaller than the number of possible arrangements of atoms in a cracked egg, and so far less likely. Aatish Bhatia does a really nice job of explaining this way of understanding things here.
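A toy computation (mine, not from Bhatia’s piece) makes the disproportion vivid. Think of 100 coins instead of an egg, and count the arrangements behind each macrostate:

```python
# Count the microstates (arrangements) behind macrostates of 100 coins.
# The perfectly 'ordered' macrostate has exactly one arrangement; the
# disordered ones are astronomically more numerous, hence more likely.
from math import comb, log2

n = 100
for heads in (0, 25, 50):
    w = comb(n, heads)   # arrangements consistent with this macrostate
    print(f"{heads:2d} heads: {w:.3e} arrangements, log2(W) = {log2(w):.1f}")
```

Fifty heads admits about $10^{29}$ arrangements to the single arrangement of zero heads; the cracked egg wins by the same arithmetic, with numbers vastly larger still.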

Next, the relationship between information and thermodynamics has had the attention of physicists since James Clerk Maxwell introduced a hypothetical little creature who seemed to challenge the second law of thermodynamics and has come to be known as Maxwell’s demon. Some discussion of the demon can be found in a post I wrote in 2016. In 2017, a Quanta Magazine article by Philip Ball reviewed the work of physicists, mathematicians, computer scientists, and biologists who explore the computational (or information processing) aspect of entropy as it relates to biology.

Living organisms seem rather like Maxwell’s demon. Whereas a beaker full of reacting chemicals will eventually expend its energy and fall into boring stasis and equilibrium, living systems have collectively been avoiding the lifeless equilibrium state since the origin of life about three and a half billion years ago. They harvest energy from their surroundings to sustain this nonequilibrium state, and they do it with “intention.”

In 1944, physicist Erwin Schrödinger proposed that living systems take energy from their surroundings to maintain non-equilibrium (or to stay organized) by capturing and storing information. He called it “negative entropy.”

Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information.

Now, physicist Jeremy England is considering pulling biology into physics (or at least some aspect of it) with the suggestion that the organization that takes place in living things is just one of the more extreme possibilities of a phenomenon exhibited by all matter. From an essay written by England:

The theoretical research I do with my colleagues tries to comprehend a new aspect of life’s evolution by thinking of it in thermodynamic terms. When we conceive of an organism as just a bunch of molecules, which energy flows into, through and out of, we can use this information to build a probabilistic model of its behaviour. From this perspective, the extraordinary abilities of living things might turn out to be extreme outcomes of a much more widespread process going on all over the place, from turbulent fluids to vibrating crystals – a process by which dynamic, energy-consuming structures become fine-tuned or adapted to their environments. Far from being a freak event, finding something akin to evolving lifeforms might be quite likely in the kind of universe we inhabit – especially if we know how to look for it.

Living things manage not to fall apart as fast as they form because they constantly increase the entropy around them. They do this because their molecular structure lets them absorb energy as work and release it as heat. Under certain conditions, this ability to absorb work lets organisms (and other systems) refine their structure so as to absorb more work, and in the process, release more heat. It all adds up to a positive feedback loop that makes us appear to move forward in time, in accordance with the extended second law. (emphasis added)

Finally (not in any true sense, just for the scope of this post) physicist Chiara Marletto has a theory of life based on a new fundamental theory of physics called Constructor Theory.  I wrote a guest blog for Scientific American on Constructor Theory in 2013.  In her essay, also published by Aeon, Marletto explains,

In constructor theory, physical laws are formulated only in terms of which tasks are possible (with arbitrarily high accuracy, reliability, and repeatability), and which are impossible, and why – as opposed to what happens, and what does not happen, given dynamical laws and initial conditions. A task is impossible if there is a law of physics that forbids it. Otherwise, it is possible – which means that a constructor for that task – an object that causes the task to occur and retains the ability to cause it again – can be approximated arbitrarily well in reality. Car factories, robots and living cells are all accurate approximations to constructors.

But the constructor itself, the thing that causes a transformation, is abstracted away in constructor theory, leaving only the input/output states. ‘Information’ is the only thing that remains unchanged in each of these transformations, and this is the focus of constructor theory. This underlying independence of information involves a more fundamental level of physics than particles, waves and space-time. And the expectation is that this ‘more fundamental level’ may be shared by all physical systems (another generality).

The input/output states of Constructor Theory are expressed as “ordered pairs of states” and are called construction tasks. The idea is no doubt a distant cousin of the ordered pairs of numbers we learned about in high school, along with the one-to-one correspondences and compositions taught in pre-calculus! And Constructor Theory is an algebra, a new one certainly, but an algebra nonetheless. This algebra is not designed to systematize current theories, but rather to find their foundation and then open a window onto things that we have not yet seen.
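A toy formalization (my own, far cruder than the theory itself) of tasks as sets of ordered input/output pairs, with the serial composition that gives the algebra its multiplication:

```python
# A construction task represented as a set of (input state, output state) pairs.
def compose(task_a, task_b):
    """Serial composition: perform task_a, then task_b on its output."""
    return {(x, z) for (x, y1) in task_a for (y2, z) in task_b if y1 == y2}

heat = {("ore", "molten ore")}
cast = {("molten ore", "ingot")}
print(compose(heat, cast))  # {('ore', 'ingot')}
```

The constructor itself (the furnace, the mold) appears nowhere in the representation; only the transformations remain, which is the abstraction the theory turns on.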

According to Marletto:

The early history of evolution is, in constructor-theoretic terms, a lengthy, highly inaccurate, non-purposive construction that eventually produced knowledge-bearing recipes out of elementary things containing none. These elementary things are simple chemicals such as short RNA strands…Thus the constructor theory of life shows explicitly that natural selection does not need to assume the existence of any initial recipe, containing knowledge, to get started.

Marletto has also written on the constructor theory of thermodynamics, in which she argues that constructor theory highlights a relationship between information and the first law of thermodynamics, not just the second.

This story about information, thermodynamics, and life certainly suggests something about the value of abstraction. As a writer, I’m not only interested in the progression of scientific ideas, but also in the power of generalities that seem to produce new vision, as well as amplify the details of what we already see. It seems to me that there is a particular character to the knowledge that is produced when communities of thinkers move through abstractions that bring them from measuring temperature and volume to information-driven theories of a science that could contain both physics and biology. It’s all about relations. I haven’t written this to answer my question about what’s happening here, but mostly to ask it.

Multiple personality disorder – a glimpse into the cosmos?

A recent post on scientificamerican.com got my attention – no surprise given its title, Could Multiple Personality Disorder Explain Life, the Universe, and Everything? It was coauthored by three individuals: computer scientist Bernardo Kastrup, psychotherapist Adam Crabtree, and cognitive scientist Edward F. Kelly. The article’s major source is a paper written by Kastrup, published this year in the Journal of Consciousness Studies with the title The Universe in Consciousness. I’ll try to outline here the gist of the argument.

It begins with a very convincing narrative substantiating the presence of multiple personalities in individuals who experience this.  One of the most remarkable was the case (reported in Germany in 2015) of a woman who had dissociated personalities, some of whom were blind.

The woman exhibited a variety of dissociated personalities (“alters”), some of which claimed to be blind. Using EEGs, the doctors were able to ascertain that the brain activity normally associated with sight wasn’t present while a blind alter was in control of the woman’s body, even though her eyes were open. Remarkably, when a sighted alter assumed control, the usual brain activity returned.

This was a compelling demonstration of the literally blinding power of extreme forms of dissociation, a condition in which the psyche gives rise to multiple, operationally separate centers of consciousness, each with its own private inner life.  (emphasis added)

The history of cases of dissociated personalities goes back to the late 1800s and the authors tell us that the literature provides significant evidence that “the human psyche is constantly active in producing personal units of perception” – what we would call selves.  While it continues to be unclear how this happens, they argue that the development of selves, or personal units of perception, should play a role in how we understand “what is and is not possible in nature.”

The case they make requires an appeal to alternative philosophical perspectives, specifically physicalism, constitutive panpsychism, and cosmopsychism. Proponents of physicalism believe that we should be able to understand mental states through a thorough analysis of brain processes. The ongoing problem with this expectation is that there is still no way to connect feelings to different arrangements of physical stuff. Constitutive panpsychism is the idea that what we call experience is inherent in every physical thing, even fundamental particles. Human consciousness would somehow be built “by a combination of the subjective inner lives of the countless physical particles that make up our nervous system.” But, the authors argue, the articulation of this perspective does not provide a way to understand how lower level points of view (atoms and molecules) would combine to produce higher level points of view (human experience). The alternative would be that consciousness is fundamental in nature but not fragmented. This is cosmopsychism, which, the authors say, is essentially classic idealism, where the objects of our experience depend on something more fundamental than particles, and that fundamental thing is more like mind or thought than matter.

The difficulty with this view is understanding how various private conscious centers (like you and everyone around you) emerge from a ‘universal consciousness.’   Keying on this question is what makes the presence of multiple personalities in one individual a useful indicator of how to think about this larger question.

Kastrup’s paper, on which this very readable Scientific American article is based, is steeped in the language of philosophy. He works to unpack the mainstream physicalist perspective and why it doesn’t work, and then he examines a number of panpsychist views and their weaknesses. For his own argument, he relies most heavily on a proposal from philosopher Itay Shani.

Shani does still postulate a duality in cosmic consciousness to account for the clear qualitative differences between the outer world we, as relative subjects, perceive and measure and the inner world of our thoughts and feelings. He calls it the ‘lateral duality principle’ (Shani 2015, p412) and describes it thus:

[Cosmic consciousness] exemplifies a dual nature: it has a concealed (or enfolded, or implicit) side to its being, as well as a revealed (or unfolded, or explicit) side; the former is an intrinsic dynamic domain of creative activity, while the latter is identified as the outer, observable expression of that activity. (ibid., original emphasis)

Kastrup’s thinking is in line with Shani’s, but he goes to great lengths to examine the weaknesses in Shani’s view. For the remainder of the paper, Kastrup focuses on addressing the following questions: how do fleeting experiential qualities arise out of “one enduring cosmic consciousness,” what causes individual experiences to be private, how can the physical world we measure be explained in terms of a concealed, thoughtful order, why does brain function correlate so well with our awareness if it doesn’t generate it, and finally, why are we all imagining the same world outside the control of our personal volition.

Kastrup’s analysis of these questions is thorough and precise. He uses the phenomenon of dissociated personalities (which he calls alters) to address the privacy of individual experiences (since the alters within one individual are nonetheless private from each other), and he uses the functional brain scans that distinguish actual alters from ones that are just acted out to imagine how each of us is the result of “cosmic level dissociative processes.”

These are difficult ideas to accept given what we have come to expect from the sciences. But I will point out that aspects of these proposals run parallel to ones proposed by contemporary neuroscientists and physicists. The intimate connection between physics and mathematics always raises questions about the relatedness of mind and matter. For 17th century mathematician and philosopher Gottfried Wilhelm Leibniz, the fundamental substance of the universe could not be material. It had to be something undividable, something resembling a mathematical point more than a speck of dust. The material in our experience is then somehow a consequence of the relations among these non-material substances that actually resemble ‘mind’ more than ‘matter.’

For physicist and author David Deutsch, information and knowledge are the fundamentals of physical life. In his book, The Beginning of Infinity, Deutsch compares and contrasts human brains and DNA molecules. “Among other things,” he says, they are each “general-purpose information-storage media….” And so Deutsch sees biological information and explanatory information each as instances of knowledge which, he says, “is very unlikely to come into existence other than through the error-correcting process of evolution or thought.” The Integrated Information Theory of Consciousness proposed by neuroscientist Giulio Tononi, and defended by neuroscientist Christof Koch, suggests that some degree of consciousness is an intrinsic fundamental property of every physical system.

Also, cosmologist and author Max Tegmark is of the opinion that if we want to understand all of nature we have to consider all of it together. For Tegmark there are three pieces to every puzzle – the thing being observed; the environment of the thing being observed (where there may be some interaction); and the observer. He identifies three realities in his book Our Mathematical Universe – external reality, consensus reality, and internal reality. External reality is the physical world which we believe would exist even if we didn’t (and is described in physics mathematically). Consensus reality is the shared description of the physical world that self-aware observers agree on (and it includes classical physics). Internal reality is the way you subjectively perceive the external reality. As with many ideas in physics, the universe is understood in terms of information, and Tegmark has said that he thinks that consciousness is the way information ‘feels’ when processed in complex ways.

It seems to me that a similar insight into what we have been overlooking, about ourselves and our world, is being approached from several directions and in languages specific to individual disciplines. The ones proposed by physicists and neuroscientists are held together with mathematics. But they all bring to mind, again, something I thought when I watched my mother’s mind change with the development of a tumor in the right frontal lobe of her brain. Among the many things I questioned was how it is that the cells in her body could produce her experience if something like consciousness or thought did not already exist in the world that created her.

Intelligence, artificial and otherwise

Earlier this month, Nature reported on Artificial Intelligence (AI) research in which deep learning networks (an AI strategy) spontaneously generated patterns of computations that bore a striking resemblance to the activity generated by our own grey matter – namely by the neurons called grid cells in the mammalian brain. The patterned firing of grid cells enables mammals to create cognitive maps of their environment. The artificial network that unexpectedly produced something similar was developed by neuroscientists at University College London, together with AI researchers at the London-based Google company DeepMind. A computer-simulated rat was trained to track its movement in a virtual environment.

The Nature article by Alison Abbott tells us that the grid-cell-like coding was so good, the virtual rat was even able to learn short-cuts in its virtual world. And here’s an interesting response to the work from neuroscientist Edvard Moser, a co-discoverer of biological grid cells:

“This paper came out of the blue, like a shot, and it’s very exciting,” says neuroscientist Edvard Moser at the Kavli Institute for Systems Neuroscience in Trondheim, Norway. Moser shared the 2014 Nobel Prize in Physiology or Medicine for his co-discovery of grid cells and the brain’s other navigation-related neurons, including place cells and head-direction cells, which are found in and around the hippocampus region.

“It is striking that the computer model, coming from a totally different perspective, ended up with the grid pattern we know from biology,” says Moser. The work is a welcome confirmation that the mammalian brain has developed an optimal way of arranging at least this type of spatial code, he adds.

There is something provocative about measuring the brain’s version of grid cell navigation against this emergent but simulated grid cell action.

In Nature’s News and Views, Francesco Savelli and James J. Knierim tell us a bit more about the study. First, for the sake of clarity, what researchers call deep learning is a kind of machine learning characterized by layers of computations, structured in such a way that the output from one computation becomes the input of another. Inputs and outputs are defined by a transformation of the data, or information, being received by each layer. The data is translated into “compact representations” that promote the success of the task at hand – like translating pixel data into a face that can be recognized. A system like this can learn to process inputs so as to achieve particular outputs. The extent to which each of the computations, in each of the layers, affects the final outcome is determined by how they are weighted. With optimization algorithms, these weights are adjusted to optimize results. Deep learning networks have been successful with computer vision, speech recognition, and games, among other things. But navigating oneself through the space of one’s environment is a fairly complex task.
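As a generic illustration of that layered structure (a minimal sketch, not the network from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 4))   # weights: 4 inputs -> 16 intermediate units
W2 = rng.normal(size=(3, 16))   # weights: 16 units -> 3 outputs

def forward(x):
    h = np.tanh(W1 @ x)         # the output of this layer's computation...
    return W2 @ h               # ...becomes the input of the next

print(forward(np.array([0.1, -0.3, 0.7, 0.2])))
```

Training consists of an optimization algorithm nudging the entries of W1 and W2 until the outputs serve the task at hand.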

The research that led to Moser’s Nobel Prize in 2014 was the discovery of a family of neurons that produces the cognitive maps we develop of our environments. There are place cells, neurons that fire when an organism is in a particular position in an environment, often one with landmarks. There are head-direction neurons that signal where the animal seems to be headed. There are also neurons that respond to the presence of an edge to the environment. And, most relevant here, there are grid cells. Grid cells fire when an animal is at any of a set of points that define a hexagonal grid pattern across its environment. The neuron’s firing maps to a point on the ground. Grid cells contribute to the animal’s sense of position, and correspond to the direction and distance covered by some number of steps taken.

Banino and colleagues wanted to create a mechanism for self-location in a deep-learning network. Such a mechanism is referred to as path integration.
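Path integration, in its barest form, is dead reckoning: the current position estimate is updated from velocity alone, with no landmarks. A minimal sketch of that computation (mine, not the network’s learned solution):

    import numpy as np

    # Dead reckoning: integrate linear and angular velocity over time
    # to keep a running estimate of position and heading.
    def path_integrate(linear_vel, angular_vel, dt=0.1):
        x, y, heading = 0.0, 0.0, 0.0
        for v, w in zip(linear_vel, angular_vel):
            heading += w * dt                # turn by the angular velocity
            x += v * np.cos(heading) * dt    # step forward along the heading
            y += v * np.sin(heading) * dt
        return x, y

    # a simulated run: steady speed, gentle left turn
    print(path_integrate([0.5] * 100, [0.1] * 100))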

Because path integration involves remembering the output from the previous processing step and using it as input for the next, the authors used a network involving feedback loops. They trained the network using simulations of pathways taken by foraging rodents. The system received information about the simulated rodent’s linear and angular velocity, and about the simulated activity of place and head-direction cells…
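The feedback loop the quote mentions can be pictured as a hidden state that re-enters the computation at every step. A skeletal version (the study itself used a more elaborate recurrent network; the names and sizes here are only placeholders):

    import numpy as np

    rng = np.random.default_rng(2)
    W_in = rng.normal(size=(3, 16)) * 0.1    # velocity input weights (assumed sizes)
    W_rec = rng.normal(size=(16, 16)) * 0.1  # the feedback loop: state feeds itself

    h = np.zeros(16)   # hidden state: the "remembered" previous output
    for t in range(100):
        velocity = np.array([0.5, np.cos(0.1 * t), np.sin(0.1 * t)])
        h = np.tanh(velocity @ W_in + h @ W_rec)   # previous output re-enters here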

And this is what happened:

The authors found that patterns of activity resembling grid cells spontaneously emerged in computational units in an intermediate layer of the network during training, even though nothing in the network or the training protocol explicitly imposed this type of pattern. The emergence of grid-like units is an impressive example of deep learning doing what it does best: inventing an original, often unpredicted internal representation to help solve a task.

These grid-like units allowed the network to keep track of position, but whether they would also support the network’s navigation to a goal was still a question. The authors addressed this question by adding a reinforcement-learning component. The network learned to assign values to particular actions at particular locations, and higher values were assigned to actions that brought the simulated animal closer to a goal (a toy sketch of this idea follows the excerpt below).

The grid-like representation markedly improved the ability of the network to solve goal-directed tasks, compared to control simulations in which the start and goal locations were encoded instead by place and head-direction cells.
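The sketch promised above: a toy version of that reinforcement-learning idea, in which a value is learned for each (location, action) pair on a small grid, so that actions leading toward the goal earn higher values. The grid size, rewards, and learning rates are all invented for illustration.

    import numpy as np

    SIZE, GOAL = 5, (4, 4)                                    # a toy arena and its goal
    ACTIONS = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}  # four possible moves
    Q = np.zeros((SIZE, SIZE, len(ACTIONS)))                  # value of each action at each location

    rng = np.random.default_rng(1)
    for _ in range(5000):
        r, c = rng.integers(SIZE), rng.integers(SIZE)         # a random starting square
        a = int(rng.integers(len(ACTIONS)))
        dr, dc = ACTIONS[a]
        nr = int(np.clip(r + dr, 0, SIZE - 1))
        nc = int(np.clip(c + dc, 0, SIZE - 1))
        reward = 1.0 if (nr, nc) == GOAL else 0.0
        # actions that move the agent toward the goal accumulate higher values
        Q[r, c, a] += 0.1 * (reward + 0.9 * Q[nr, nc].max() - Q[r, c, a])

    print(Q[0, 0].argmax())   # the best first move from the far corner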

Unlike in the navigation systems developed by the brain, the place-cell layer of this artificial network is not changed during the training that shapes the grid cells. But the way that grid and place cells influence each other in the brain is not well understood, and further development of the artificial network might help unravel their interaction.

From a broader perspective, it is interesting that the network, starting from very general computational assumptions that do not take into account specific biological mechanisms, found a solution to path integration that seems similar to the brain’s. That the network converged on such a solution is compelling evidence that there is something special about grid cells’ activity patterns that supports path integration. The black-box character of deep learning systems, however, means that it might be hard to determine what that something is.

There is clear pragmatic promise in this research, for both AI and its many applications, as well as for cognitive neuroscience. But I find it striking for a different reason: it seems to provide something new, and provocative, about mathematics’ ubiquitous presence. When I first learned about the action of grid cells, I was impressed by the way this fully biological, unconscious, cognitive mechanism resembled the abstract coordinate systems of mathematics. But here there is an interesting reversal. Here we see the biological one emerging, without our direction, from a system that owes its existence entirely to mathematics. It puts mathematics somewhere in between everything, in a way that we haven’t quite grasped. It’s an intelligence we can’t locate.

The fluency of geometry

My thoughts started jumping around today, trying to land on what it was that I found so fascinating about a recent article in Quanta Magazine.  This is one of the statements that got me going:

…Numbers emerging from one kind of geometric world matched exactly with very different kinds of numbers from a very different kind of geometric world.

To physicists, the correspondence was interesting. To mathematicians, it was preposterous.

It was in the early nineties that the surprise first occurred, like an alert that there is a mirror symmetry between two different mathematical structures, and mathematicians have been investigating it for almost three decades now. The Quanta Magazine article reports that they seem to be close to being able to explain the source of the mirroring. Kevin Hartnett, author of the Quanta article, characterizes their effort as one that could produce “a form of geometric DNA – a shared code that explains how two radically different geometric worlds could possibly hold traits in common.” (I like this biologically themed analogy.)

The whole mirroring phenomenon rests largely on the development of string theory in physics, where theorists found that the strings they hoped were the fundamental building blocks of the universe required six dimensions more than are contained in Einstein’s four-dimensional spacetime. String theorists answered the demand by finding two ways to account for the missing six dimensions – one from symplectic geometry and the other from complex geometry. These are the two distinct arrangements of geometric ideas that mathematicians are now examining.

The nature of a symplectic geometric space is grounded in the idea of phase space, where each point represents the state of a system at a given time. A phase space is defined by patterns in data, not by the spatial arrangement of objects. It is a multidimensional space in which each axis corresponds to a coordinate that specifies one aspect of the physical system; when all the coordinates are represented, a point in the space corresponds to a state of the system. The nature of complex geometry, on the other hand, has its roots in algebraic geometry, where the objects of study are the graphed solutions to polynomial equations. Here the ordered pairs represent positions on a grid (like the x, y pairs we learn about in high school), or complex numbers in a complex space, where those numbers are solutions to equations. The beauty of this arrangement is that the properties possessed by the geometric representation of these solutions (or by the objects they produce) tell us more about the equations than we would know without them. But wherever they are, these solutions are rigid geometric objects. The phase space is more flexible. Hartnett tells us that:

In the late 1980s, string theorists came up with two ways to describe the missing six dimensions: one derived from symplectic geometry, the other from complex geometry. They demonstrated that either type of space was consistent with the four-dimensional world they were trying to explain. Such a pairing is called a duality: Either one works, and there’s no test you could use to distinguish between them.
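To make the contrast concrete, here is a minimal pair of examples of my own, assuming only the standard textbook presentations of the two geometries. On the symplectic side, the phase space of a single particle moving on a line is the plane $\mathbb{R}^2$ with coordinates $(q, p)$, position and momentum, carrying the symplectic form

$$\omega = dq \wedge dp ,$$

where a point $(q(t), p(t))$ is the state of the system at time $t$, and the space admits a large family of deformations that preserve $\omega$. On the complex side, an algebraic curve such as

$$\{\,(x, y) \in \mathbb{C}^2 : y^2 = x^3 - x\,\}$$

is cut out by a single polynomial, and its geometry is fixed by that equation – the rigidity mentioned above.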

Robert Dijkgraaf, Director and Leon Levy Professor at the Institute for Advanced Study, tells an interesting story. Around 1990, a group of string theorists asked geometers to calculate a number related to the number of curves, of a particular degree, that could be wrapped around the kind of space or manifold that is heavily used in string theory (a Calabi-Yau space). A result from the nineteenth century established that the number of lines, or degree-one curves, is equal to 2,875. The number of degree-two curves, computed around 1980, is 609,250. The number of curves of degree three had not been computed, and this was the one the geometers were asked to compute.

The geometers devised a complicated computer program and came back with an answer. But the string theorists suspected the answer was erroneous, which suggested a mistake in the code. Upon checking, the geometers confirmed there was one. But how did the physicists know?

String theorists had already been working to translate this geometric problem into a physical one. In doing so, they had developed a way to calculate the number of curves of any degree all at once. It’s hard to overestimate the shock of this result in mathematical circles.

The duality appeared to run deep, and mathematicians and physicists alike began to try to understand the underlying feature that would account for the mirroring phenomenon. One proposed strategy is to deconstruct a shape in the symplectic world in such a way that it can be reconstructed as a complex shape. The deconstruction can make a multidimensional symplectic manifold easier to visualize, and it can also reduce one of the mirror spaces into building blocks that can be used to construct the other. This would likely lead to a better understanding of what connects them.

Again from Dijkgraaf:

Mathematics has the wonderful ability to connect different worlds. The most overlooked symbol in any equation is the humble equal sign. Mirror symmetry is a perfect example of the power of the equal sign. It is capable of connecting two different mathematical worlds. One is the realm of symplectic geometry, the branch of mathematics that underlies much of mechanics. On the other side is the realm of algebraic geometry, the world of complex numbers. Quantum physics allows ideas to flow freely from one field to the other and provides an unexpected “grand unification” of these two mathematical disciplines.

This is a remarkable story, and there are many in mathematics. I’ve always been captivated by how the spatial ideas of this discipline, once charged with measuring the earth, became the abstract ideals described by Euclid, were then stretched to accommodate spaces with non-Euclidean shapes, including our spacetime, and were further developed to create spaces defined by patterned data of any kind – the symplectic kind. In this story, mathematicians, like experimentalists, become charged with the need to find a reason for an unexpected observation. But it is an observation of the fully abstract world that mathematics built. What are these abstract worlds made of? How do they become more than we can see? I’m well aware of the lack of precision in these questions, but there is value in stopping to consider them. To what extent are these abstract spaces objective? Where are these investigations happening? There is no doubt that we have yet to understand what we realize when we find mathematics.