
Mental Magnitudes

I am increasingly fascinated by the mathematics of fundamental cognitive processes – like creatures finding their way to and from significant locations, foraging for food, foraging with the eyes, or comprehending the duration of an event. I’m excited that there are cognitive neuroscientists who have focused on the architecture of these processes in particular. Their work consistently suggests that our formal mathematical systems grow out of these very same processes.

I read today Charles Gallistel’s contribution to Dehaene and Brannon’s Space, Time and Number in the Brain. Gallistel has a link to the pdf version on the Rutgers website.

Gallistel is concerned with the abstractions of space, time, number, rate and probability that have been experimentally studied and found to be playing a fundamental role in the lives of nonverbal animals and preverbal humans. His premise is this:

the brain’s ability to represent these foundational abstractions depends on a still more basic ability, the ability to store, retrieve and arithmetically manipulate signed magnitudes.

He makes a point of distinguishing between magnitude and our symbolic numbers. Magnitudes are what he calls ‘computable numbers,’ quantities that “can be subjected to arithmetic manipulation in a physically realized system.”

Being a bit pressed for time, I’ll just reproduce some of his observations.

The representation of space, he says:

requires summing successive displacements in an allocentric (other-centered) framework, a framework in which the coordinates of locations other than that of the animal do not change as the animal moves. By summing successive small displacements (small changes in its location), the animal maintains a representation of its location in the allocentric framework. This representation makes it possible to record locations of places and objects of interest as it encounters them, thereby constructing a cognitive map of its experienced environment. Computational considerations make it likely that this representation is Cartesian and allocentric.

But in order to have a directive function, these representations of experienced locations must be vectors – ordered sets of magnitudes. And the organism accomplishes arithmetic with them.

A fundamental operation in navigation is computing courses to be run… Assuming that the vectors are Cartesian, the range and bearing are the modulus and angle of the difference between the destination vector and the current-location vector. This difference vector is the element-by-element differences between the two vectors. Thus, the representation of spatial location depends on the arithmetic processing of magnitudes.
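
To make the arithmetic concrete, here is a minimal sketch of the computation described above (my own illustration; the names are mine, not Gallistel’s): given a current-location vector and a destination vector in the same allocentric Cartesian frame, the course is the element-by-element difference, and its modulus and angle give the range and bearing.

```python
import math

def course(current, destination):
    """Compute the course from a current location to a destination.

    Both locations are (x, y) pairs in the same allocentric Cartesian frame.
    The difference vector is taken element by element; its modulus is the
    range and its angle is the bearing, as in Gallistel's description.
    """
    dx = destination[0] - current[0]
    dy = destination[1] - current[1]
    range_ = math.hypot(dx, dy)                 # modulus of the difference vector
    bearing = math.degrees(math.atan2(dy, dx))  # angle of the difference vector
    return range_, bearing

# Example: the goal lies 3 units east and 4 units north of the forager.
print(course((0.0, 0.0), (3.0, 4.0)))           # -> (5.0, 53.13...)
```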

Gallistel challenges the notion that time-interval experience is generated by an interval-timing mechanism, pointing out that

There is, however, a conceptual problem with this supposition: The ability to record the first occurrence of an interesting temporal interval would seem to require the starting of an infinite number of timers for each of the very large number of experienced events that might turn out to be “the start of something interesting”–or not

Instead, he proposes

that temporal intervals are derived from the representation of temporal locations, just as displacements (directed spatial intervals) are derived from differences in spatial locations. This, in turn leads to arithmetic operations on temporal vectors (see Gallistel, 1990, for details). Rats represent rates (numbers of events divided by the durations of the intervals over which they have been experienced) and combine them multiplicatively with reward magnitudes [9]. Both mice and adult human subjects represent the uncertainty in their estimates of elapsing durations (a probability distribution defined over a continuous variable) and discrete probability (the proportion between the number of trials of one kind and the number of trials of a different kind) can combine these two representations multiplicatively to estimate an optimal target time [1].

I found one of the most interesting parts of this discussion to be the one on closure.

Closure is an important constraint on the mechanism that implements arithmetic processing in the brain. Closure means that there are no inputs that crash the machine. Closure under subtraction requires that magnitudes have sign (direction), because otherwise entering a subtrahend greater than the minuend would crash the machine; it would not be able to produce a valid output. Rats learn directed (signed) temporal differences; they distinguish between whether the reward comes before or after the signal and they can integrate one directed difference with another [11].

I find this particularly interesting because it took us quite some time to admit signed differences into our symbolic system of subtraction, or even to recognize the significance of closure.
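
A toy illustration of the closure point (again mine, not Gallistel’s): a system restricted to unsigned magnitudes has no valid output when the subtrahend exceeds the minuend, while a system of signed magnitudes stays closed under subtraction.

```python
def unsigned_subtract(minuend, subtrahend):
    """Subtraction over unsigned magnitudes: not closed.

    If the subtrahend exceeds the minuend there is no valid unsigned
    output -- the operation 'crashes the machine' in Gallistel's sense.
    """
    if subtrahend > minuend:
        raise ValueError("no unsigned result: subtrahend exceeds minuend")
    return minuend - subtrahend

def signed_subtract(minuend, subtrahend):
    """Subtraction over signed magnitudes: closed for any inputs."""
    return minuend - subtrahend

print(signed_subtract(3.0, 5.0))     # -2.0, a directed (signed) difference
# unsigned_subtract(3.0, 5.0)        # would raise: the system is not closed
```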

I’ll end this with his brief conclusion. Some of the details of these studies can be found in the linked pdf.

It seems likely that magnitudes (computable numbers) are used to represent the foundational abstractions of space, time, number, rate, and probability. The growing evidence for the arithmetic processing of the magnitudes in these different domains, together with the “unreasonable” efficacy of representations founded on arithmetic, suggests that there must be neural mechanisms that implement the arithmetic operations. Because the magnitudes in the different domains are interrelated–in, for example, the representation of rate (numerosity divided by duration) or spatial density (numerosity divided by area)–it seems plausible to assume that the same mechanism is used to process the magnitudes underlying the representation of space, time and number. It should be possible to identify these neural mechanisms by their distinctive combinatorial signal processing in combination with the analytic constraint that numerosity 1 be represented by the multiplicative identity symbol in the system of symbols for representing magnitude.

The geometry of hallucinations

A recent blog from Jennifer Ouellette (from the Scientific American Blog Network)  brought my attention once again to how mathematics is related to the structure-building functions of the brain. As I followed up on some of the references in her post, I found myself on a little journey through hallucinatory experiences that I really enjoyed.

Her post is generally about Turing models applied to patterns found in the characteristic features of animals.  But she got into territory that I find particularly provocative when she began to discuss the evidence for whether a Turing model can be applied to neurons in the brain.

If we really want to get into some interesting speculation, we can think about whether a Turing model can be applied to neurons in the brain, which could be “described mathematically as activators or inhibitors, encouraging or dampening the firing of other, nearby neurons in the brain.” And that could potentially explain why we see certain recurring patterns when we hallucinate.

I did a blog post about how Turing insights appear to bridge otherwise disparate trends in science.  But the link to Turing in this context is drawn through hallucination patterns, categorized in the early 20th century by University of Chicago neurologist Heinrich Kluever into what he called the form constants: checkerboards, honeycombs, tunnels, spirals, and cobwebs.

Over seventy years later, another Chicago researcher, Jack Cowan – who holds dual appointments in mathematics and neurology – set out to reproduce those hallucinatory patterns mathematically, believing they could provide clues to the brain’s circuitry.

The paper is very technical and involves quite a lot of mathematics as well as neuroscience.  But the authors succeed in modeling a structure of the primary visual cortex whose activity would produce “certain basic types of geometric visual hallucinations.”  Bressloff and Cowan conclude:

…thus our new work provides a stronger link between the nature of hallucinatory patterns and the actual structure of cortex.  Indeed, we hypothesize that the symmetries and length-scales of these hallucinatory images are a direct consequence of the geometry of cortical interactions as well as the retino-cortical map.

Apparently they found that the patterns predicted by their calculations closely matched what people will see when under the influence of hallucinogenic drugs, and they suspect that these patterns could be emerging from a kind of Turing mechanism.

While the random fluctuations in brain activity might technically just be “noise,” the brain will take that noise and turn it into a pattern. Since there is no external input when the eyes are closed, that pattern should reflect the architecture of the brain, specifically the functional organization of the visual cortex.
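
For anyone who wants to see a Turing-style mechanism turn noise into pattern, here is a minimal sketch of a one-dimensional field of units with short-range excitation and longer-range inhibition. It is only a generic illustration, not the Bressloff and Cowan cortical model, and the parameters are simply ones I chose to make the instability visible.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                      # units arranged on a ring
x = np.arange(N)
d = np.minimum(x, N - x)     # distance between unit 0 and every other unit

# Lateral connectivity: short-range excitation, longer-range inhibition
# (a difference of Gaussians), the basic ingredient of a Turing instability.
kernel = np.exp(-d**2 / (2 * 2.0**2)) - 0.6 * np.exp(-d**2 / (2 * 6.0**2))

u = rng.normal(0.0, 0.01, N)   # start from noise, with no external input

for _ in range(200):
    # Each unit sums input from its neighbours through the kernel
    # (circular convolution done in Fourier space), then saturates.
    lateral = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(u)))
    u = np.clip(u + 0.1 * (lateral - u), -1.0, 1.0)

# The noise should by now have organized itself into regularly spaced
# stripes of activity whose wavelength is set by the connectivity alone.
print(np.round(u[:40], 2))
```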

Ouellette also spoke with neuroscientist Robin Carhart-Harris, who has done quite a lot of work on the brain mechanisms that lead to hallucinatory experiences, and on how they might be used to help in the treatment of depression and addiction.  I very much enjoyed watching an interview with him, shot as part of a forthcoming documentary on consciousness. There he made some really nice observations about the integrative function of ‘brain hubs’ that unify activity from different regions of the brain into coherent patterns or narratives.  The geometric patterns of hallucinations are perceptual errors but, he tells us, the error “is a function of how the perceptual system works.”  Carhart-Harris argues that the way we currently understand the action of the visual system “says, very strongly, that reality is a construction. Reality only becomes something as we piece it together.”

With respect to how hallucinatory patterns reflect brain activity, Carhart-Harris tells Ouellette:

You are not seeing the cells themselves, but the way they’re organized – as if the brain is revealing itself to itself.

This is, in fact, what I have thought about mathematics.

Wigner, Pirsig, Leibniz and the nature of reality

I saw an opinion piece by Stephen Ornes, in the March 16 issue of New Scientist, which ties the ongoing debate about the nature of mathematical ideas to a modern one about money and ownership.  Ornes argues that patentability is one of the most hotly contested issues in software development.  The problem, as many see it, is that not all software is patentable because of its dependence on mathematics.  Mathematics is understood as the exploration of abstract ideas, not the invention of new products.   Ornes referred to an essay by David Edwards (University of Georgia) in the April 2013 issue of the Notices of the American Mathematical Society.  In the end, Edwards is calling for an update of the patent laws because the current laws do not promote the development of technological innovation.  I wasn’t very inspired by the discussion.  However, when I went to find the Edwards piece in the AMS Notices, I stumbled upon an essay, written in a completely different spirit, and published in January 2012.  Jason Scott Nicholson, then a Ph.D. candidate in mathematics at the University of Calgary, addressed Eugene Wigner’s much-cited query about the “unreasonable effectiveness of mathematics.”  But Nicholson explores the puzzle of mathematics’ effectiveness using the structure of the ideas brought to life in the book Lila by Robert M. Pirsig, author of the widely read Zen and the Art of Motorcycle Maintenance.

Nicholson explains that, in Lila, reality is dual-aspected. One of these aspects is what Pirsig calls Static Quality, and the other, what he calls Dynamic Quality.  A very brief explanation of these ideas is this:

Dynamic Quality is understood as the creative urge, the constant stimulus to move, perhaps to something ‘better.’  Static Quality is what is given in the patterns reflecting the “realization” of the undefined Quality that is the world.  Static Quality is created in response to Dynamic Quality.   It exists on 4 discrete but related levels:

Inorganic, Biological, Social, and Intellectual

In this system, the biological builds on the inorganic, the social on the biological, and the intellectual on the social.  Nicholson tells us that Pirsig uses a computer analogy to illustrate this idea:

He describes the relationship between these levels as being analogous to the relationship of computer hardware to computer software—the software is run on the hardware, but has nothing, really, to do with it. The program that you run on your computer and write your article with has nothing to do with the computer hardware itself. Furthermore, the content of your article has nothing to do with the program you write it in. In this way the levels of static quality are related to each other: Biological is built on Inorganic, Social is built on Biological, and Intellectual is built on Social, but each level is independent of the other.

Pirsig’s Static Quality creates a relationship among manifold patterns – from the bonding of atoms, to the mating of animals, to the formation of nations, to the dogma of religions, and to the intellectual patterns of art and science.  And this relatedness becomes the crux of Nicholson’s argument:

…since nature is simply inorganic and biological patterns of value that follow Dynamic Quality, it is not surprising that mathematics, a static intellectual pattern of quality that also follows Dynamic Quality, should arrive at the same conclusions. That is the reason that mathematics that is done in isolation ends up explaining nature so well—both are patterns of static quality created by following Dynamic Quality!

This configuration of Quality, Dynamic Quality and Static Quality is also used by Nicholson to describe the art/science character of mathematics:

Art is the realization of Dynamic Quality in a given medium—that is, Art is following Dynamic Quality, and the pattern of static quality which is a “work of art” is left in its wake, in whatever medium the artist chose. In this sense, mathematics, especially pure mathematics, is an art, as it is the realization of Dynamic Quality in the medium of mathematical definitions and their logical consequences.

But mathematics is also a science. It is commonly classified as such, being in the science faculty of most universities. More to the point, though, it is also generally seen as similar to empirical sciences in that it involves an objective, careful, and systematic study of an area of knowledge. It is, however, different because it verifies its knowledge using a priori rather than empirical methods. But, within the Metaphysics of Quality, its methods are totally empirical. In fact, it may be argued that from this perspective, it is even more empirical than the other sciences. Mathematics is following empirical reality (Quality) directly, whereas other sciences are one step removed from empirical reality (Quality): they follow nature, which, in turn, follows Quality. Thus mathematics is really both an art and a science and, in fact, can act as something of a bridge between the two.

The nature of Pirsig’s ‘Quality,’ and the use that Nicholson makes of it, reminded me of Leibniz again.  For both Pirsig and Leibniz, our perceived reality is the consequence of structure being brought to something we cannot see, something that isn’t even material in the way we understand material.  For Leibniz, this fundamental reality is the harmonious existence of monads. Leibniz describes the monad this way:

Something that has no parts can’t be extended, can’t have a shape, and can’t be split up. So monads are the true atoms of Nature—the elements out of which everything is made.

The text of Leibniz’s Monadology is not easy reading.  It is a heavily logic-based analysis.  The Internet Encyclopedia of Philosophy is one of many philosophy sites that discuss the document. There the point is clarified that:

Leibniz thus distinguishes four types of monads: humans, animals, plants, and matter. All have perceptions, in the sense that they have internal properties that “express” external relations; the first three have substantial forms, and thus appetition; the first two have memory; but only the first has reason (see Monadology §§18-19 & 29).

There is no formal correspondence between Pirsig and Leibniz.  But there are most certainly parallels.  Leibniz’s appetitions, for example, as explained by the Stanford Encyclopedia of Philosophy, are:

“tendencies from one perception to another” (Principles of Nature and Grace, sec.2 (1714)). Thus, we represent the world in our perceptions, and these representations are linked with an internal principle of activity and change (Monadology, sec.15 (1714)) which, in its expression in appetitions, urges us ever onward in the constantly changing flow of mental life. More technically explained, the principle of action, that is, the primitive force which is our essence, expresses itself in momentary derivative forces involving two aspects: on the one hand, there is a representative aspect (perception), by which that the many without are expressed within the one, the simple substance; on the other, there is a dynamical aspect, a tendency or striving towards new perceptions, which inclines us to change our representative state, to move towards new perceptions.  (emphasis added)

I’ve been intrigued for some time by the view of reality Leibniz gave us, in large part because of its unmistakeable mathematical character. But I’ve also been captivated by how non-materialistic it is.  Also from The Stanford Encyclopedia of Philosophy is this, about Leibniz’s philosophy of mind:

In short, Leibniz stands in a special position with respect to the history of views concerning thought and its relationship to matter. He rejects the materialist position that thought and consciousness can be captured by purely mechanical principles. But he also rejects the dualist position that the universe must therefore be bifurcated into two different kinds of substance, thinking substance, and material substance. Rather, it is his view that the world consists solely of one type of substance, though there are infinitely many substances of that type. These substances are partless, unextended entities, some of which are endowed with thought and consciousness, and others of which found the phenomenality of the corporeal world. The sum of these views secures Leibniz a distinctive position in the history of the philosophy of mind.

I thought it worthwhile to bring these ideas up again in the context of Jason Scott Nicholson’s response to Wigner.

Lines on ochre and the roots of creativity

A nice article, focused on the origins of creativity, appears in the March 13 issue of Scientific American. Its author, Heather Pringle, surveys research indicating that the human talent for innovation actually emerged hundreds of thousands of years ago, before Homo sapiens left Africa.  This is contrary to the previously held view that a genetic mutation ignited sudden cognitive advances in Homo sapiens already living on the European continent some 40,000 years ago.

Pringle describes the archeological evidence that motivates this revision in the story of our evolution.  She also fills the story in with new insights into the evolution of the modern human brain. While mathematics is never mentioned specifically, questions about the emergence of symbol in our experience are inevitably relevant to questions about the source of mathematical creativity.

I followed up on one of Pringle’s examples, finds at an archeological site on the very tip of Africa.

The hunter-gatherers who inhabited Blombos Cave between 100,000 and 72,000 years ago, for example, engraved patterns on chunks of ocher; fashioned bone awls, perhaps for tailoring hide clothing; adorned themselves with strands of shimmering shell beads; and created an artists’ studio where they ground red ocher and stored it in the earliest known containers, made from abalone shells.

I have given most of my attention to the engraved patterns on ochre.  Professor Christopher Henshilwood led the early investigation at this site.  His work continues more recently through the TRACSYMBOLS Project.  A wealth of information related to the project is easily accessed on their website.   Their home page has two videos and, in one of them, Ian Tattersall, curator at the American Museum of Natural History, introduces the work with a description of symbolic thought that certainly calls mathematics to mind (at least it does to my mind).

“The one thing that makes us feel unique is our extraordinary symbolic mental capacity.  We disassemble the world around us; we break it down into a mass of symbols. Then we recombine those symbols to remake the world in our heads.  The human brain is a product of 350 million years of vertebrate evolution.”

This last statement is indicative of the trend in cognitive neuroscience to put us back into the larger history of our lives.

Science News ran an article on the finds at Blombos Cave in June 2009.  There, author Bruce Bower refers to the ochre engravings found in Blombos Cave as ‘meaningful geometric designs.’ The article can be accessed on this website.

“What makes the Blombos engravings different is that some of them appear to represent a deliberate will to produce a complex abstract design,” Henshilwood says. “We have not before seen well-dated and unambiguous traces of this kind of behavior at 100,000 years ago.”

An earlier (2007) paper in the Journal of Archaeological Science focuses on another African excavation site, this one in the Western Cape. In that paper, Alex Mackay and Aara Welz define the terms of the debate, namely what we consider ‘design’ and how we understand ‘symbolic.’  I thought these observations worth including.

The question, ‘‘What, in archaeological terms, constitutes a symbol?’’, remains anything but clear (‘‘What isn’t a symbol?’’ even less so).

Indeed, the formation of lines through a series of actions strongly implies an element of design, regardless of whether it was expediently formulated or realised over multiple stages. By design we require only that the artisan(s) undertook the act(s) of scoring in order to give physical manifestation to a mental concept.

Whether or not engraved ochre necessarily carries any symbolic significance is a different matter. In order to be symbolic, it is necessary that the design has a cognitively constructed and conventionally maintained relationship with some other thing, either physical or conceptual (Chase, 1991; Noble and Davidson, 1991). Clearly, no such relationship can be demonstrated on the basis of the available evidence. Of course, the same argument can be made with regard to the engraved ochre from Blombos Cave. Though there is almost a self-evident sense of meaningfulness to the Blombos piece, this is not, in truth, sufficient to make any argument for its symbolic significance in the sense above.

Our suspicion is that the KKH ochre, the finds from Blombos, and the various other shell beads all had symbolic significance to their makers, and that some MSA people thus had the capacity to create and deploy symbols, and to store information externally. However, we must also accept the possibility that the motivations for engraving and breaking this particular piece were far more mundane…              (emphasis added)

Pringle completes her Scientific American piece with references to anthropological studies of the brain, and to the effect that the size of a population has on the likelihood that new ideas will emerge from within it and connect with other ideas that enhance their utility.  Regarding the brain,

At the University of California, San Diego, physical anthropologist Katerina Semendeferi has been studying a part of the brain known as the prefrontal cortex, which appears to orchestrate thought and action to accomplish goals. Examining this region in modern humans and in both chimpanzees and bonobos, Semendeferi and her colleagues discovered that several key subareas underwent a major reorganization during hominin evolution. Brodmann area 10, for example—which is implicated in bringing plans to fruition and organizing sensory input—nearly doubled in volume after chimpanzees and bonobos branched off from our human lineage. Moreover, the horizontal spaces between neurons in this subarea widened by nearly 50 percent, creating more room for axons and dendrites.

Researchers have imagined that it is this bigger brain that led to our ability to free-associate, and also to encode finer-grained memory.  But free association (linked to what neuroscientists call the brain’s default mode) needs analytic thought if we are to make something of freely associated connections. The body’s somehow learning to regulate subtly shifting concentrations of dopamine (and other neurotransmitters), so that we can switch smoothly from one mode to the other, may be one of the keys to our idea-driven modern lives. And this mechanism, they say, could have taken tens of thousands of years to fine-tune.  These ideas are now being tested on an artificial neural network. Pringle concludes:

Once that final piece of the biological puzzle fell into place—perhaps a little more than 100,000 years ago—the ancestral mind was a virtual tinder box, awaiting the right social circumstances to burst into flame.

I enjoyed this view of our ancestors.  And if our talent for free association is as critical to creativity as it would seem, one might wonder about how this electrochemical move from thought to thought actually translates into a useful idea.  I think this question runs parallel to many of our questions about how surprisingly effective mathematics can be in the exploration of our worlds.

The light that Einstein sees

I read another New Scientist article today, this one written by Brian Greene. While it didn’t give me a lot of new information, it made an interesting point about what it means (and when it is particularly effective) to take our mathematics seriously.  He talked about Einstein’s insight regarding the speed of light.  It was in the late 1800s, he explains, that Maxwell’s equations gave it the value of 300,000 kilometers per second (close to experimental measurements).  But the equations didn’t say anything about the standard of rest that gave this speed meaning.  Greene reminds us of the postulated invisible medium for transmitting light (the ether), which he calls a makeshift resolution to the problem. He then goes on to highlight a particular aspect of Einstein’s insight.

It was Einstein who in the early 20th century argued that scientists needed to take Maxwell’s equations more seriously. If Maxwell’s equations did not refer to a standard of rest, then there was no need for a standard of rest. Light’s speed, Einstein forcefully declared, is 300,000 kilometers per second relative to anything. The details are of historical interest, but I’m describing this episode for a larger point: everyone had access to Maxwell’s mathematics, but it took the genius of Einstein to embrace it fully. His assumption of light’s absolute speed allowed him to break through first to the special theory of relativity – overturning centuries of thought regarding space, time, matter and energy – and eventually to the general theory of relativity, the theory of gravity that is still the basis for our working model of the cosmos. (emphases added)

This is a detail about Einstein’s thinking that I hadn’t understood in quite that way.  It’s a provocative idea. Mathematical necessity overrides the expectations created by our physical intuition.  If the equation doesn’t depend on a standard of rest, then neither does the speed of light.  Mathematics, here, is acting much like a human sense, a mode of perception.
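
As an aside, the number Greene cites is one that Maxwell’s theory fixes all by itself: the speed of an electromagnetic wave is determined entirely by the vacuum permeability and permittivity, c = 1/√(μ₀ε₀), with no reference to any standard of rest. A quick check of the arithmetic, using the standard textbook constants:

```python
import math

mu_0 = 4e-7 * math.pi          # vacuum permeability, in H/m (classical value)
epsilon_0 = 8.8541878128e-12   # vacuum permittivity, in F/m

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"{c:.0f} m/s")          # ~299,792,458 m/s, i.e. ~300,000 km/s
```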

After reading this, my own thoughts went down a number of different paths, which I can’t recall well enough to repeat here.  But the precedence that mathematics has taken in physical theories eventually led me to look at discussions centered on whether reality is fundamentally made of material or meaning.  One of the schools of thought that reflects this question finds information to be more fundamental to reality than material.  Paul Davies and Niels Henrik Gregersen compiled a collection of essays that address this issue in the book Information and the Nature of Reality. In his introduction, Davies describes Einstein’s theories of special and general relativity as the first blow to our confidence in the idea of ‘matter.’

By stating the principle of an equivalence of mass and energy, the field character of matter came into focus, and philosophers of science began to discuss to what extent relativity theory implied a ‘de-materialization’ of the concept of matter.

Later, of course, quantum physics not only amplified this question, but also raised other yet unanswered questions about the significance of the observer.  Again from Davies:

A wave function is an encapsulation of all that is known about a quantum system. When an observation is made, and that encapsulated knowledge changes, so does the wave function, and hence the subsequent quantum evolution of the system. Moreover, informational structures also play an undeniable causal role in material constellations, as we see in, for example, the physical phenomenon of resonance, or in biological systems such as DNA sequences.

In an interview for the radio show To The Best of Our Knowledge Davies said this about the view of reality that quantum theory may be expressing:

…when we human beings, make observation of the world, we are interrogating nature, we are getting yes/no answers in the most primitive way. Every scientific experiment consists of doing exactly that. Come back to the simple example I gave where it is obviously true that the electron bounds to the left or the right. You get a yes/no answer. In the world of Quantum Physics, we get into another subtlety here. Which is the possibility of the super position. Now, you toss a coin it is heads or tails. But in Quantum Physics, if you toss a quantum coin, and this might be like the spin of particle or something, you can have a little of heads and another tails. Or a little bit of tails, but a lot of heads. You can have any mixture of the two.  In other words, an atom can be in the head and tails state, or in both states, at once. So in this sense the theory can take us to a God’s eye view, not a human view. Whenever human beings make observations, they get definite yes/no answers. But, if we could look to the world through these God-like eyes, and see the superposition, we would see that there is more than just yes and no or one and zero.

Certainly this opens the door to theological discussions, which the book does include. But just as interesting is the more fundamental question:  What is the mode of perception that mathematics provides?  Our visual systems structure the data that floods the retina.  To what does mathematics give structure?

Avalanches, structure, and expectations

New Scientist did an article in their February 6 issue called Mind Maths: Five laws that rule the brain.  

As is usually the case, the article’s allure is the suggestion that new research may hold the promise of capturing the brain’s complexity in just a few mathematical models.  And, as is usually the case, I find that studies such as these could be used as a springboard to ideas about the source of mathematics itself.  Unfortunately, you can’t read the article for free.  So I will note here what I believe are the key features, and why I care. One of the things they say early on is this:

What’s surprising is just how often the brain’s dynamics mimic other natural phenomena, from earthquakes and avalanches to the energy flow in a steam engine.

Yet this observation is not used to make a connection between internal and external events (by finding the activity of thought and sensation to be like the activity of the world around us). Despite this shortcoming, however, the broad range of things discussed is worth a look.  Mikhail Rabinovich at the University of California, San Diego, for example, finds that the behavior of cognitive patterns that fight for our attention is captured by predator-prey equations that predict fluctuations in populations of interacting species.

None ever manages to gain more than a fleeting supremacy, which Rabinovich thinks might explain the familiar experience of the wandering mind. “We can all recognize that thinking is a process,” he says. “You are always shifting your attention, step-by-step, from one thought to another through these temporary stable states.”

This is interesting and even reminds me of what 19th century philosopher Johann Friedrich Herbart once proposed, namely, that all ideas struggle to gain expression in consciousness, and compete with each other to do so.  He even used the term self-preservation to describe an idea’s tendency to seek and maintain conscious expression. (source in an earlier blog)

I’m attracted to the notion of “temporary stable states,” as this suggests the consistent potential for revolutions of thought, creative breakthroughs, and unexpected new structure, which is the very life of mathematics.
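
For readers curious about what such equations actually look like, here is a minimal sketch of the classic two-species predator-prey (Lotka-Volterra) model. It is my own toy illustration, not Rabinovich’s model of competing cognitive patterns, and the parameter values are arbitrary choices for the demonstration.

```python
import numpy as np

def lotka_volterra(prey0, pred0, alpha=1.0, beta=0.5, delta=0.2, gamma=0.6,
                   dt=0.01, steps=5000):
    """Integrate the classic predator-prey equations with simple Euler steps.

    d(prey)/dt = alpha*prey - beta*prey*pred
    d(pred)/dt = delta*prey*pred - gamma*pred
    """
    prey, pred = prey0, pred0
    history = []
    for _ in range(steps):
        dprey = (alpha * prey - beta * prey * pred) * dt
        dpred = (delta * prey * pred - gamma * pred) * dt
        prey, pred = prey + dprey, pred + dpred
        history.append((prey, pred))
    return np.array(history)

traj = lotka_volterra(prey0=2.0, pred0=1.0)
# Neither population settles down or wins outright: each gains only a
# fleeting supremacy before the other overtakes it, over and over.
print(traj[::1000].round(2))
```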

The article also discusses what is referred to as the avalanche of cascading firing in neurons.

The familiar chords of our favorite song reach the ear, and moments later a neuron fires. Because that neuron is linked into a highly connected small-world network, the signal can quickly spread far and wide, triggering a cascade of other cells to fire. Theoretically it could even snowball chaotically, potentially taking the brain offline in a seizure… This suggests there is a healthy balance in the brain – it must inhibit neural signals enough to prevent a chaotic flood without stopping the traffic altogether.

Jack Cowan, at the University of Chicago, has found that this balance represents a state known as the critical point, named “the edge of chaos” by theoretical physicists.
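
Here is a toy sketch of that balance, using a plain branching process rather than any real cortical model (the parameters and the cap on avalanche size are my own choices): each firing unit triggers, on average, sigma others. Below sigma = 1 activity quickly dies out, above it activity tends to explode, and right at sigma = 1, the critical point, avalanches of all sizes occur.

```python
import numpy as np

rng = np.random.default_rng(1)

def avalanche_size(sigma, max_size=10000):
    """Run one avalanche of a branching process with branching ratio sigma.

    Each active unit independently triggers a Poisson(sigma) number of new
    units; the avalanche ends when activity dies out (or hits the cap).
    """
    active, total = 1, 1
    while active and total < max_size:
        active = int(rng.poisson(sigma, size=active).sum())
        total += active
    return total

for sigma in (0.5, 1.0, 1.5):
    sizes = [avalanche_size(sigma) for _ in range(1000)]
    print(sigma, "mean avalanche size:", round(float(np.mean(sizes)), 1))
```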

But the two observations I found most interesting had to do with the integrating functions of the brain (those that are thought to produce conscious experience) and the brain’s predictive functions (that establish our expectations).  With respect to the former, the article explains:

An experience’s colors, smells and sounds are impossible to isolate from one another, except through deliberate actions such as closing your eyes. At the same time, each conscious experience is a unique, never-to-be-repeated event. In computational terms, this means that a seat of consciousness in the brain does two things: it makes sense of potentially vast amounts of information and, just as importantly, it internally binds this information into a single, coherent picture that differs from everything we have ever – or will ever – experience.

This is consistent with other ideas emerging from the field of computational neuroscience, which studies brain functions in terms of their information processing properties.  It is an interdisciplinary effort, linking fields such as neuroscience, cognitive science, engineering, computer science, mathematics, and physics. The article describes the cerebral cortex as home to many highly interconnected “rich club” hubs through which neural signals zip freely and build experience.

The latter considers the brain’s use of Bayesian statistics, named after the 18th century mathematician Thomas Bayes.  It is a way to calculate the probability of a future event, using what has happened in the past, while consistently updating expectations with new data.
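
As a toy illustration of this kind of updating (a simple beta-binomial model of my own choosing, not anything specific to the brain or to Friston’s theory): a prior expectation about how often an event occurs is revised, trial by trial, as new data arrive.

```python
def update(prior_a, prior_b, outcome):
    """Bayesian update of a Beta(a, b) belief about the probability of an
    event, after observing one trial (outcome=1 if the event happened)."""
    return prior_a + outcome, prior_b + (1 - outcome)

# Start with a vague expectation that the event happens about half the time.
a, b = 1, 1
observations = [1, 1, 0, 1, 1, 1, 0, 1]   # what has happened in the past

for outcome in observations:
    a, b = update(a, b, outcome)
    print(f"observed {outcome} -> expected probability now {a / (a + b):.2f}")
```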

For decades neuroscientists had speculated that the brain uses this principle to guide its predictions of the future, but Karl Friston at University College London took the idea one step further. Friston looked specifically at the way the brain minimizes the errors that can arise from these Bayesian predictions; in other words, how it avoids surprises. Realizing that he could borrow the mathematics of thermodynamic systems like a steam engine to describe the way the brain achieves this, Friston called his theory “the free energy principle.”

Even the authors of this free energy principle are hoping that it might provide a ‘unified brain theory.’  But if we just look at it another way, what we see is the brain making statistical calculations of some kind.  It’s using a Bayesian-like thing to establish our expectations.  This is a provocative idea.  It reminds me of Gregory Chaitin’s comment about what he calls biological software.  Chaitin has said, on more than one occasion, that only after we discovered artificial software could we imagine biology as an archeology of software, with respect to things like the coding property of DNA.


Are we finding the mathematical structure of reality?

I’m intrigued by Max Tegmark’s conviction that the universe is, itself, a mathematical structure.  He presented his ideas, again, on February 15 at the recent annual meeting of AAAS, in a symposium called Is Beauty Truth? He said that he has just completed a book on the same topic.  I listened to the entire session and I suspect that I won’t be able to get a really good sense for the meaning and the implications of what he proposes until I read the forthcoming book.  He did a brief interview for a short article in Science (published by AAAS) but the interview didn’t do much to clarify things.

I agree that, as he said, if you look closely at our current working assumption (which he called the external reality hypothesis), it is equally suspect. This external reality hypothesis assumes that the existence of the physical world is fully independent of us, that it doesn’t require an observer, and that it is in itself devoid of anything human.  But there is little doubt that our reality is a perceived reality, built from our interaction with it.  I want to hear more about what he means when he says that mathematical existence and physical existence are the same thing.  He describes mathematical structure easily, as “abstract entities with relations between them.”  They “don’t exist in space and time,” he says; rather, “space and time exist in them.” My hunch is that this is true, but how? He also said that he doesn’t believe that mathematics is a human creation.  He believes that we discover mathematical structure.  What may be human about it are just the names we give things.

I will admit that the fact that he is a cosmologist at MIT influences my expectations, so I want to know more clearly how he comes to this view, and what he expects such a view will change about how we imagine ourselves and the world around us.  My own view is that the same structure exists everywhere, in us and in everything of which we are a part.  And we have exploited this by formalizing it (in mathematics for example) then using it to see the things that extend far beyond our perceptual range.

The introduction to the symposium included the following:

In 1939, Paul Dirac observed that “the physicist, in his study of natural phenomena, has two methods of making progress”: experiment and observation, and mathematical reasoning. Although he said, “there is no logical reason why the second method should be possible,” nevertheless it works, and to great effect. The key, Dirac felt, was beauty, leading him to his principle that successive theories of nature are characterized by increasing mathematical beauty. The results of this were rich and included some predictions not confirmed until after Dirac’s death. Nevertheless, the powerful guidance Dirac found in mathematics did sometimes lead him astray, as he rejected the principle of “renormalization,” developed by Feynman, Schwinger, and Tomonaga, to remedy the nonphysical infinities that kept cropping up in Dirac’s equations for quantum electrodynamics. Even as other physicists accepted it, Dirac never did, saying it was “just not sensible mathematics.” Nevertheless, it was powerful physics.

Silvan Schweber of Brandeis University was the first to speak at the symposium. He provided participants with a number of enlightening facts in his brief survey of the history of mathematics’ relationship to physics.  His survey was fairly dense with information, and so hard to paraphrase.  He quoted Einstein responding in 1933 to the question: How does the physicist know that he can find the right way?  Einstein’s reply was this: “Nature is the realization of the simplest conceivable mathematical ideas.”  For Einstein, creativity resided in mathematics.  Schweber cited the emergence of mathematical physics as its own discipline, and commented that after World War II there was little talk about philosophies of science and mathematics (or about beauty and truth) by mathematical physicists and experimentalists.  The prevailing concerns became “getting the numbers out,” tackling the complexity of accelerator experiments, and the expanding use of computers.  But he also made the observation that prolific advances in computing inspired views like that of physicist-turned-biologist John Hopfield, who sees physical and biological processes as hierarchies of computations and computational devices.

It could seem like Schweber would be tempted to agree with Tegmark, but I’m pretty sure he doesn’t.


Networks: The brain, the internet, and the cosmos

I was completely captivated by something David Deutsch said in a TED talk in 2005.  This particular observation was not the theme of his talk.  But I found the language he chose to describe the working model of the universe (that physics and mathematics have provided) to be loaded with implications about human knowledge, even human awareness, and our ties to nature itself.  I’ve referred back to it in more than one blog, but I will reproduce it here again.  The chemical scum to which he refers is humanity.  It’s a phrase he borrows from Stephen Hawking.  Here Deutsch is describing the energy (that was pressed out by magnetic fields around a galaxy collapsing into a black hole) that shot out in jets (producing a quasar).  His observation is that this happened:

in precisely such a way that billions of years later, on the other side of the universe, some bit of chemical scum could accurately describe and model and predict and explain what was happening there, in reality.  The one physical system, the brain, contains an accurate working model of the other, the quasar, not just a superficial image of it (though it contains that as well) but an explanatory model embodying the same mathematical relationships and the same causal structure… The faithfulness with which the one structure resembles the other is increasing with time… This chemical scum has universality.  Its structure contains, with ever-increasing precision, the structure of everything…Physical objects that are as unlike each other as they could possibly be can, nevertheless, embody the same mathematical and causal structure and do it more and more so over time…This place is a hub which contains within itself the structural and causal essence of the whole of the rest of physical reality.  The fact that the laws of physics allow this or even mandate that this can happen is one of the most important things about the physical world.

Deutsch’s unique choice of words is brought to mind for me often. But today it seemed particularly appropriate, when I read a few reports of a study led by physicist and data analyst Dmitri Krioukov.  The study found a structural similarity among networks that included the brain, the internet and the universe.  A Live Science report describes, in broad outline, some of the ways that computer simulations were used to do the study.  The report explains:

The results, published Nov.16 in the journal Nature’s Scientific Reports, suggest that some undiscovered, fundamental laws may govern the growth of systems large and small, from the electrical firing between brain cells and growth of social networks to the expansion of galaxies.

“Natural growth dynamics are the same for different real networks, like the Internet or the brain or social networks,” said study co-author Dmitri Krioukov, a physicist at the University of California San Diego.

“For a physicist it’s an immediate signal that there is some missing understanding of how nature works,” Krioukov said,  “It’s more likely that some unknown law governs the way networks grow and change, from the smallest brain cells to the growth of mega-galaxies. This result suggests that maybe we should start looking for it”

I have come to see everything as the growth of structure, whether organic, inorganic, or conceptual (like language and mathematics), and I’m fairly sure that shared structure exists across the boundaries of these categories.  Mathematics has the particular power of revealing sameness despite apparent differences.  The San Diego study supports this view of shared structure and uses mathematics to find it.  As stated in UC San Diego’s press release:

“This is a perfect example of interdisciplinary research combining math, physics, and computer science in totally unexpected ways,” said SDSC (San Diego Supercomputer Center) Director Michael Norman.

“Such an explanation could one day lead to a discovery of common fundamental laws whose two different consequences or limiting regimes are the laws of gravity (Einstein’s equations in general relativity) describing the dynamics of the universe, and some yet-unknown equations describing the dynamics of complex networks,” added  Marián Boguñá, a member of the research team from the Departament de Física Fonamental at the Universitat de Barcelona, Spain.

Can we see where math begins and science ends?

Galileo is often called the father of modern science because of an insight he had about the relationship between mathematics, and what we are able to see in our world. Two of John Horgan’s recent blog posts (and the writing to which they refer) nicely demonstrate what I think is a remarkable oversight in discussions about the prospects for the future of science as we know it.  Neither John, nor any of the writers to whom he refers, consider the significance of the role that mathematics is playing in the development of scientific ideas and analyses.  None of them wonder about how mathematics shapes our views of reality.  If we want to consider that we’ve reached some limit to the progress that science can make, perhaps we should revisit Galileo’s original insight about what science is, and think again about the role mathematics plays.

Galileo understood that science could not be done without mathematics.  And it’s this science that so many seem to be worried about. From his book Il Saggiatore (The Assayer) published in 1623:

Philosophy [i.e. physics] is written in this grand book — I mean the universe — which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth.

The words that strike me are “continually open to our gaze, but not understood.”  Galileo’s insight was that mathematics could bridge the rift between the gaze and the understanding.  But how or even why does it happen?  Trying to get at the how might shed new light on what we call physical law.  Getting at why might lead to fresh ways to consider questions about consciousness and objectivity. These questions are at the bottom of any other questions we might have about the value or future of science. I’m not meaning to suggest that there are any easy or definitive answers to these questions, but they are certainly relevant to the questions being asked about what science has or may yet accomplish, and consistently overlooked.  The Euclidean geometry that lit Galileo’s way was not originally motivated by pragmatic concerns or by the use Galileo made of them.  There is something independent about the spirit of mathematics that we barely understand.  And the mathematical landscape has exploded with ideas since Euclid’s survey of geometry.  It is the wealth of conceptual forms, provided by mathematics, that has shaped the often bewildering, counter-intuitive ideas in modern physics.  It is mathematics that resolves the flood of experimental observations into the space-time continuum of Relativity, or into the quirky laws of quantum mechanics.  It is mathematics that leads some physicists to consider multiverse ideas, or leads others to the dispute over which is primary to the universe – material or information.  It should not be possible to leave mathematics out of a discussion of science, and of physics in particular.

Questions that address the emergence of mathematics, and the cognitive structures it may mirror, could give us a new way to tackle the enigmatic relationship between mind and matter (as we now imagine them).  Mathematics is, after all, an almost purely introspective science, yet it builds the science of material, the structure of modern physics.

I remember reading John Horgan’s 1996 book The End of Science and really enjoying it. The intimacy of his interviews brought real vitality to the ideas.  His latest post is called The End-of-Science Bandwagon is Getting Crowded.  In it he quotes from some of the responses to the Edge.org question, What should we be worried about?  He’s chosen scientists concerned with the future of science.  Horgan also references a Nature essay by Dean Keith Simonton, who argues:

Our theories and instruments now probe the earliest seconds and farthest reaches of the Universe, and we can investigate the tiniest of life forms and the shortest-lived of subatomic particles. It is difficult to imagine that scientists have overlooked some phenomenon worthy of its own discipline alongside astronomy, physics, chemistry and biology…Future advances are likely to build on what is already known rather than alter the foundations of knowledge.

But it seems to me that if we can manage to get our attention on some useful questions about the more specific nature of this knowledge – about what it’s made from and how we built it – then we might begin to see a revolutionary way to extend its limits.  We might replace our current notion of objectivity with something more appropriate to the interplay between ourselves and the world that we see in pure sensation as well as mathematics.  Mathematics is not just the tool.  It is the strategy.


Chaitin, creativity, biology and mathematics

I was looking today, once again, at Gregory Chaitin’s most recent work which is described in his book Proving Darwin.  I realized that much of what has been written about this work (even what I have written) doesn’t give adequate attention to the crucial shifts in perspective that metabiology proposes. Chaitin says concisely:

According to metabiology the purpose of life is creativity.  It is not preserving one’s genes.  Nothing survives, everything is in flux, ta panta rhei, everything flows, all is change as in Heraclitus. (emphasis my own)

Metabiology explores randomly evolving artificial software (computer programs) in the hope that it will reflect randomly evolving natural software (DNA).  Chaitin’s work is built on many ideas, in mathematics and in biology.  One of the most significant of these is the observation of the infinite complexity of mathematics and its ‘incompleteness,’ which allows an equivalence to be drawn between mathematical creativity and biological creativity.  But Chaitin also subscribes to the view that the world is built out of mathematics,

that the ultimate ontological basis of the universe is mathematical, which is the hardest, sharpest, most definite substance there is, static, eternal, perfect.

And he goes on to say that:

…our physical world is but an infinitesimal portion of the world of mathematical ideas, which includes all possible physical universes, and which is all that exists, all that really is…But, following Godel, our knowledge of that perfect world is always incomplete, always partial, and constantly changing.

Chapter Eight is given the title What Can Mathematics Ultimately Accomplish?  And here Chaitin characterizes the living mathematics he has observed:

Math evolves, math is completely organic.  I am not talking about what Newtonian math might ultimately be able to achieve, nor what modern Hilbertian formal axiomatics might ultimately be able to achieve (see Jeremy Gray, Plato’s Ghost: The Modernist Transformation of Mathematics, Princeton University Press, 2008), http://press.princeton.edu/chapters/i8833.pdf and not even what our current postmodern math might ultimately be able to achieve.  Each time it faces a significant new challenge, mathematics transforms itself. (emphasis my own)

The idea that life itself is creativity, and that our knowledge of it is always incomplete, is a view of things that I believe mathematics easily inspires.  My own experience with mathematics has always led me in this direction, both within the confines of my very personal experience as well as when I explore ideas in science, philosophy and art.  It’s a provocative and optimistic view.  Mathematics seems to be coming from us, yet it keeps giving us images of the larger thing of which we are a part, to the point of showing us that we can never fully know that larger thing.  We are seeing something about us and the world.

Chaitin considers the political and theological implications of these ideas as well, which he addresses in chapters 6 and 7.  All of the ideas on which Chaitin’s work is based are big ideas, and there are many references in the text to related thoughts.  And it does seem to be the big ideas to which Chaitin is devoted when he concludes:

Even if almost everything in this book is wrong, I still hope that Proving Darwin will stimulate work on mathematical theories of evolution and biological creativity.  The time is ripe for creating such a theory.