
Dante, art, vision, and mathematics

We adopted a dog a couple of months ago, and there have been moments when I have watched a change in his attention or behavior and wondered how his awareness might be structured.  When we drive with him, he usually sits in front of the back seats of our Honda Element, appearing to look out the front windshield (although I think he’s too low to see anything).  I look back at him occasionally when I’m driving, and he often meets my glance.  I’ve wondered whether it would be possible for him to meet my glance in the rearview mirror, the way my children do.  This question led me to think about the structure of our awareness and whether, or how, it might be related to mathematics.  In the completely familiar trick of light that causes a reflection, we find meaning in seeing exactly the same visual image (which the brain already works to compute) from two different perspectives.  My children may not know anything about the reflective properties of light, but they know how to respond to a reflection.  What is it that the brain is doing when it learns to occasionally expect the presence of equivalent visual stimuli from more than one source?  When my daughter speaks to me through the rearview mirror of our car, her brain is doing facial recognition processing along with something that has to do with sensing space, direction, and light. Perhaps the mathematical investigation of projections is the symbolic representation of some of this neuronal action.
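As an aside, the optics behind that familiar trick can be written down in a line or two.  The notation below (a mirror plane through a point a with unit normal n) is just my own shorthand for the standard geometry, not a claim about what any brain computes:

```latex
% Virtual image p' of a point p, and reflected direction d' of a ray d,
% for a mirror plane through the point a with unit normal n:
\mathbf{p}' = \mathbf{p} - 2\big((\mathbf{p}-\mathbf{a})\cdot\mathbf{n}\big)\,\mathbf{n},
\qquad
\mathbf{d}' = \mathbf{d} - 2\,(\mathbf{d}\cdot\mathbf{n})\,\mathbf{n}.
```

Dropping the factor of 2 in the first formula gives the orthogonal projection of p onto the mirror plane, so a reflection is quite literally built out of a projection – which may be why the two keep company in my thinking.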

I was reminded of this question, and the impact that a new visual/spatial configuration might have on how we understand what we perceive, when Davide Castelvecchi recently blogged about the resemblance between Dante’s early 14th century description of the cosmos and the one we’ve built with modern mathematics.  

The cosmic microwave background, or CMB, shows us a slice of the universe as it looked more than 13.7 billion years ago, and the structure of that universe bears a striking resemblance to that of Dante’s heaven—at least according to some commentators. It is as if the poet had presaged some of the most striking developments of modern mathematics and cosmology six centuries before they emerged.

The parallel rests, to some extent, on what it means to “look” at space that has a non-Euclidean geometry.  This is far more complex than what it means to look at a reflected image, but for me the two are related.  Castelvecchi does a nice job of describing a perspective that contradicts our experience but, in fact, is geometrically possible and, more to the point, is used to understand the structure of the perceived universe.  Dante journeys through a series of concentric spheres centered at the earth and extending out to the stars.  Past the stars is another sphere that encloses the entire physical universe, and by crossing its boundary he enters the spiritual realm.

The otherworld however also has a geometric structure, and it is completely symmetrical to that of the physical world, with nine concentric spheres, which are inhabited by angels and the souls of the most virtuous dead. But instead of growing ever larger, these spheres grow ever smaller. And at the center, Dante says, sits God, occupying a single point and emanating a blinding light.

Thus Dante’s entire universe—both physical and spiritual—consists of two sets of concentric spheres, one centered at Earth, the other at God. If you were to point a laser vertically up toward the sky from any point on Earth, you’d be pointing it straight at that single point where Dante places God.

This is not the first time that the mathematics of Dante’s universe has been discussed.  Mathematician David Pierce wrote on the same topic back in 2008 in response to a series of quotations about infinite spheres.  And while still an undergrad (in 1979), Mark Peterson did a really nice job of describing how Dante’s universe looks like a 3-sphere.  Peterson is the author of the book Galileo’s Muse, in which he makes the claim that it was the art of the Renaissance (not the science) that led the way to modern science.
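Peterson’s observation can be made concrete with a little notation; the coordinates below are my own shorthand, not anything in Dante or, necessarily, in Peterson’s paper.  The 3-sphere is the set of points at unit distance from the origin in four-dimensional space:

```latex
S^3 = \{(x_1,x_2,x_3,x_4)\in\mathbb{R}^4 \;:\; x_1^2+x_2^2+x_3^2+x_4^2 = 1\}.
```

Slice it by fixing the fourth coordinate and you get a family of ordinary 2-spheres that begin as a point at one pole, grow to a largest “equatorial” sphere, and shrink back to a point at the opposite pole.  Read one pole as Earth and the other as the point of light where Dante places God, and the two nested families of growing and then shrinking spheres are, on this reading, exactly the structure the Commedia describes.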

All of these insights circle around the idea that mathematics gives us a precise way to access things that it may be possible to just “see,” the way an artist might, or a savant. And I am of the opinion that the pursuit of these ideas may help unlock some of the secrets, not just of nature, but of perception itself.  One of the references in the Castelvecchi blog is the book Poetry of the Universe by the late Robert Osserman.

Early in the book Osserman refers to mathematics as ‘mind goggles’ that make it possible to clarify images beneath the surface of our world.  He says that his book:

is a celebration of the human imagination – the facility to make the kind of mental leaps without which the impact of the outer world on our senses would be mostly noise. Mathematical imagination and imagery, closely linked, provide the vision that allows us to see the hidden but exquisite structure below the surface.  (emphasis my own).

 

Anosognosia, Consciousness and Mathematics

In last week’s post, I reported on the work of computer scientist Jürgen Schmidhuber (his artificial curiosity) and neuroscientist Gerald Edelman.  I would like to follow up with a bit more about Edelman’s work and perspective, in part because I was captivated by a story he told (in more than one venue) to illustrate the fact that the brain “is not a machine for logic but in fact a construction that does pattern recognition. And it does it by filling in, in ambiguous situations.”  I’m quoting from an interview that appeared in Discover. Here’s the story as he told it:

There’s a neurologist at the University of Milan in Italy named Edoardo Bisiach who’s an expert on a neuropsychological disorder known as anosognosia. A patient with anosognosia often has had a stroke in the right side, in the parietal cortex. That patient will have what we call hemineglect. He or she cannot pay attention to the left side of the world and is unaware of that fact. Shaves on one side. Draws half a house, not the whole house, et cetera. Bisiach had one patient who had this. The patient was intelligent. He was verbal. And Bisiach said to him, “Here are two cubes. I’ll put one in your left hand and one in my left hand. You do what I do.” And he went through a motion.

And the patient said, “OK, doc. I did it.”

Bisiach said, “No, you didn’t.”

He said, “Sure I did.”

So Bisiach brought the patient’s left hand into his right visual field and said, “Whose hand is this?”

And the patient said, “Yours.”

Bisiach said, “I can’t have three hands.”

And the patient very calmly said, “Doc, it stands to reason, if you’ve got three arms, you have to have three hands.”

I found a five-part series of essays by Errol Morris in a 2010 New York Times blog that considers some of the implications of this anomaly.  The title of the series was The Anosognosic’s Dilemma: Something’s Wrong but You’ll Never Know What It Is.

These essays use the anosognosic’s experience to highlight the difference between the mysteries we can solve (questions we can formulate but not yet answer) and “unknown unknowns,” questions of which we cannot conceive.  For David Dunning, a social psychologist at Cornell, “ignorance profoundly channels the course we take in life.  And unknown unknowns constitute a grand swath of everybody’s field of ignorance.”  Applied to the sciences, Morris quotes Freeman Dyson (from “What Price Glory?,” a review of Steven Weinberg’s Lake Views: This World and the Universe, New York Review of Books, June 10, 2010).  Dyson is addressing the possibility of understanding everything when he says:

“I would be disappointed if nature could be so easily tamed.  I find the idea of a Final Theory repugnant because it diminishes both the richness of nature and the richness of human destiny.  I prefer to live in a universe full of inexhaustible mysteries, and to belong to a species destined for inexhaustible intellectual growth.”

The thoughts that ran through my mind circled around the way that mathematics, as an almost pure study of concepts, has opened up previously inconceivable possibilities – like curved space or the quantum mechanical world.

I listened to Edelman speak in an interview and a brief talk, and he has a lot to say about the processes that bring consciousness about, and about what seems to be unique in what he calls the “higher order consciousness” of human experience.  He begins with the idea that the brain is embodied and that the body and the brain are embedded in the world.  Perception he describes as the brain carving up the world into something sensible – what we see and what we hear.  It is with language (in particular the effectiveness of syntax) that we can remove ourselves from the remembered present (primary consciousness), and this creates a higher order consciousness.  He often makes the point that the power of this language rests more in its ambiguity than in its clarity, because it can describe situations that are not precisely defined.  He attributes our sense of individuality to our sense of our own movements (proprioception) and acknowledges that we don’t yet understand motor control very well.  The anosognosic’s experience he describes as the brain adjusting to what the body is equipped to do.  He also made an interesting claim – that evidence of artificially created mental activity (mental activity in an artifact) would be that the artifact could mentally rotate an object, or see the object in the rotation.  Rotations always make me think of mathematics.

A few of Edelman’s observations are particularly relevant to how I think about mathematics.  One is that family arguments as well as solutions to complex mathematics problems are happening within the same system of combinatorial interactions, which he calls the dynamic core of consciousness.  These interactions involve signals from the world (both input and output), signals from the body, and the brain speaking to itself through memory, fantasy and imaging.  About this he says the combinatorial interactions are “hair-raising” and we don’t have measurement systems that can understand them microscopically.  But he seems to regularly associate mathematics with the “precision of computing,” rather than the powerful “ambiguity of language.”  I would suggest, however, that the conceptual grounding of modern mathematics shares something with Edelman’s ideas about the power of ambiguity in language; the structure and range of mathematics’ applicability were enhanced by its very broad generalizations.  For Edelman, association and metaphor start things off and then computation is applied.  But mathematics occurs in both.  And I would agree with Dyson.  I also prefer “to live in a universe full of inexhaustible mysteries, and to belong to a species destined for inexhaustible intellectual growth.”  I often see mathematics as the evidence for, as well as the access to, these inexhaustible mysteries.

Edelman ended one of his talks with this poem by Emily Dickinson, and I liked it very much:

The Brain – is wider than the Sky –

For – put them side by side –

The one the other will contain

With ease – and you beside –

 

The Brain is deeper than the sea –

For – hold them – Blue to Blue –

The one the other will absorb –

As sponges – Buckets – do –

 

The Brain is just the weight of God –

For – Heft them – Pound for Pound –

And they will differ – if they do –

As Syllable from Sound –

Compression, meaning, and mathematics

One of the more interesting applications of algorithmic action can be seen in Jürgen Schmidhuber’s work on artificial curiosity.

Schmidhuber has been building what he calls ‘artificial scientists and artists’ that possess an algorithmic mechanism for motivating invention. He provides a brief and fairly straightforward description of his creative machines in the transcript of a talk he gave at TEDxLausanne on January 20.

Let me explain it in a nutshell. As you are interacting with your environment, you record and encode (e.g., through a neural net) the growing history of sensory data that you create and shape through your actions.

Any discovery (say, through a standard neural net learning algorithm) of a new regularity in the data will make the code more efficient (e.g., less bits or synapses needed, or less time). This efficiency progress can be measured — it’s the wow-effect or fun! A real number.

This ‘efficiency progress’ or ‘learning progress’ is the ongoing and successful compression of data, the discovery of regularities or symmetries that reduce the work necessary to encode the data.  The webpage describing Schmidhuber’s Theory of Creativity says that

Since 1990 Jürgen Schmidhuber has built curious, creative agents that may be viewed as simple artificial scientists & artists with an intrinsic desire to explore the world by continually inventing new experiments. They never stop generating novel & surprising stuff…..

It’s an interesting model, and clearly effective.  The question, of course, is to what extent it accounts for human creativity.  It does make use of an important development in artificial intelligence – the artificial Recurrent Neural Network – one that mimics the feedback mechanisms in the brain.  But it is still algorithmic in nature.
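To make the compression-progress idea concrete, here is a minimal sketch.  It is not Schmidhuber’s actual mechanism – his compressor is an adaptive predictor (a recurrent network, say) and the reward is the improvement of that predictor – so treat the off-the-shelf zlib compressor and the particular bookkeeping below as my own stand-ins.  The sketch only measures how much cheaper a new observation becomes when it can be encoded together with the history already seen:

```python
import os
import zlib

def cost(data: bytes) -> int:
    # Stand-in "compressor": the zlib-compressed size in bytes.
    return len(zlib.compress(data))

def compression_progress(history: bytes, obs: bytes) -> int:
    """A crude proxy for the 'wow-effect': how many fewer bytes the new
    observation costs when encoded together with the history than it
    would cost encoded on its own."""
    incremental = cost(history + obs) - cost(history)
    return cost(obs) - incremental

history = b""
for label, obs in [("pattern", b"ab" * 32),
                   ("pattern again", b"ab" * 32),
                   ("noise", os.urandom(64))]:
    print(label, "progress:", compression_progress(history, obs))
    history += obs
```

Run it and the repeated pattern earns a noticeably larger number than the incompressible noise at the end, which is roughly the shape of the curiosity signal Schmidhuber describes.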

Yet Nobel laureate and neuroscientist Gerald Edelman thinks it’s clear that the brain does not function algorithmically.  Edelman has been working on a theory of mind since the late 1970s.  During an interview for Discover in 2009, he says that someday scientists will make a conscious artifact.  He was then asked whether, by proposing the possibility of artificial consciousness, he was comparing the human brain to a computer.  His answer was:

No. The world is unpredictable, and thus it is not an unambiguous algorithm on which computing is based. Your brain has to be creative about how it integrates the signals coming into it. And computers don’t do that. The human brain is capable of symbolic reference, not just syntax. Not just the ordering of things as you have in a computer, but also the meaning of things, if you will.

I think the key here is probably ‘meaning,’ which for Edelman is what brings humanity to its current level of consciousness.  Also during the interview, Edelman describes the evolution of our consciousness in this way:

About 250 million years ago, when therapsid reptiles gave rise to birds and mammals, a neuronal structure probably evolved in some animals that allowed for interaction between those parts of the nervous system involved in carrying out perceptual categorization and those carrying out memory. At that point an animal could construct a set of discriminations: qualia. It could create a scene in its own mind and make connections with past scenes. At that point primary consciousness sets in. But that animal has no ability to narrate. It cannot construct a tale using long-term memory, even though long-term memory affects its behavior. Then, much later in hominid evolution, another event occurred: Other neural circuits connected conceptual systems, resulting in true language and higher-order consciousness. We were freed from the remembered present of primary consciousness and could invent all kinds of images, fantasies, and narrative streams.(emphases my own).

Edelman pursues the creation of conscious artifacts by constructing what he calls brain-based devices (BBDs).  Their intent is to model the brain for the sake of understanding it, not imitating it.

It looks like maybe a robot, R2-D2 almost. But it isn’t a robot, because it’s not run by an artificial intelligence [AI] program of logic. It’s run by an artificial brain modeled on the vertebrate or mammalian brain. Where it differs from a real brain, aside from being simulated in a computer, is in the number of neurons. Compared with, let’s say, 30 billion neurons and a million billion connections in the human cortex alone, the most complex brain-based devices presently have less than a million neurons and maybe up to 10 million or so synapses, the space across which nerve impulses pass from one neuron to another.

What is interesting about BBDs is that they are embedded in and sample the real world. They have something that is equivalent to an eye: a camera. We give them microphones for the equivalent of ears. We have something that matches conductance for taste. These devices send inputs into the brain as if they were your tongue, your eyes, your ears. Our BBD called Darwin 7 can actually undergo conditioning. It can learn to pick up and “taste” blocks, which have patterns that can be identified as good-tasting or bad-tasting. It will stay away from the bad-tasting blocks, which have images of blobs instead of stripes on them —rather than pick them up and taste them. It learns to do that all on its own.

There is some fundamental disagreement about how or whether one can artificially produce a creative, intelligent agent.  But a few things in each of these perspectives got my attention.  One is that curiosity and inventive action can be understood as compression, that the way to compression is through finding regularity and symmetry, and that the strategy can be reproduced with software.  It says something about the power of the strategy and the software.  I remember reading about how language can be seen as cognitive compression.  I thought at the time that mathematics excelled at compressing meaning.  And mathematics studies exactly the strategies of compression (like symmetry, regularity and pattern).  Compression must have very broad application in cognitive science.  No doubt this software mimics something biological.

The other is that Edelman’s work can only be understood when the brain is understood in a wholly biological way – embedded in the body, which is embedded in the world.  This includes Edelman’s evolutionary view of learning:

In Edelman’s grand theory of the mind, consciousness is a biological phenomenon and the brain develops through a process similar to natural selection. Neurons proliferate and form connections in infancy; then experience weeds out the useless from the useful, molding the adult brain in sync with its environment.

The creativity of Schmidhuber’s agents is impressive, but how likely is it that they would invent a mathematical idea?  It is, I think, the evolutionary process that Edelman is exploring that brings about mathematics.

Seeing, dreaming and mathematics

I was struck by the clarity of statements made about perception in a recent Mind Hacks post.  Reporting on a talk he had just given in Berlin, Tom Stafford says this:

Perception is the production of meaning, not the production of images. Our associations and experience are incorporated in the act of perception, so that they are intrinsic to the perceptual act (not somehow added “on top”, or as an after thought).

That one cannot separate seeing from comprehension is a vital part of how the visual brain is now understood.  Also gaining traction in biology is the idea that there is a kind of continuum of cognitive mechanisms – from the chemistry of proteins receiving and transmitting signals, to the cognitive acts of organisms.  Cognition, in this sense, is consonant with living.

Stafford used the idea that perception is the production of meaning to explore why a city can be so difficult to navigate for someone from another country.

Native city dwellers have learned to read the city, through experience forming webs of association that build up into symbols. This allows them to instantly perceive what different scenes in the city mean for how they should act.

These “webs of associations” are probably the way we do anything.  It is the way the body establishes its reality.  And “web of association,” of course, will inevitably lead me to mathematics.  But this time I would like to get there through another observation  – about sleep.

Back in 2007, Radiolab did a show about sleep.  The show’s segment on dreams featured two individuals who each investigate cognitive activity that happens during dreaming.  The first to speak was Robert Stickgold of the Harvard Medical School, who has found a way to study the relationship between memories and dreams.  One of his talks is posted on YouTube.  It’s called Sleep, Memory and Dreams: Fitting the Pieces Together.  He summarizes his main points with the following list of what happens during sleep-dependent memory processing.  During sleep, he explains, the brain:

stabilizes, enhances and integrates new memories

extracts rules and the gist of things – the idea common to multiple experiences

integrates new and old memories

imagines possible futures

These are all about meaning, like the meaning that the visual brain gives to perceived objects, or that language gives to experience, or that mathematics gives (even to itself).

It’s also interesting that Stickgold has been able to demonstrate that, during sleep, the brain plays with memories that may not be available for conscious recall.  This is probably true of any integrative or synthesizing cognitive activity.  He co-authored a paper on sleep-dependent learning where he describes a classification scheme for the many things we call memories.  The paper can be found here.

The bit of information that MIT professor Matt Wilson contributed to the Radiolab broadcast was particularly interesting to me and loaded, I thought, with potential insights into learning and creativity.  In his own research, Wilson has learned how to read the sound of the electrical signals from thousands of individual nerve cells in the brains of rats.  He could identify when sleeping rats replayed or revisited maze routes that they had learned when they were awake.  But the very interesting piece is that when a rat was given more than one maze route to run, it began to invent entirely new routes while it was asleep.  Wilson described the dream-like creativity of this rat as a demonstration of how sleep can provide a unique opportunity to learn.  In our experience, he suggested, this would give us a way to take two apparently unrelated things, find the connection, the hidden rules (or gist, as Stickgold calls it), and create something new.  That a perceived sameness triggers invention (where a single experience does not) is a really interesting detail.  The experience of more than one opens the door to ‘others.’  It is as if some idea of a ‘class’ of things has been introduced.  There isn’t an obvious reason for why the rat begins to synthesize and invent.  But it does cause me to wonder about what would cause this spontaneous, sleep-driven invention to transition into a more directed one.

At the source of most mathematical invention is a perceived sameness – like all instances of two things or all possible triangles or all possible geometries, the space of all functions or all possible spaces.  This work on sleep, together with what has been understood about cognition in general, suggests that the body is always poised to structure, invent and comprehend, even without any direction from our conscious mind. And I myself have little doubt that this has contributed to the emergence and development of mathematics.

Leibniz’s Insight? Looking forward and back

Leibniz disassociated ‘substance’ from ‘material’ and reasoned that the world was not fundamentally built from material.  His is not simple or familiar reasoning, but it was clear to Leibniz that for a substance to be real, it had to be indivisible, and since matter is infinitely divisible, the true nature of reality could not be material.  This bit of philosophical history startled the students in one of my calculus classes, who were fully embedded in the materialist perspective of the sciences, and whose only experience with the name Leibniz came from his role in the development of calculus.  But even today, there is disagreement about whether the universe is made from matter or from concepts. As Frank Wilczek says in The Lightness of Being,

Philosophical realists claim that matter is primary, brains (minds) are made from matter, and concepts emerge from brains.  Idealists claim that concepts are primary, minds are conceptual machines and conceptual machines create matter.

Wilczek was making the point that Wheeler’s “it from bit” idea (that the universe is composed of information) provides a way for both to be true.  There’s not much value in wondering what the word ‘machine’ actually means in this context.  But the idealist’s claim is consistent with Leibniz’s and, often, computational ideas are traced back to Leibniz.  What’s important, in my opinion, is that the debate persists.  In a post earlier this year, I reported on the view being explored by physicist Max Tegmark, who proposes that perhaps reality is so well described by mathematics because our physical world is a mathematical structure.  He once said this:

So I think we’re all living in a gigantic mathematical object – not one of the simple ones that we learn about in high school math. We’re not living inside of a cube or a dodecahedron or in the set of integers, but there’s some more complicated mathematical object, maybe M-theory, maybe some – more likely something we haven’t discovered yet which somehow is our reality.

In April, I collected some references to quantum information theory and Vlatko Vedral, whose idea that information (defined to a large extent by probability) builds the fabric of the universe is discussed in his book Decoding Reality.  And just recently, I was introduced to Gregory Chaitin’s latest work, where mathematics and biology are nicely woven together as he explores the idea that life is evolving software (his most recent book is Proving Darwin: Making Biology Mathematical).  As I see it, we are finding mathematics in the way we perceive, and in everything around us.  Cognitive neuroscientists see it emerging from the hard-wiring that the human organism shares with other creatures (namely its talent for discerning and encoding magnitudes), and some cosmologists imagine it is the very fabric of the universe.  Chaitin’s work finds it provocatively equivalent to how we understand a living thing.  In an essay on Leibniz, Complexity and Incompleteness, Chaitin makes the following observation:

You see, the Discours was written in 1686, the year before Leibniz’s nemesis Newton published his Principia, when medieval theology and modern science, then called mechanical philosophy, still coexisted. At that time the question of why science is possible was still a serious one. Modern science was still young and had not yet obliterated all opposition.    (emphasis my own)

All of this new rumbling about mathematics and reality encourages a hunch that I have had for a long time – that the next revolution in the sciences will come from a newly perceived correspondence between matter and thought, between what we are in the habit of distinguishing as internal and external experience, and that it will enlighten us about ourselves as well as the cosmos.  New insights will likely remind us of old ideas, and the advantage that modern science has over medieval theology will wane.  I expect mathematics will be at the center of it all.  Leibniz had a philosophical dream in which he found himself in a cavern with “little holes and almost imperceptible cracks” through which “a trace of daylight entered.”  But the light was so weak, it “required careful attention to notice it.”  His account of the action in the cavern (translated by Donald Rutherford) describes this:

One frequently heard voices which said, “Stop you mortals, or run like the miserable beings you are.” Others said, “Raise your eyes to the sky.” But no one stopped and no one raised their eyes… I was one of those who was greatly struck by these voices. I began often to look above me and finally recognized the small light which demanded so much attention. It seemed to me to grow stronger the more I gazed steadily at it. My eyes were saturated with its rays, and when, immediately after, I relied on it to see where I was going, I could discern what was around me and what would suffice to secure me from dangers. A venerable old man who had wandered for a long time in the cave and who had had thoughts very similar to mine told me that this light was what is called “intelligence” or “reason” in us. I often changed position in order to test the different holes in the vault that furnished this small light, and when I was located in a spot where several beams could be seen at once from their true point of view, I found a collection of rays which greatly enlightened me. This technique was of great help to me and left me more capable of acting in the darkness.

Mathematics and the Higgs

In general, I tend to resist talking about the thing that everyone is talking about, but I find reason to make an exception today.  I do want to say something about yesterday’s announcement from physicists at the LHC that they saw the Higgs particle.  Frank Wilczek describes the significance of this observation (particularly nicely) in a blog post that appears on both the FQXi Blog and NOVA’s Nature of Reality Blog.  He says:

It confirms, as it completes, the Standard Model of fundamental physics. It hints at the splendid new prospect of supersymmetry while debunking rival speculations. Most fundamentally, it reaffirms our scientific faith that nature works according to precise yet humanly comprehensible laws—and, importantly, rewards our moral commitment to testing that faith rigorously.

Those “precise yet humanly comprehensible laws” are interpreted mathematics.  In this context, “humanly comprehensible” means mathematically formulated.  I’m not suggesting that the observation is an achievement in mathematics.  It is not.  I only want to take note of the fact that mathematics is the reasoning, as well as the strategy, that brings this quantum mechanical world into view.  The extent to which the effort rests on mathematics is as important as the extent to which it rests on the accelerator.  Yet the point is rarely made.

In an earlier post, however, one that anticipated the July 4th announcement, Frank Wilczek points, more than once, to mathematics.  He explains that modern physics proposes a way to simplify the laws, or the equations, that describe nature if we’re willing to see the empty space of our everyday perception as a medium “whose influence complicates how matter is observed to move.”

He refers to this as “a time-honored, successful strategy.”  Classical mechanics, for example, which postulates complete symmetry among the three dimensions of space, wouldn’t account for the actually observed motions (that are not symmetric in all directions) without the idea of “a pervasive gravitational field.”

A much more modern example occurs in quantum chromodynamics (QCD), our fundamental theory of the strong force between quarks and gluons. There we discover that the universe is filled with a medium, the sigma (σ) field, that forms a sort of cosmic molasses for protons and neutrons. The σ field slows protons and neutrons down. Allowing a bit of poetic license, we can say that the σ field gives protons and neutrons mass. Many consequences of the σ field have been calculated and successfully observed, so that to modern physicists it is now every bit as real as Earth’s gravity field. But the σ field exists everywhere and everywhen; it is not tied to Earth.

In the theory of the weak force, we need to do a similar trick for less familiar particles, the W and Z bosons. We could have beautiful equations for those particles if their masses were zero; but their masses are observed not to be zero. So we postulate the existence of a new all-pervasive field, the so-called Higgs condensate, which slows them down. This proposal, which here I’ve described only loosely and in words, comes embodied in specific equations and leads to many testable predictions. This proposal has been resoundingly successful.  (emphasis my own)

Since no known matter had the properties necessary for the Higgs condensate, the search was on for the one that did.  To answer the question of what, specifically, the Higgs particle is, Wilczek says the following:

There’s a quotation I love from Heinrich Hertz, about Maxwell’s equations, that’s relevant here.

To the question: “What is Maxwell’s theory?” I know of no shorter or more definite answer than the following: “Maxwell’s theory is Maxwell’s system of equations.”

Similarly, Higgs particles are the entities that obey the equations of Higgs particle theory. Those equations prescribe everything about how Higgs particles move, interact with other particles, and decay—with just one, albeit glaring, exception: The equations do not determine the mass of the Higgs particle. The theory can accommodate a wide range of values for that mass.

And so a tremendous amount of analysis has been done to narrow the range within which the Higgs particle mass is expected to be.  Wilczek also explains:

Physicists will have used intricate equations and difficult calculations to predict not only the mere existence of the Higgs particle, but also (given its mass) its rate of production in the complex, extreme conditions of ultra high energy proton-proton collisions. Those equations will also have accurately rendered the relative rates at which the Higgs particle decays in different ways. Yet the most challenging task of all may be computing the much larger, competing background “noise” from known processes, in order to successfully contrast the Higgs’ “signal.” Virtually every aspect of our current understanding of fundamental physics comes into play, and gets a stringent workout, in crafting these predictions.

Because the equations in the Standard Model stipulate four different forces (the strong, the weak, electromagnetic and gravitational) and six different materials that they act on, the search is always on for the simpler, prettier, unified theory.  There are proposals for such a theory, where there is only one kind of material and one force.  To make them work quantitatively, the equations of the Standard Model have to be expanded to accommodate a concept called supersymmetry (one of a number of proposals) and supersymmetry predicts the existence of many additional new fundamental particles that are likely to be accessible to the LHC.

It is mathematics that gives the various models of the universe their structure and that determines the specifications for what has come to be called the Higgs particle.  It is also mathematics that provides the way to decipher the data produced when theories are tested.  And the testing is a herculean effort.  Wilczek says it right: “…detection requires cunning.”  The search for the Higgs has relied on a history of inventive and probing analyses, clever programming, and endless calculations.  In his June post, Wilczek also takes note of something many Higgs stories ignore:

Finding the Higgs boson depends on assuming that the Standard Model is reliable, so we can work around the “background noise”. Here years of hard bread-and-butter work at earlier accelerators—especially the Large Electron-Positron Collider (LEP), which previously occupied the same CERN tunnel in which the LHC resides today, and the Tevatron at Fermilab, as well as at the LHC itself—pays off big. Over the years, many thousands of quantitative predictions of the Standard Model have been tested and verified. Its record is impeccable; it has earned our trust.

I care about the absence of mathematics in physics discussions only because it gives every non-scientist yet another opportunity to ignore its living presence and fail to see how it functions, not only to describe, but to perceive.

 

Spider webs and a random walk in software space

Yesterday I happened upon a Huffington Post blog from Mario Livio. For anyone who has been following my blog, it will come as no surprise that this piece, about the surprising similarity between spider webs and computer-generated cosmic webs, caught my attention.  After showing us a few, Livio says:

For an astrophysicist, perhaps the most amazing aspect of these webs is how much they resemble computer simulations of the cosmic web — the filamentary structure of the Dark Matter in the universe.

And he tells us what the cosmic web is:

Dark matter provides the scaffolding on which the large-scale structure of the universe is constructed. Ordinary matter is gravitationally attracted to the densest parts of the cosmic web, and there galaxies and clusters of galaxies are formed, leaving large, relatively empty voids. To examine the filamentary intergalactic gas, astronomers use the light from distant quasars. Observing with the Hubble Space Telescope, they utilize the quasar light just like shining a flashlight through fog. Hubble observations have also helped to map the 3D distribution of dark matter through the effect of gravitational lensing — the deflection of light of distant objects by the gravitational field of Dark Matter along the line of sight.

Apparently some spider webs even resemble the graphics that describe how black holes warp their surrounding space.  All of these images caught my attention because I am always looking for biology in mathematics or mathematics in biology.  Livio, however, didn’t say much about it other than that artist Tomás Saraceno created a work called “14 Billion,” in which he constructed a large spider-web-like sculpture composed of ropes and elastic cords.  But he referred to a lively conversation at the World Science Festival involving Saraceno, Livio, arachnologist P. Jäger, architect M. Wigley, and astrobiologist C. McKay.  When I clicked the link to check this event out I found another discussion in which Livio participated, from a year earlier, and I hit the jackpot.  It was called The Limits of Human Understanding, and included Rebecca Goldstein and Gregory Chaitin.  This one was organized around Gödel’s Incompleteness Theorem.  I always enjoy listening to Rebecca Goldstein talk about Gödel.  She does a nice job of both addressing the significance of his work and breathing life into his history.  Her voice can be heard in another recent blog of mine. But it was Gregory Chaitin who got my attention this time.  As a way of introducing his own work, he said that he was using ideas inspired by biology and, in doing that, he found the positive implications of Gödel’s Theorem rather than the negative.  Chaitin sees it this way: the world of pure mathematics has infinite complexity while any one mathematical theory has finite complexity.  This, he says, “makes incompleteness seem natural.”

Chaitin believes that Gödel and Turing (in his 1936 paper) opened the door to a provocative connection between mathematics and biology, between life and software. I’ve looked at how Turing was inspired by biology in two of my other posts.   They can be found here and here.

But Chaitin is working to understand it with what he hopes will be a new branch of mathematics called Metabiology.  I very much enjoyed hearing him describe the history of the ideas that inspired him in one of his talks, Life as Evolving Software, in which he says:

After we invented software we could see that we were surrounded by software.  DNA is a universal programming language and biology can be thought of as software archeology – looking at very old, very complicated software.

Chaitin is postulating that biological creativity = math creativity.  And, in this light, Gödel’s Theorem helps to show that evolution is never-ending.

To begin, Chaitin invents a mathematical life form, one that satisfies the definition of ‘life’ as a system that has heredity and history, that can maintain itself, and that can evolve.  He begins with the simplest case – a single software organism that has no body, no population, no environment and no competition – calling it a toy model of evolution.  The organism is a program, and it will mutate when it is given something challenging to do that requires creativity.  The goal of this organism will be to name a very big positive integer (called the busy beaver problem in computer science).  Chaitin insists this is not a trivial problem.  And I believe him.

The problem requires an unlimited amount of creativity and Gödel’s incompleteness theorem applies.  No closed system will give you the best possible answer.  There are always better and better ways.
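Just to fix ideas, here is a minimal sketch of a random walk in software space in that spirit.  Everything in it is a simplification of my own: the organisms are tiny programs over a made-up three-instruction language rather than programs for a universal machine, fitness is simply the integer a program names, and there is no halting oracle, which Chaitin’s actual model requires.  A mutation survives only if the mutant names a bigger number:

```python
import random

OPS = ["+1", "*2", "*3"]          # a toy instruction set, not Chaitin's

def run(program):
    """The integer the organism 'names' -- its fitness."""
    n = 1
    for op in program:
        if op == "+1":
            n += 1
        elif op == "*2":
            n *= 2
        else:                      # "*3"
            n *= 3
    return n

def mutate(program):
    """One random step in software space: insert, delete or replace an instruction."""
    p = list(program)
    move = random.choice(["insert", "delete", "replace"])
    if move == "insert" or not p:
        p.insert(random.randint(0, len(p)), random.choice(OPS))
    elif move == "delete":
        p.pop(random.randrange(len(p)))
    else:
        p[random.randrange(len(p))] = random.choice(OPS)
    return p

organism, best = [], run([])
for step in range(500):
    candidate = mutate(organism)
    value = run(candidate)
    if value > best:               # hill climbing: keep only improvements
        organism, best = candidate, value

print(len(organism), "instructions, naming an integer of", len(str(best)), "digits")
```

Even this toy walk never reaches a best possible organism – a longer program can always name a bigger number – which is a faint echo of the open-endedness Chaitin reads into incompleteness.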

There is no doubt that this will be a fascinating and productive effort.  I will try to find out more.  The title of this post (except for the spider webs) is his image.

The solstice, archaeoastronomy and mathematics

Given the arrival of the summer solstice and this post on the EarthSky website, I decided to write a little bit about what prehistoric monuments (like Stonehenge) suggest to me about some of the roots of mathematics.

With a photograph to support the claim, the EarthSky post tells us:

If you stood inside the Stonehenge monument at sunrise on the day of the summer solstice, you would see the sun rise above the famous Heel Stone.

This Stonehenge monument – built in 3,000 to 2,000 BC – shows how carefully our ancestors watched the sun.

In their photo, the slightly pointed top of the Heel Stone seems to direct your attention to the center of the rising sun.  There is disagreement, however, about the significance of this alignment.  Evidence that there was once a stone neighboring the Heel Stone has challenged the idea that the Heel Stone is a solstice sunrise marker.  Some scholars now even believe that prehistoric people visited the site only during the winter solstice.

Stonehenge’s construction (over the course of at least 1500 years) began as early as 3100 BC. Although many of the claims about its astronomical alignments have also been disputed, there does seem to be an astronomical aspect to the structure, and one that it shares with other prehistoric monuments.  At Stonehenge, on the day of the northern winter solstice, the setting sun lines up with the narrow space created by three stones known as the Trilithon – two large vertical stones that support a third horizontal stone across the top.

Newgrange, a monument located on the eastern side of Ireland, is primarily a large mound within which was built a chambered passage. Newgrange dates back to 3200 BC and has the striking feature that, on the day of the winter solstice, the rising sun floods the stone room within the mound with light.  Their website describes it this way:

At dawn, from December 19th to 23rd, a narrow beam of light penetrates the roof-box and reaches the floor of the chamber, gradually extending to the rear of the chamber. As the sun rises higher, the beam widens within the chamber so that the whole room becomes dramatically illuminated. This event lasts for 17 minutes, beginning around 9am.

The accuracy of Newgrange as a time-telling device is remarkable when one considers that it was built 500 years before the Great Pyramids and more than 1,000 years before Stonehenge. The intent of its builders was undoubtedly to mark the beginning of the new year. In addition, it may have served as a powerful symbol of the victory of life over death.

Newgrange also has some abstract rock art that includes circles, spirals, dot-in-circles, and parallel lines.

Exploring some of the prehistoric monuments that are known to have astronomical significance led me to read a bit about the relatively new area of study called archaeoastronomy.  Wikipedia describes it as the study of how people in the past have understood the sky, how they used what they understood, and what role the sky played in their culture.  Inevitably this kind of investigation crosses paths with archaeology, anthropology, the history of science, and cognitive science.  There’s a nice collection of relevant sites at a website called Ancient Wisdom that includes discoveries in the Americas – like the ‘sun-dagger’ in Chaco Canyon, New Mexico, the Bighorn Medicine Wheel in Wyoming, and Chichen Itza in Mexico.  The alignments of the Bighorn Medicine Wheel are particularly clear and described here.

On top of the Bighorn Range in Wyoming, a desolate 9,642 feet high and only reachable during the warm summer months, lies an ancient Native American construction — an 80′ diameter wheel-like pattern made of stones. At the center of the circle is a doughnut-shaped pile of stones, a cairn, connected to the rim by 28 spoke-like lines of stones. Six more stone cairns are arranged around the circle, most large enough to hold a sitting human. The central cairn is about 12 feet in diameter and 2′ high.

If you stand or sit at one cairn looking towards another, you will be pointed to certain places on the distant horizon. These points indicate where the Sun rises or sets on summer solstice and where certain important stars rise heliacally, that is, first rise at dawn after being behind the Sun. The dawn stars helped foretell when the Sun ceremonial days would be coming. The area is free of snow only for 2 months — around the summer solstice. The wheel has 28 spokes, the same number used in the roofs of ceremonial buildings such as the Lakota Sundance lodge. These always includes an entrance to the east, facing the rising Sun, and include 28 rafters for the 28 days in the lunar cycle. The number 28 is sacred to some of the Indian tribes because of its significance as the lunar month. In Bighorn’s case, could the special number 28 also refer to the helicial or dawn rising of Rigel 28 days past the Solstice, and Sirius another 28 past that?

For me, a glimpse of these ancient constructions is a chance to muffle the voice of scientific objectivity in favor of a more participatory cosmic experience.  The consistency of the human desire to mark the summer and winter solstices is impressive.  These ancient feats also tell me something about how our attention was directed, how determined we were to look out at the distant horizon.  Our ancestors began early to live within observed patterns and cycles. Their monuments give us the chance to see something about how they were inclined to mark what they saw, and re-present it to themselves, in patterns of their own making.  Building the alignments that are contained in these structures is an analysis of space as well as of pattern, and it often also involves a count of some kind, with ourselves at the very center of the action.  These are really the elements of measurement, that abstract arrangement of sensory facts that we so often make.  This most certainly leads to what we later call mathematical intuition.  But these elements emerge through the kinship of the earth, the sky and the organism, and use the strength of the whole organism, the whole body.  In this light, another look at these ancient accomplishments could contribute something new to what we think about the nature of science and mathematics.  These are things that the body is doing, and we may not fully understand why.

Computational Linguistics, Matter and Meaning

Not long ago I wrote about the work of Bob Coecke, an Oxford University physicist, who is pioneering an application of category theory to quantum mechanics.  In that post I referred to the work he is also doing with language, using the same kind of graphic structures. I drew attention to the fact that category theory’s ‘process’ rather than ‘object’ orientation may have helped open this door. But there are a number of things that distinguish Coecke’s work that are worth thinking about.  I spent some time today doing just that.

Coecke spoke briefly, in a Foundational Questions Institute podcast, and tried to clarify what makes his approach to quantum mechanics and linguistics unique and powerful.  He first developed his graphical language to simplify the mathematics of quantum theory.  One of the keys to the effectiveness of this tool is that, while it is a picture narrative, it is also computational.  The graphics contain quantitative information.  They are, Coecke explains, a calculus.  And apparently when the object of our attention changes from a quantitative description to a scheme of diagrammed interactions, we can actually see more.  It’s as if this calculus lets our visual senses do some of the thinking.  It is the state of a physical system that flows through such a diagram.  Physical laws are then expressed in the topology of the diagram, in how the flow is characterized or defined.  Coecke suggested a cooking analogy.  A recipe can be described as a flow of actions like chopping, mixing, boiling, frying, etc. For some aspects of the recipe the order of actions is not important, and for other aspects the order is crucial.  A computation in Coecke’s graphical language can be thought of as rearranging the diagram without changing its topology (or, in the case of the recipe, without changing the outcome of the dish).

That a picture can contain quantitative information is one of the keys to mathematics in general.  But computing with the pictures reaches another level of abstraction.   This possibility grows out of the branch of mathematics called category theory, about which the Stanford Encyclopedia of Philosophy says:

It could be argued that category theory represents the culmination of one of the deepest and most powerful tendencies in twentieth century mathematical thought: the search for the most general and abstract ingredients in a given situation.

Category theory can identify ‘universal properties,’ or the way that different actions are actually doing the same thing. For example, through the lens of category theory, a Cartesian product in set theory, a product of topological spaces, and the conjunction of propositions in a deductive system are the same kind of action. Identifying this fact makes it possible to express a problem in one area of mathematics as a problem in another area. What matters is the way the objects within a given structure are related, whether they are points on a line or the propositions in logic.  With Coecke’s work, this level of abstraction not only bridges different areas within mathematics, it also creates some cross-disciplinary bridges and opens the door to the application of Coecke’s graphics to linguistics.
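For readers who want to see what “the same kind of action” means formally, the shared pattern is the universal property of a product.  The notation below is just the standard textbook formulation, included here as a sketch:

```latex
% A product of A and B is an object A \times B together with arrows
% \pi_A : A \times B \to A and \pi_B : A \times B \to B such that for
% every object C and every pair f : C \to A, g : C \to B there is a
% unique arrow \langle f, g \rangle : C \to A \times B satisfying
\pi_A \circ \langle f, g \rangle = f
\qquad \text{and} \qquad
\pi_B \circ \langle f, g \rangle = g.
```

In the category of sets this characterizes the Cartesian product, in topology the product space, and in a deductive system (propositions as objects, entailments as arrows) it characterizes conjunction: C entails A ∧ B exactly when C entails A and C entails B.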

In the podcast, Coecke tells us that the way words interact in a sentence (to create meaning) is similar to the interactions in the subatomic world of quantum physics.  This thought alone could inspire a host of philosophical discussions, but I’ll stay with computational linguistics for the moment.

Most of the existing models of human language use the meaning of individual words (as in the algorithms of search engines) or the rules of grammar, but it has not been possible to formalize their interaction in an algorithmic way.  Coecke’s work has suggested a way to do this.  In existing models, words are defined as vectors in a multi-dimensional space where each dimension represents a key attribute of the word.  The vectors used to build the meaning of the word dog or cat, for example, would have no overlap with the ones used to build the meaning of a word like banker.  The dictionaries compiled from these vectors are understood in terms of distance: the distance between dog and cat would be much less than the distance between, say, dog and banker.  Coecke and his team have figured out a way to use his graphical links, which contain the grammatical rules, to connect individual words and create a vector for a sentence.  This implies distances between sentences – neighborhoods of related meaning – just as happens with the word vectors.
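Here is a toy illustration of the vector-space part of that picture.  The three “attribute” dimensions and the numbers in them are invented for the example, and the sketch stops well short of Coecke’s actual contribution – the grammar-guided composition of word vectors into sentence vectors.  It only shows how distance between vectors stands in for distance between meanings:

```python
import math

# Hand-made word vectors over three invented attribute dimensions
# (roughly: "animate", "domestic", "financial"); real models learn
# thousands of dimensions from text corpora.
words = {
    "dog":    [0.9, 0.8, 0.0],
    "cat":    [0.9, 0.7, 0.1],
    "banker": [0.5, 0.0, 0.9],
}

def cosine_distance(u, v):
    """1 - cos(angle between u and v): small for similar meanings."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1 - dot / norm

print("dog vs cat:   ", round(cosine_distance(words["dog"], words["cat"]), 3))
print("dog vs banker:", round(cosine_distance(words["dog"], words["banker"]), 3))
```

Running it puts dog and cat close together and dog and banker far apart, which is all the “dictionary as geometry” idea amounts to at the single-word level; the sentence-level distances come from composing these vectors along the grammatical wiring of Coecke’s diagrams.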

In the podcast interview, Coecke emphasized that while there is a logic to the graphical analysis of sentence construction, it is not propositional, not a yes/no logic, not about facts that beget other facts, or things that are true or false.   It is more an analysis of the information flowing through the wires of the diagram.

The work was outlined in a New Scientist article at the end of 2010.  It’s young but, as the article says, “aims to create a universal “theory of meaning” in which language and grammar are encoded in a set of mathematical rules.”

I’m particularly interested in the fact that the same kinds of orders seem to bring about both matter and meaning, and that mathematics has actually found some of them.

Kuhn, Gödel, on being wrong and being heroic

Three things I read today converged in a way I had not anticipated and they all had something to do with truth.  First, there was the announcement of the Foundational Questions Institute’s 4th essay contest.  Entrants are invited to address this topic: Which of Our Basic Physical Assumptions Are Wrong?  Scientific American is a cosponsor of the contest and George Musser introduced it in a blog.

The two theories that need to be unified, quantum field theory and Einstein’s general theory of relativity, are both highly successful… And yet, if the theories are incompatible, something has to give. That is what makes unification so hard. In conferences, I see physicists go down the list of assumptions that underpin their theories. Each, it seems, is rock solid. But they can’t all be right. Maybe one will, on closer inspection, prove to be not like the others. Or maybe physicists have left the culprit off their list because it is so deeply embedded in their way of thinking that they don’t even recognize it as an assumption. As economist John Maynard Keynes wrote, “The difficulty lies, not in the new ideas, but in escaping from the old ones, which ramify… into every corner of our minds.”

This effort might actually be too self-conscious to be successful, but it does remind me of Thomas Kuhn’s ideas about scientific revolutions (and their misinterpretations), which brings me to the second thing I read today, written to commemorate the 50th anniversary of Kuhn’s highly influential book.  John Horgan posted an edited version of his encounter with Kuhn that had appeared in Horgan’s own 1996 book, The End of Science.  According to Horgan, despite Kuhn’s reluctance to search out the roots of his own thoughts,

He nonetheless traced his view of science to an epiphany he experienced in 1947, when he was working toward a doctorate in physics at Harvard. While reading Aristotle’s Physics, Kuhn had become astonished at how “wrong” it was. How could someone who wrote so brilliantly on so many topics be so misguided when it came to physics?

Kuhn was pondering this mystery, staring out his dormitory window (“I can still see the vines and the shade two thirds of the way down”), when suddenly Aristotle “made sense.” Kuhn realized that Aristotle invested basic concepts with different meanings than modern physicists did. Aristotle used the term “motion,” for example, to refer not just to change in position but to change in general—the reddening of the sun as well as its descent toward the horizon. Aristotle’s physics, understood on its own terms, was simply different from rather than inferior to Newtonian physics.

This is a really nice insight, largely about how we shape our experience and how we build our realities.

Obviously all humans share some responses to experience, simply because of their shared biological heritage, Kuhn added. But whatever is universal in human experience, whatever transcends culture and history, is also “ineffable,” beyond the reach of language. Language, Kuhn said, “is not a universal tool.”

In his conversation with Horgan, Kuhn rejected mathematics as a candidate for a universal language (or a language at all) because it has no semantic content.  But the role that mathematics plays in shaping ideas is one of the reasons that science produces, as Kuhn says, “the greatest and most original bursts of creativity” of any human enterprise.

Mathematics does work to transcend the limitations of ordinary language, perhaps even of culture and history.  It is a very directed effort to sort out the patterns in ideas, relationships among concepts, to find sameness where there seems to be difference.  Mathematics is about ‘things’ but not things found directly with the senses.

Biologists Maturana and Varela pointed to something related to Kuhn’s insight in their book The Tree of Knowledge.

…our experience is moored to our structure in a binding way.  We do not see the “space” of the world; we live our field of vision.  We do not see the “colors” of the world; we live our chromatic space.

So what are we living in mathematics, in its consistently refined notions of space?  It is the way mathematics has been used to unravel sensations that makes me particularly interested in the aspect of our structure that it may be reflecting.

It’s clear in Horgan’s encounter with Kuhn that Kuhn was very weary of the almost viral proliferation of interpretations of his book.

Kuhn tried, throughout his career, to remain true to that original epiphany he experienced in his dormitory at Harvard. During that moment Kuhn saw—he knew!—that reality is ultimately unknowable; any attempt to describe it obscures as much as it illuminates. But Kuhn’s insight forced him to take the untenable position that because all scientific theories fall short of absolute, mystical truth, they are all equally untrue.

And this reminded me of the third thing I read today – Rebecca Goldstein’s description of what Gödel hoped to show about the nature of mathematical truth when he established his incompleteness theorem.  Goldstein explains:

Gödel wanted to prove a mathematical theorem that would have all the precision of mathematics—the only language with any claims to precision—but with the sweep of philosophy. He wanted a mathematical theorem that would speak to the issues of meta-mathematics. And two extraordinary things happened. One is that he actually did produce such a theorem. The other is that it was interpreted by the jazzier parts of the intellectual culture as saying, philosophically exactly the opposite of what he had been intending to say with it. Gödel had intended to show that our knowledge of mathematics exceeds our formal proofs. He hadn’t meant to subvert the notion that we have objective mathematical knowledge or claim that there is no mathematical proof—quite the contrary. He believed that we do have access to an independent mathematical reality. Our formal systems are incomplete because there’s more to mathematical reality than can be contained in any of our formal systems.

There is an interesting similarity in these accounts of how Kuhn’s and Gödel’s insights were received.  Kuhn’s encouraging awareness of the creativity of science had the unanticipated effect of highlighting its limitations, of discouraging some with what Kuhn saw as its lack of progress or, as he described it, its movement not toward something, but just away from something.  And Gödel’s conviction about the truth of mathematical reality had the unplanned effect of somehow diminishing that reality.  Rather than seeing that mathematics can never be fully captured in a formal system, mathematics, identified completely with these formal systems, was seen as weaker.

I think Goldstein makes an important point when she says:

There’s nothing less exhilarating than reducing everything to social constructs and to our piddly human points of view. The pleasure of thinking is in trying to get outside of ourselves—this is as true in the arts and the humanities as in math and the sciences. There’s something heroic in the idea of objective knowledge; the farther away knowledge takes you from your own individual point of view, the more heroic it is.

Whether objective or not, this journey, away from an individual point of view, is worth celebrating.