Infinities, Tolstoy, dreams and Nabokov

My interest in mathematics is more personal than it is academic. I learned what I know formally, in the usual sequence of undergraduate and graduate math courses. But it has penetrated my personal life, and I have come to see mathematics as deeply rooted in a fundamental human drive to live more, or to live more fully. It is this conviction that has motivated me to search for mathematics in sensory mechanisms, in the ways we learn, and in the ways we make art and music. Some of mathematics’ historical development has fascinated me because there one can see how the combined effect of intuition and rigor reveals the power of a precise examination of thought. Other math enthusiasts share my desire to search out its ubiquitous presence in nature, happy to find some of our imagined possibilities in, for example, the navigational processes of insects. But it is only in Daniel Tammet’s latest book, Thinking in Numbers, that I find the unexpected glimpses of mathematics that can tell us more about how it lives in us.

I wrote about Tammet’s book last week. It is interesting to me that, while Tammet’s experience bears little resemblance to mine, his view of mathematics has a good deal in common with mine. Mathematics is woven into Tammet’s experience in a very personal way – in his sensations – making it all much more immediate for him. For me, it was by taking classes that I learned the power of its precision, got a look at its highly imaginative structure, and was able to enjoy its often surprising results. These had an emotional impact on my own search for meaning. For Tammet and for me, mathematics is an aspect of life itself. So today I decided to highlight two of his observations that I found provocative (and perhaps unexpected). They each focus on infinities, the source of mathematics’ great richness as well as its many paradoxes. The first of these is Tammet’s discussion of how the calculus informed Tolstoy. The other is his view of the infinite stories contained in dreams. (Tammet also tells his story about Tolstoy in this short piece published by The Guardian this past August).

Tammet quotes from War and Peace in both The Guardian piece and the book:

“The movement of humanity, arising as it does from innumerable arbitrary human wills, is continuous,” he writes. “To understand the laws of this continuous movement is the aim of history … only by taking infinitesimal units for observation … and attaining to the art of integrating them (that is, finding the sum of these infinitesimals) can we hope to arrive at the laws of history.”

And he adds:

Mathematics, Tolstoy understood, is like literature: a way in which the world expresses itself. Words and numbers: both allow us to entertain pure possibilities, immune from prior experience or expectation. Perhaps that is why some of Count Leo’s closest friends were mathematicians.

I particularly like the phrase,  “a way in which the world expresses itself.”  Tolstoy used the language of calculus to criticize historians for what he considered erroneous simplifications and for their failure to grasp the significance of change brought about by a multitude of infinitely small actions.  Change itself, Tammet points out, is generally misperceived.

Change appears to us mysterious because it is invisible.  It is impossible to see a tree grow tall or a man grow old, except with the precarious imagination of hindsight.  A tree is small and later it is tall.  A man is young, and later he is old.  A people are at peace, and later they are at war.  In each case, the intermediate states are at once infinitely many and infinitely complex, which is why they exceed our finite perceptions.

Mathematics often reveals the way our finite perceptions are misleading and it corrects them. In The Guardian piece, Tammet provides a link to a paper written by Stephen Ahearn for the Mathematical Association of America on the same topic in 2005.

In another essay entitled Book of Books, Tammet reflects on sleep.

Our dreams contain the infinite.  Uninhibited by wakefulness, words and pictures and emotions circulate and combine freely inside our head….Dreams defy our finite scrutiny; too often they evaporate in the narrow light of day…Like a book, like a life, where does the explanation start?  A dream has no beginning, and therefore no middle and no end.

This reflection is connected to the often-made but little-understood observation that the unconscious mind can solve problems and write stories. According to Tammet, “the Unconscious mind has authored some of the greatest works in literature: Goethe and Coleridge are only two of its pseudonyms.” And in this context Tammet talks about how Vladimir Nabokov wrote his famous Lolita. He explains that the novel

began life on a long series of three-by-five index cards. He sketched out the story’s closing scenes first. On subsequent cards Nabokov jotted down not only paragraphs of text but also plot ideas and other bits of information…Every so often Nabokov would rearrange his index cards, searching for the most promising combination of scenes.

Tammet imagines some of the potential versions of Lolita, ones that would be viable alternatives, and suggests that there are more than a million of them. This imagined array of possibilities, based essentially on the possible permutations of the index cards that make up its three hundred fifty plus pages of text, is particularly interesting to me. It is about meaning-making as well as permutations, which is what the mind does all of the time, including what happens in our dreams. But Tammet is placing his attention on the meaning-making itself, the possibilities for other stories, other ways, other lives. And this reminder of the presence of these alternatives is one that mathematics has always provided me.
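Tammet’s million-plus estimate is, if anything, conservative: the number of distinct orderings of n cards is n factorial, which passes a million well before n reaches ten. A quick check in Python (the card counts here are mine, for illustration, not Nabokov’s actual tally):

```python
import math

# The number of distinct orderings of n index cards is n! (n factorial).
for n in (5, 10, 15):
    print(n, math.factorial(n))
# 5 120
# 10 3628800        -- ten cards already exceed a million orderings
# 15 1307674368000
```

Of course most of those orderings are meaningless; the interesting part, as Tammet suggests, is the meaning-making that selects among them.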

Nabokov himself makes a math analogy:

Reality is a very subjective affair…You can get nearer and nearer, so to speak, to reality; but you never get near enough because reality is an infinite succession of steps, levels of perception, false bottoms, and hence unquenchable, unattainable.

The question that comes up in my mind is how it is that we come to explore these intuitions precisely. It is the precision of mathematical ideas that makes mathematics uninteresting or irritating to the student who happens to be required to learn them. But it is precisely this precision that permits us access to what we cannot easily see about ourselves. So how did we come to intuit the enormous value in formalizing what seem to be living processes? This is a big question, and I won’t attempt to answer it, but feel free to let me know what you think.

Daniel Tammet and imagination

I recently got a copy of Daniel Tammet’s latest book, Thinking in Numbers. As you may know, Daniel Tammet has been described as a high functioning autistic savant.   He gained some notoriety in 2004, when he recited the decimal expansion of pi to 22,514 places in just over 5 hours.  You can see him do some of his remarkable calculations on this video. He is now the author of three books –  Born on a Blue Day, Embracing the Wide Sky, and this one.  He also has a blog which can be found here.

One of the more important things about Tammet’s story is his own view of his extraordinary abilities. He is convinced that what he can do is an extreme variation of something we all do. He can do what he does because of the rich and complex associativity that is thinking and imagination. Since Tammet can accomplish startling calculations, making use of the color and shape of the numbers he sees, his experience addresses both what mathematics is and what the imagination can do.

Here is a nice excerpt from the preface:

Imagine.

Close your eyes and imagine a space without limits, or the infinitesimal events that can stir up a country’s revolution.  Imagine how the perfect game of chess might start and end:  a win for white, or black or a draw?  Imagine numbers so vast that they exceed every atom in the universe, counting with eleven or twelve fingers instead of ten, reading a single book an infinite number of ways.

Such imagination belongs to everyone.  It even possesses its own science: mathematics.  Ricardo Nemirovsky and Francesca Ferrara, who specialize in the study of mathematical cognition, write that, ‘Like literary fiction, mathematical imagination entertains pure possibilities.’  This is the distillation of what I take to be interesting and important about the way in which mathematics informs imaginative life.  Often we are barely aware of it, but the play between numerical concepts saturates the way we experience the world.

Tammet’s book is a collection of 25 eclectic essays, many of which trigger thoughts that I have been exploring in these blogs.  In one of the essays, called Einstein’s Equations, he writes the following:

Human beings’ quest for meaning is perpetual; lack of meaning is offensive to the mind, and whatever the scale of the problem, a solution is a thing of beauty. Einstein’s equations solved problems such as ‘What do we mean by the words “time” and “mass”?’

Answering this question ‘what do we mean by…?’ doesn’t just reveal something about time and mass, it also reveals something about us. I believe that this ‘perpetual quest for meaning’ is not merely an intellectual event. It begins deep within some of the most fundamental attributes of the human organism, like vision itself. Vision is a quest for meaning, for order and recognition. Sensory processes bring structure to sensory data, and within the complex associative actions of the nervous system we finally perceive the world as well as our thoughts. In the light of embodied cognition, the roots of something as purely symbolic as mathematics are found in the body’s action. I looked up a paper written by the researchers Tammet references in his preface – Ricardo Nemirovsky and Francesca Ferrara. The paper is called Mathematical imagination and embodied cognition.

They begin with the statement Tammet makes in his preface and follow up with interesting observations.

Like literary fiction, mathematical imagination entertains pure possibilities.  However numerous traits distinguish them.  The one that we want to emphasize here is that the possibilities held by mathematical imagination are regulated by or embedded in, a sense of logical necessity in all its deductive and inductive modalities.

Then, a little later:

Whatever we recognize as rational, rule-based, or inferential, is fully embedded in our bodily actions; perception and motor activity do not function as input and output for the “mental” realm; what we usually recognize as mental are inhibited and condensed perceptuo-motor activities that do not reach the periphery of our nervous system…Any perceptuo-motor activity is inscribed in a realm of possibilities encompassing all those for which the subject achieves a certain state of readiness. In this sense all animals imagine, not just humans, although it is not clear to what extent non-human animals entertain pure possibilities.

Tammet’s book covers so much ground, and his experience is so unusual, I’m sure I will write again about some of his thinking in numbers.  In Tammet’s stories, mathematics pervades human lives and one of the moments I particularly enjoyed was this quote from Vladimir Nabokov:

Reality is a very subjective affair….You can get nearer and nearer, so to speak, to reality; but you never get near enough because reality is an infinite succession of steps, levels of perception, false bottoms, and hence unquenchable, unattainable.  You can know more and more about one thing but you can never know everything about one thing:  it’s hopeless.

I would end this thought differently. This asymptotic approach to reality is, no doubt, hopeful. Because there is always more.

Mathematical life forms and really big numbers

I finally got hold of a copy of Gregory Chaitin’s latest book, Proving Darwin: Making Biology Mathematical. The thesis of the book is very appealing to me, since it equates mathematical creativity with biological creativity. And, I would say that Chaitin’s work is a captivating experiment. He is, as he says, “attempting to find the simplest possible mathematical life-form.” At the beginning of a tour of this work, he makes these important remarks:

Mathematics isn’t the art of answering mathematical questions – most questions can’t be answered or have ugly, messy, uninteresting answers.  Rather math is the art of asking the right questions, the questions that have beautiful, fertile, suggestive answers.

And mathematics isn’t a practical tool, a way of getting answers.  For that, use a machine, use a computer!  Math is an art form, a way to achieve understanding! The purpose of a proof is not to establish that something is true, but to tell us why it is true, to enable us to understand what is happening, what is going on!

I really enjoyed the point being made here about ‘proof.’  I imagined a proof as a picture of what was going on inside something – like an x-ray or a sonogram – something we can see.

I described some of the points of Chaitin’s work in an earlier post after having listened to him discuss his work online. The discussion was recorded at the World Science Festival last year. But, to recap, Chaitin begins his thoughts with two ideas:

1. that DNA is what computer scientists call a “universal programming language,” in other words, a programming language sufficiently powerful to express any algorithm.

2. at the level of abstraction that Chaitin is working “there is no essential difference between mathematical creativity and biological creativity…”(quoting from his talk at the Santa Fe Institute)

Chaitin challenges his mathematical life forms with mathematical problems to force them to keep evolving. He is studying toy models of evolution, using the simplest life forms he can produce. They are based on the understanding that life exists when heredity operates, with mutations, and evolution by natural selection takes place. Chaitin explains:

So to make things as simple as possible, no metabolism, no bodies, only DNA.  My organisms will be computer programs.

A mutation in this model is an algorithmic mutation, a computer program. The original organism produces the mutated organism as its output.  And in order to ensure that these organisms evolve forever, they are challenged with a mathematical problem that can never be solved perfectly.  According to Chaitin the organisms

are mathematicians that are trying to become better and better, to know more and more mathematics.

I love that.

The problem chosen for these mathematicians is what computer scientists call the Busy Beaver problem – “concisely naming an extremely large positive number, an extremely large unsigned whole number.” Chaitin points out that to do this effectively one has to be creative, because one needs to invent addition, multiplication, exponentiation, hyper-exponentiation. If you had a large number N and wanted to name a larger one, it becomes necessary to consider things like N+N, N×N, N to the Nth power, or N to the N to the N…. Successfully naming a larger number than the last one found increases the fitness of the organism/program. Each of the software organisms calculates a single positive integer, and the bigger the number, the fitter the organism.
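The creative leap Chaitin describes – inventing ever stronger naming operations – can be made concrete with a small sketch (my own illustration, not Chaitin’s code):

```python
# Each newly invented operation names vastly larger numbers
# from the same starting ingredient.
N = 10
print(N + N)   # addition:        20
print(N * N)   # multiplication:  100
print(N ** N)  # exponentiation:  10000000000

# Hyper-exponentiation: a tower n^n^...^n of the given height.
def tower(n, height):
    result = n
    for _ in range(height - 1):
        result = n ** result
    return result

print(tower(2, 4))  # 2^(2^(2^2)) = 65536; tower(10, 3) already has
                    # about ten billion digits -- don't try to print it
```

Busy Beaver numbers grow faster still: no single computable scheme, however clever, can keep up with them.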

Chaitin has a way to measure “evolutionary progress,” and “biological creativity,” and he uses these computer science ideas to outline a proof that Darwinian evolution works in his model.   The detail (not obvious here) is provided in the book and it is important if the field of metabiology is going to progress.  I expect that it will.  My confidence, in no small way, lies in the prospect that it isn’t just proving that evolution works, but that it is also revealing some of the aspects of evolution that have been neglected or misunderstood.  In particular, these models highlight what Chaitin sees –

Biology is ceaseless creativity, not stability, not at all.

One last note:

When talking about the Busy Beaver problem, Chaitin referenced an essay by quantum computing complexity theorist Scott Aaronson entitled “Who Can Name the Biggest Number?” It’s a great piece, and Aaronson describes the extraordinarily fast growth of a sequence of Busy Beaver numbers, making clear what computer scientists mean when they talk about really, really big numbers. But Aaronson also raises an interesting question about whether people are afraid of really big numbers. To address the question, he refers to studies led by neuroscientist Stanislas Dehaene. These studies suggest that two separate brain systems contribute to mathematical thinking – exact calculations were seen to line up with verbal reasoning, while approximations were seen to line up with spatial reasoning.

For approximate reckoning we use a ‘mental number line,’ which evolved long ago and which we likely share with other animals. But for exact computation we use numerical symbols, which evolved recently and which, being language-dependent, are unique to humans…

If Dehaene et al.’s hypothesis is correct, then which representation do we use for big numbers? Surely the symbolic one—for nobody’s mental number line could be long enough to contain 5 pentated to the 5, or BB(1000). And here, I suspect, is the problem. When thinking about 3, 4, or 7, we’re guided by our spatial intuition, honed over millions of years of perceiving 3 gazelles, 4 mates, 7 members of a hostile clan. But when thinking about BB(1000), we have only language, that evolutionary neophyte, to rely upon. The usual neural pathways for representing numbers lead to dead ends. And this, perhaps, is why people are afraid of big numbers. (emphasis my own)

This last statement is pretty interesting.

Order, computation and creativity in biology

Current research into the neuroscience of our visual system tells us that what we see is constructed through the coordinated effect of cells sensitive to particular aspects of a visual scene. Attributes such as motion, form and color are processed in individually specialized areas, along paths that lead to the primary visual cortex, creating what we see. The latest issue of Scientific American reports on research showing that one of the ways the brain communicates with itself about sensory data is based on the timing of neuron firings. Terry Sejnowski of the Howard Hughes Medical Institute and Tobi Delbruck at the University of Zurich describe how the synchronization of spikes in the firing of neurons is a very useful conduit of information.

This is because a group of spikes that fire almost at the same moment can carry much more information than can a comparably sized group that activates in an unsynchronized fashion.

The development of computer models of the nervous system, and new results from experimental data have encouraged new efforts to explore how timing permits communication among neurons.  Each cell in the visual cortex may be activated by a specific physical feature of an object (like color or orientation) but, the authors explain,

When several of these cells switch on at the same time, their combined activation constitutes a suspicious coincidence because it may only occur at a specific time for a unique object.  Apparently the brain takes such synchrony to mean that the signals are worth noting because the odds of such coordination occurring by chance are slim.  (emphasis my own)

The suspicious coincidence idea rests on a judgment of likelihood which I always find to be an intriguing biological process.

This research has helped inspire the development of a new kind of camera that mimics the way parts of the retina encode images for our brain. Instead of recording the average light intensity of every pixel in the visual scene every 40 milliseconds, the new camera senses only the parts of a scene that change: a pixel reports only when it detects a change in brightness from the last recorded value. This encoding of change gives the camera what the authors call “sparse yet information-packed output.”
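The change-encoding idea is simple to sketch. The following is a minimal illustration of the principle, not the camera’s actual hardware logic; the pixel values and threshold are invented:

```python
# Emit an "event" only where brightness changes beyond a threshold,
# instead of re-transmitting every pixel of every frame.
def encode_changes(prev_frame, new_frame, threshold=10):
    events = []
    for i, (old, new) in enumerate(zip(prev_frame, new_frame)):
        if abs(new - old) >= threshold:
            events.append((i, new))  # pixel index and its new brightness
    return events

prev = [100, 100, 100, 100]
curr = [100, 100, 160, 100]  # only one pixel brightened
print(encode_changes(prev, curr))  # [(2, 160)] -- sparse but informative
```

A static scene produces no output at all; only change is worth transmitting.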

The key to this idea does seem to be about packing in information and encoding change. But the complexity of the brain’s computation of images seems enormous.  It also happens, for example, that stimulating the region surrounding a particular neuron’s receptive field will affect the precision of the spike timing.  When input from surrounding areas is removed, in order to target the neurons triggered by inputs from a receptive field, it has been observed that precision in the timing of spikes is lost.

Attention itself may be rooted in sequences of synchronized spikes, as these may serve to mark the importance of a particular perception or memory passing through our awareness.

The title of the article is The Language of the Brain, since the subject of the authors’ research is how information gets passed around in the brain, how cells in the brain are communicating. The transfer of information is becoming a more and more interesting player in modern science. Some consider it the fundamental stuff of the universe. And in biology, it participates in building life. In the context of this article, information, ordered and transmitted, produces our visual experience.

What I find interesting about these more recent findings is that we are beginning to see that the kind of ordering that will create meaning in our experience is as abstract as any mathematical map.  The actions that describe living processes seem to be very complex sets of ordering principles.

In a talk given in 2005, Gregory Chaitin said some interesting things about mathematics, information, complexity and biology.  For example, he said:

….when dealing with complex systems such as those that occur in biology, thinking about information processing is also crucial. As I believe Seth Lloyd said, the most important thing in understanding a complex system is to determine how it represents information and how it processes that information, i.e., what kinds of computations are performed.

The inexhaustible creativity of biological systems does seem to resemble the inexhaustible creativity of mathematics.  And Chaitin finds them linked.  In fact in his concluding statements he offers this:

I believe that this is actually the central question in biology as well as in mathematics, it’s the mystery of creation, of creativity:

“Where do new mathematical and biological ideas come from?”
“How do they emerge?”

Normally one equates a new biological idea with a new species, but in fact every time a child is born, that’s actually a new idea incarnating; it’s reinventing the notion of “human being,” which changes constantly.

He doesn’t try to answer the question, but encourages its consideration.  I hope to write soon about his most recent publication, Proving Darwin: Making Biology Mathematical, where he explores a new way to think about biology and mathematics that highlights the mathematical structures on which the biological world rests.


Bees, ants, space and algorithm

In 2011, Science Daily reported on a study done at Queen Mary University of London and published in Biology Letters.  The study examined the foraging strategies of bumblebees and found that “after extensive training (80 foraging bouts and at least 640 flower visits), bees reduced their flight distances and prioritized shortest possible routes.” The bees were able to solve complex routing problems through experience and, it seems, without requiring a sophisticated cognitive representation of space. Science Daily made these observations:

Surprisingly, the bees almost never followed a nearest-neighbor strategy (in which the bee would fly to the nearest unvisited flower until all flowers are visited).  Instead they prioritized following the shortest possible route by learning and memorizing individual flower locations.

Dr. Lihoreau, who headed the team, was quoted as saying: “Despite having tiny brains, bees effectively used gradual optimization (comparing several different routes), to solve this famously complex routing problem which still baffles mathematicians 80 years after it was first posed.”

What strikes me first about these studies is the absence of the need for a “cognitive representation of space.”  It happens in physics and mathematics that a set of parameters, which don’t define a physical space, may be put in a kind of spatial relationship in order to be better understood.  But here, it is a physical space that comes to be analyzed algorithmically, with the iteration of a series of judgments.  The ‘space,’ then, while not actually constructed, is understood.

A more recent study, led again by Mathieu Lihoreau, examined the bees’ foraging with the aid of radar tracking and motion-sensitive cameras.

How, then, did the bees optimise their routes? Based on our detailed analysis of bee movement patterns, we implemented a simple iterative improvement heuristic, which, when applied to our experimental situation, matched the behaviour of real bees exceptionally well. The proposed heuristic demonstrates that stable efficient routing solutions can emerge relatively rapidly (in fewer than 20 bouts in our study) with only little computational demand. Our hypothetical model implies that a bee keeps in memory the net length of the shortest route experienced so far and compares it to that of the current route traveled. If the novel route is found to be shorter, the bee is more likely to repeat the flight vectors comprising this route. Hence, through a positive feedback loop certain flight vectors are reinforced in memory, while others are “forgotten”, allowing the bee to select and stabilize a short (if not optimal) route into a trapline. These assumptions are compatible with well-established observations that bees compute and memorise vector distances between locations using path integration. For instance, bees visiting the same feeders over several bouts learn flight vectors encoding both direction and travel distance to each site, by associating specific visual scenes (such as salient landmarks or panoramas) with a motor command.

The optimisation process we describe is analogous to the iterative improvement approach developed in “ant colony optimisation” heuristics, which has been increasingly used to explore solutions to combinatorial problems in computer sciences. The rationale of these swarm intelligence heuristics is based on a model describing how ants collectively find short paths between a feeding location and their nest using chemical signals. “Memory” in ant colony optimisation algorithms has no neurobiological basis but instead takes the form of pheromone trails marking established routes. The shortest route becomes more attractive due to increases in pheromone concentration as multiple ants forage simultaneously along it and continue to lay pheromone, while longer routes are abandoned because of pheromone evaporation. (emphasis my own)
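The remember-the-best-and-reinforce loop described in this passage can be sketched in a few lines. This is my own toy rendering of the idea, not the authors’ model; the flower positions and bout count are invented:

```python
import math
import random

# Toy version of the reinforcement heuristic: remember the shortest route
# experienced so far and keep varying it, adopting any variant that turns
# out shorter.
def route_length(route, coords):
    return sum(math.dist(coords[a], coords[b])
               for a, b in zip(route, route[1:] + route[:1]))

def forage(coords, bouts=200, seed=1):
    rng = random.Random(seed)
    best = list(range(len(coords)))              # first bout: arbitrary order
    best_len = route_length(best, coords)
    for _ in range(bouts):
        trial = best[:]                          # repeat the remembered route...
        i, j = rng.sample(range(len(trial)), 2)
        trial[i], trial[j] = trial[j], trial[i]  # ...with one variation
        trial_len = route_length(trial, coords)
        if trial_len < best_len:                 # reinforce only shorter routes
            best, best_len = trial, trial_len
    return best, best_len

flowers = [(0, 0), (3, 1), (1, 4), (5, 5), (2, 2)]
route, length = forage(flowers)
print(route, round(length, 2))
```

Nothing here requires a map of the flowers’ layout – only a memory of route lengths and a bias toward repeating what worked, which is the point of the quoted passage.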

A bit more than a year ago, I blogged about this vector approach to navigation as it was seen in ants.

When Science Magazine reported on the bee study, they also took note of the fact that the bees were solving what mathematicians call the traveling salesman problem – calculating the shortest possible route given a theoretical arrangement of cities.  The problem was given mathematical definition in the 1800’s by W. R. Hamilton and Thomas Kirkman.  The general form of the problem was studied in the 1930’s by Karl Menger, who made the observation that “The rule that one first should go from the starting point to the closest point, then to the point closest to this, etc., in general does not yield the shortest route.”  The bees, it seems, know this instinctively.
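Menger’s observation is easy to demonstrate. Here is a small, made-up instance where the nearest-neighbor rule produces a measurably longer tour than the optimum found by brute force:

```python
import math
from itertools import permutations

# A five-point instance where greedy nearest-neighbor loses to the
# optimal tour. The coordinates are invented for the demonstration.
def tour_length(tour, pts):
    return sum(math.dist(pts[a], pts[b])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def nearest_neighbor(pts, start=0):
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

pts = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]   # a zigzag of five stops
nn_len = tour_length(nearest_neighbor(pts), pts)
opt_len = min(tour_length([0, *p], pts)
              for p in permutations(range(1, len(pts))))
print(round(nn_len, 3), round(opt_len, 3))  # 9.657 8.828 -- greedy loses
```

The greedy rule hops along the zigzag and then pays a long trip home; the optimal tour sweeps the bottom first. The bees’ gradual route comparison avoids exactly this trap.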

In the 1950’s and 1960’s the problem became more popular, new methods for solving it were developed, and shortest routes involving 49 cities were found.  By the 1980’s, instances with up to 2392 cities were solved.  To look directly at the possibilities for a solution would mean to try all permutations of possible routes and select the shortest.  But the running time for this approach makes it impractical for even 20 cities.  It was in 1997 that Marco Dorigo described a method for generating ‘good solutions’ using a simulation of ant colony navigation.  It happens that the traveling salesman problem has applications not only in logistics but also in microchip design and even DNA sequencing, where the concept ‘city’ represents DNA fragments and the concept ‘distance’ a similarity measure between fragments.

It’s probably important that in computer science the ant colony optimization algorithm (ACO) is a probabilistic technique for solving certain kinds of computational problems.  And I find that there is, in more than one sense, an interesting coincidence of things here.  There is the encoding of distance and direction in creatures who don’t have what we would call spatial cognition, the very effective understanding of spatial relationships in an iterative, trial-and-error way, and the application of an insect’s foraging strategy to modern problems in mathematics and computer science.  We might want to say that we can now describe bee and ant behavior algorithmically, but I would say they are describing algorithms with their behavior.

Pollock, fractal expressionism and a mathematical thought

In a blog back in January, I referenced a talk given by David Deutsch in which he made the argument that, while empiricism has been the basis of science, empiricism alone is inadequate because scientific theories explain the seen in terms of the unseen.

What we see, in all these cases, bears no resemblance to the reality that we conclude is responsible – only a long chain of theoretical reasoning and interpretation connects them.

This ‘theoretical reasoning and interpretation’ has its own structure, a mostly mathematical one.  I am becoming more and more intrigued by the analogs that can be found among the structures in nature, the structures we see in the underlying action of sensory mechanisms and even in the structure or shape taken by the observable behavior of an organism because, not surprisingly, mathematics touches all of them.  These analogous structures come to mind today because my attention was brought to what has been discussed, for a number of years now, about the fractal patterns in paintings by Jackson Pollock, one of the pioneers of abstract expressionism.

In an article that appeared in Physics World magazine in October, 1999, physicist Richard P. Taylor argued that the patterns in paintings produced by Jackson Pollock in the late 1940’s and early 1950’s are fractal.  Then, in 2011, Taylor co-authored a paper that appeared in the journal Frontiers in Human Neuroscience entitled Perceptual and Physiological Responses to Jackson Pollock’s Fractals.

This paper is a more thorough analysis of Taylor’s idea with a much broader narrative. Alongside an image of the Long Island house where Pollock lived when he began his ‘pouring’ technique, the authors note:

In contrast to his previous urban life in Manhattan, Pollock perfected his pouring technique surrounded by the complex patterns of nature. Right: Trees are an example of a natural fractal object. Although the patterns observed at different magnifications don’t repeat exactly, analysis shows them to have the same statistical qualities.

The suggestion is that Pollock, inspired by what he saw around him, had an insight about what was there, which he then reproduced for us.  And this is what the artist does.  But if Pollock was trying to reproduce the fractal nature of what he perceived, we have here another instance of an individual ‘seeing’ a deep and complex pattern without the use of any analytic tools (like mathematics).   And this inevitably tells me something about mathematics.

The paper also makes some observations of Pollock’s physical action when painting, and of the evolution of his paintings (which suggests a clear directedness in his efforts).

The question of how Pollock combined the blobs into an integrated, multi-colored visual fractal led us to investigate his painting technique in detail. We described Pollock’s style as “Fractal Expressionism” (Taylor et al., 1999b; Taylor, 2011) to distinguish it from computer-generated fractal art. Fractal Expressionism indicates an ability to generate and manipulate fractal patterns directly. In many ways, this ability to paint such complex patterns represents the limits of human capabilities. Our analysis of film footage taken at his peak in 1950 reveals a remarkably systematic process (Taylor et al., 2002). He started by painting localized islands of trajectories distributed across the canvas, followed by longer extended trajectories that joined the islands, gradually submerging them in a dense fractal web of paint. This process was very swift with the fractal dimension rising sharply…

…he perfected this technique over 10 years. Art theorists categorize the evolution of Pollock’s pouring technique into three phases (Varnedoe and Karmel, 1998). In the “preliminary” phase of 1943–1945, his initial efforts were characterized by low D values. An example is the fractal pattern of the painting Untitled from 1945, which has a D value of 1.10. During his “transitional phase” from 1945 to 1947, he started to experiment with the pouring technique and his D values rose sharply. In his “classic” period of 1948–1952, he perfected his technique and D values rose more gradually to the value of D=1.7. During his classic period he also painted Untitled which has an even higher D value of 1.89. However, he immediately erased this pattern (it was painted on glass), prompting the speculation that he regarded this painting as too complex and immediately scaled back to paintings with D=1.7. This suggests that his 10 years of refining the pouring technique were motivated by a desire to generate fractal patterns with D~1.7.

The D-value or dimension of a fractal is a measure of the amount of fine structure in the fractal pattern.  For a pattern drawn on a canvas, D lies between 1 (a smooth line) and 2 (a pattern so dense it fills the plane).
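A D-value of this kind is typically estimated by box counting: cover the pattern with boxes of side s, count how many boxes N(s) contain part of the pattern, and watch how N grows as s shrinks; the slope of log N(s) against log(1/s) is the dimension. The sketch below is my own minimal illustration of that procedure, not the authors’ analysis code; it checks itself on a filled square, whose dimension should come out close to 2.

```python
import math

def box_count(points, box_size):
    """Count the distinct boxes of side box_size that contain a point."""
    return len({(int(x // box_size), int(y // box_size)) for x, y in points})

def estimate_dimension(points, box_sizes):
    """Least-squares slope of log N(s) versus log(1/s)."""
    xs = [math.log(1.0 / s) for s in box_sizes]
    ys = [math.log(box_count(points, s)) for s in box_sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A dense grid filling the unit square: its box-counting dimension is 2.
pts = [(i / 100.0, j / 100.0) for i in range(100) for j in range(100)]
D = estimate_dimension(pts, [0.5, 0.25, 0.125, 0.0625])
```

Run on a digitized image of a painting instead of the toy grid, the same slope calculation gives the kind of D-value the paper reports.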

The paper also addresses the question of why, or whether, the D-value that Pollock seems to move toward in his paintings has particular aesthetic appeal.  Our eyes move in fractal patterns, and I have blogged about this before.  Experimental collaborations between psychologists and neuroscientists have found that images matching the fractal dimension of the eye’s searching movement are the ones judged most aesthetically pleasing.

But the paths created by the motion of our eyes look very much like the path of a flying insect searching for food.   And studies seem to indicate that the brain forages through memory in much the same way.
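Search paths of this kind are often modeled as Lévy flights: random walks whose step lengths follow a heavy-tailed power law, so that clusters of short local movements are punctuated by occasional long jumps. Here is a minimal sketch of such a walk (my own illustration, using inverse-transform sampling for the step lengths; it is not code from any of the studies mentioned):

```python
import math
import random

def levy_flight(n_steps, mu=2.0, seed=0):
    """Generate a 2-D Levy-like walk: step lengths drawn from a
    power law P(l) ~ l**(-mu) with l >= 1, directions uniform."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        # Inverse-transform sampling: u in (0,1) mapped to a
        # power-law-distributed step length of at least 1.
        step = (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))
        angle = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        path.append((x, y))
    return path

path = levy_flight(1000)
```

Plotting the points in `path` shows the characteristic clustered, self-similar look that these foraging and eye-movement trajectories share.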

This one mathematical thought seems to run through our existence on multiple levels. The ‘monstrous functions,’ used in the 19th century to demonstrate the break between mathematics and visible reality, are now the fractals we use to describe physical and biological complexity.  But it is mathematics that has characterized the pattern in such a way as to be able to see it.  And we see it outside of us, inside of us – in the trees and the way we remember, in the structure of our lungs and the movement of our eyes (which tells us something about mathematics).


Finger counting, finger gnosia and cerebral structures

In June The Guardian posted an interesting piece on finger counting and numbers.  The main content of the article concerns the work of cognitive scientists Andrea Bender and Sieghard Beller, which explores the cultural diversity in finger counting.  It tells us that if you are asked to use your hands to count to 10, these variations are likely:

If you’re European, there’s a good chance you started with closed fists, and began counting on the thumb of the left hand. If you’re from the Middle East, you probably also started with a closed fist, but began counting with the little finger of the right hand.

Most Chinese people, and many North Americans, also use the closed-fist system, but begin counting on an index finger, rather than the thumb. The Japanese typically start from an open-hand position, counting by closing first the little finger, and then the remaining digits.

But the piece takes note of other things I found even more interesting.  For example:

There is a mental link between hands and numbers, but that link doesn’t come from humans learning to use their hands as a counting aid. It goes back much further in our evolution. Marcie Penner-Wilger and Michael L. Anderson propose that the part of our brain that originally evolved to represent our fingers has been recruited to represent our concept of number, and that these days it performs both functions.

fMRI scans show that brain regions associated with finger sense are activated when we perform numerical tasks, even if we don’t use our fingers to help us complete those tasks. And studies show that young children with good finger awareness are better at performing quantitative tasks than those with less finger sense.

Even as adults, the way we mentally picture numbers in space – the SNARC effect – is related to the hand on which we begin finger counting.

Michael Anderson’s idea is briefly outlined here.  He distinguishes it from two competing theories about why mathematical ability and finger awareness (or finger gnosia – knowing which finger has been touched lightly without looking) seem to be related. He refers to these as the localist view and the functional view.  The localist view is that finger gnosia predicts math ability because the two abilities are supported by neighboring brain regions, which “tend to have correlated developmental trajectories.”  The connection is not causal.  According to the functional view, however, the two abilities are related because “the fingers are used to represent quantities and perform counting and arithmetic procedures.”  In this way the representation of numbers and of fingers “become entwined.”

Anderson’s idea is that the neural circuit supporting finger gnosia has been redeployed in support of magnitude representation and now serves both functions.  The reason for this redeployment is that a circuit supporting magnitude representation needs “a register for storing the number to be manipulated.”   And this requires a series of switches that can be independently activated, like the ones that represent whether and which fingers have been touched.  The tasks are structurally similar.  I’m not sure I fully understand the register idea, but I get the structural similarity.  Among the evidence for this view: the region associated with the representation of fingers is activated during adults’ arithmetic performance, and damage to that area disrupts performance on both finger gnosia and number magnitude tasks.  Anderson points out that his view does not rest on the use of the actual fingers in calculation.
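The “register” can be pictured as a bank of switches that can be flipped independently, just as individual fingers can be touched independently. The toy sketch below is my own illustration of that structural similarity, not a model from Anderson’s work: the same ten on/off switches that could record which fingers were touched can also store a magnitude.

```python
# A toy "finger register": ten independent switches. Reading which
# switches are on answers a finger-gnosia question (which fingers
# were touched?); summing them reads the register as a quantity.

def touch(register, finger):
    """Flip one switch on, as when a single finger is touched."""
    register[finger] = True

def magnitude(register):
    """Read the register as a number: how many switches are on."""
    return sum(register)

fingers = [False] * 10          # ten fingers, none touched yet
for f in (0, 1, 2):             # touch thumb, index, and middle finger
    touch(fingers, f)

count = magnitude(fingers)      # the same switches now encode "3"
```

The point of the sketch is only the structural one: any task needing a set of independently settable flags could, in principle, reuse circuitry that evolved to track the fingers.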

If Anderson is correct, this observation says something interesting about how the brain finds this structural similarity and then uses it.  It also suggests yet another way to look at how preexisting sensory structures will influence how we see and investigate mathematical structure since, as Bueti and Walsh argue in a 2009 paper, our discrete numerical abilities may have hitched an evolutionary ride on our motor experience with continuous ones like speed and distance, or time and space.


Julian Barbour, from metaphysics to mathematics to us

Julian Barbour is a theoretical physicist with a clear interest in tackling foundational issues and the errors of judgment that can lead physics theories astray.  One of these candidates for a mistaken judgment is time itself, and in 1999 Barbour authored the book The End of Time, published by Oxford University Press. He wrote an essay on time for one of the Foundational Questions Institute’s essay contests and answered some questions about his ideas in an Edge interview when the book was first published.

Back in July I wrote a blog post about how some modern thoughts point back to Leibniz’s view of the universe.  And Julian Barbour has found new value in Leibniz’s philosophy as well.  The value comes, in part, from the fact that key aspects of his philosophy can be translated into mathematical models.  Barbour published a paper in The Harvard Review of Philosophy entitled The Deep and Suggestive Principles of Leibnizian Philosophy.  The paper can be found at this website under the heading: Papers on Maximal Variety.

It is, in fact, the way Leibniz accounted for the infinite variation in our universe that Barbour uses to suggest a new direction for modern theories in physics. And this line of thought will lead, again, to the significance of perception in the creation of these theories.

Barbour first takes note of the transition from the notion that the universe is ordered, to the insight that its order changes from instant to instant.

Before the scientific revolution, the instinctive reaction of thinkers to the existence of perceived structure was to find a direct reason for that structure. This is reflected above all in the Pythagorean notion of the well-ordered cosmos: the cosmos has the structure it does because that is the best structure it could have….Kepler and Galileo were no less entranced by the beauty of the world than was Pythagoras, and they formulated their ideas in the overall conceptual framework of the well-ordered cosmos. However, both studied the world so intently that they actually identified aspects of motion (precise laws of planetary motion and simple laws of falling bodies and projectiles) that fairly soon led to the complete overthrow of such a notion of cosmos. The laws of the new physics were found to determine not the actual structure of the universe, but the way in which structure changes from instant to instant.

Barbour raises the question of whether modern science lacks a key idea, like a “structure-creating principle that has hitherto escaped us,” which might also address problems with time since as he notes,

the highly flexible and relational manner in which time is treated in Einstein’s theory of gravity is extremely difficult to reconcile with the role that time plays in quantum mechanics, since in the latter, time is essentially the external, absolute time that Newton introduced. In fact, some researchers in the field doubt whether time has any role at all to play in quantum cosmology, arguing that time is an emergent phenomenon.

Barbour points out that Leibniz critiqued Newton’s notion of an absolute space and time by asking the question:  How would God decide to put the universe ‘here’ rather than ‘there’?  The notion of absolute place must be incorrect.  Leibniz argued that space is no more than the order of coexisting things, whose place is understood solely by their positions relative to each other. Barbour finds in Leibniz’s philosophy “the seeds of a structure-creating first principle—and much more.”  Barbour believes that while Leibniz’s philosophy can look “quite fantastical,”

it is the one radical alternative to Cartesian-Newtonian materialism ever put forward.  It possesses enough definiteness to be cast in mathematical form—and hence to serve as a potential framework for natural science.

For Leibniz, Descartes’ idea was flawed.  It couldn’t address the variety of what we see. And Barbour points out that it is roughly Descartes’ perspective with which physics has aligned itself, although “on a much more secure empirical basis.”

Through much of the last century, one of the main goals of physics was to find the fundamental particles of nature. Even at the time when quantum mechanics was discovered, in 1925–1926, physicists believed that all matter was composed of only two fundamental particles—the electron and the proton. This picture does indeed look like a minimal extension of Cartesian reductionism. But during the course of the century, the number of so-called fundamental particles grew in a somewhat disconcerting manner, though our understanding of the way in which they interacted also progressed impressively… if the superstring enthusiasts are correct, we are almost back to Descartes.

Leibniz’s thoughts went in another direction.  His fundamental entity was a monad not an atom.  In the atomic idea, while there were different classes of atoms, within each class only their positions and speeds, in space and time, distinguished them.  Since for Leibniz, space and time did not have an independent existence, position in space and time could not be used to distinguish objects.

Leibniz held that the entire world consists of nothing but distinct individuals, and that the sole essence of these individuals is to have perceptions (not all of which they are distinctly aware of)…. The most radical element in the Monadology, postulated rather than explained or made directly plausible, is the claim that the perceptions of any one monad—its defining attributes—are nothing more and nothing less than the relations it bears to all the other monads. The monads exist by virtue of self-mirroring of each other; they all define each other.

Barbour points out that, aside from their being intrinsically interesting, Leibniz’s ideas have the potential to contribute to modern theories because they are ‘relational’ like general relativity and quantum mechanics, and Leibniz seems to provide the way to give the idea a mathematical structure.  Barbour has explored these mathematical structures with Lee Smolin and the models can be found in their paper.

But Leibniz’s monads are metaphysical points that are real (like physical points) and exact (like mathematical points).  The cosmos that emerges from this unfamiliar world will not be what we expect.  Barbour’s maximal variety models were developed with the aim of creating new physical theories.  And he finds that the models have two “intriguing aspects.”


First, if this model is ever transformed into some kind of fundamental description of the universe, physics will come to resemble biology: all of the entities in a maximal-variety configuration are created in a kind of ecological balance between competing individuals. Each is trying to be as individualistic as possible, but in a curious way this selfish behavior is necessary if anything is to exist at all (for to exist is to become differentiated and hence to emerge from the mist of nothingness).

There it is again: “physics will come to resemble biology” (which, in my opinion, mathematics already does).  But then, pushing a bit further, it seems clear that any significant progress in ‘ideas,’ or models, in physics will have to take us into account.

The second aspect warrants a lengthier discussion. Consciousness in a material world is so baffling that idealism has always seemed more cogent than materialism. But hitherto nothing significant in the way of mathematical support to rival the triumphs of physics based on the hypothesis of an external world has been forthcoming… To make idealism plausible, one needs laws that act directly and transparently on the raw stuff of consciousness: perceptions. (emphasis my own)

There are many things about the way Leibniz characterizes monads that are provocative.  And this one, which Barbour quotes is one of them:

And just as the same town, when looked at from different sides, appears quite different and is, as it were, multiplied in perspective, so also it happens that because of the infinite number of simple substances [monads], it is as if there were as many different universes, which are however but different perspectives of a single universe in accordance with the different points of view of the monads. And this is the means of obtaining as much variety as possible, but with the greatest order possible; that is to say, it is the means of obtaining as much perfection as possible.


The Irrationality of Mathematics?

When I write, I often choose my words very carefully in order to remove any opportunity the reader might have to make a quick judgment about the content of what I am saying.  I’m hoping they will keep thinking about it.  The unexpected pairing of words often accomplishes this, and in this spirit, I remember telling my husband (a particle physicist) that I thought mathematics was probably the most irrational of the sciences.  I don’t remember how he responded, so I may not have accomplished very much.  But I thought about it again today when I looked into the recently published book Thinking, Fast and Slow, by Daniel Kahneman.  The first chapter of the book appeared on scientificamerican.com this past June.  Kahneman has written what he calls “a psychodrama with two characters.”  The characters are mental processes he identifies as System 1 and System 2, which Kahneman uses to describe the relationship between two ways the brain works.

• System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.
• System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.

The narrative begins like this:

When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do. Although System 2 believes itself to be where the action is, the automatic System 1 is the hero of the book. I describe System 1 as effortlessly originating impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2. The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps. (emphasis my own)

It was this last sentence that got my attention.  The automatic activities attributed to System 1 are things as fundamental as detecting that one object is more distant than another, or orienting to the source of a sudden sound, as well as uniquely human skills developed by prolonged practice, like reading words on large billboards, driving a car, or even finding a strong move in chess.  But the mental actions of System 1 are involuntary and include innate skills that we share with other animals; System 1 cannot be turned off at will. Its involuntary action will mobilize the voluntary attention of System 2, described as deliberate, effortful, and orderly. The control of attention is shared by the two systems. System 1 and System 2 are not actually a pair of little agents in our heads.  They are what Kahneman calls “useful fictions” that help explain how the mind works.  I find it interesting that the extent to which an individual’s pupils are dilated indicates the extent to which System 2 is in use.

Jim Holt reviewed the book for the New York Times in November, 2011. He summarized the roles of System 1 and System 2 in this way:

More generally, System 1 uses association and metaphor to produce a quick and dirty draft of reality, which System 2 draws on to arrive at explicit beliefs and reasoned choices. System 1 proposes, System 2 disposes. So System 2 would seem to be the boss, right? In principle, yes. But System 2, in addition to being more deliberate and rational, is also lazy. And it tires easily. (The vogue term for this is “ego depletion.”) Too often, instead of slowing things down and analyzing them, System 2 is content to accept the easy but unreliable story about the world that System 1 feeds to it. “Although System 2 believes itself to be where the action is,” Kahneman writes, “the automatic System 1 is the hero of this book.” System 2 is especially quiescent, it seems, when your mood is a happy one.

The fast reads of System 1 create many of our mistakes of judgment, biases and illusions that are not easily overcome.  And while the more willful deliberations of System 2 can override these judgments, it can’t take over for System 1.  Holt’s review raises an interesting question. He points out that Kahneman “never grapples philosophically with the nature of rationality,” or what rationality is there to accomplish.  It can’t undo the biases and illusions created by System 1’s fast action, and it is impractical for us to reflect on every impression System 1 creates.  But System 2 does draw on the action of System 1 to formalize beliefs and make choices.  It should be clear, however, that Systems 1 and 2 are not isolatable systems with interacting aspects or parts.  There is no one part of the brain where they live.

This System 1/System 2 scheme caused me to think about mathematics on multiple levels. I can imagine System 1 and System 2 paralleling the relationship between an intuition and a proof in mathematics.  It does seem that much of the action in mathematics comes from hunches and perceived possibilities, which are then formally explored.  It may be that the quick actions of System 1, the ones that make associations and metaphors, actually drive the creation of new ideas or perceived possibilities in mathematics, while the laborious rigor of proof gives us a way to talk about them, or to make our fast read of the situation useful. The interesting thing about this possibility is that it puts the perception of mathematical possibilities outside what we usually think of as the rational side of our nature.   I can imagine that our more immediate experiences of magnitudes related to space, time, and quantity were, after prolonged practice, transformed (by associations and metaphors) into math ideas that got the attention of our ‘lazier’ System 2.  Studies in cognitive science already suggest that mathematics may be built on the brain circuitry that encodes these perceived magnitudes.  And as Kahneman says, “The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps.”

This ‘orderly series of steps’ may be related to the deductive reasoning of logic and mathematics, what we normally think of as the content of mathematics.  But if the ideas in mathematics are generated by actions more like those of System 1, then the source of mathematics (as Poincaré pointed out) would remain obscure and, as I have wanted to suggest, not fully rational.


Birds and the number 0

I’ve been working on an article that has me thinking about neuroscientific studies of the cerebral representations of magnitude, and it was brought to my attention today that Irene Pepperberg spoke at the 2012 Francis Crick Memorial Conference on Consciousness in Animals.

Pepperberg is famous for having worked for many years with an African gray parrot who appeared to understand concepts such as ‘bigger,’ ‘different,’ and ‘same,’ as well as the numerals 1 to 6 and the names of colors, and who could speak his answers to questions.  The famous bird’s name was Alex, and I didn’t know anything about him until today.  During her talk at the conference (which is available on their website) I heard the story of when Alex used the word “none” to mean zero things. Pepperberg pointed out that it was a use of the word “none” that he hadn’t been taught.  Brandeis University reported on this event in 2005.

Strikingly, Alex, the 28-year-old parrot who lives in a Brandeis lab run by comparative psychologist and cognitive scientist Dr. Irene Pepperberg, spontaneously and correctly used the label “none” during a testing session of his counting skills to describe an absence of a numerical quantity on a tray.  This discovery prompted a series of trials in which Alex consistently demonstrated the ability to identify zero quantity by saying the label “none.”

In a NY Times story, written 10 days after the bird died in 2007, the scene is more fully described:

A bigger leap came in an experiment about numbers, in which the parrot was shown groups of two, three and six objects.  The objects within each set were colored identically, and Alex was asked, “What color three?”

“Five,” he replied perversely (he was having a bad attitude day), repeating the answer until the experimenter finally asked, “O.K., Alex, tell me, what color five?”

“None,” the parrot said.

Bingo.  There was no group of five on the tray.

Alex had previously used the label “none” when asked to describe the similarity or difference between two objects where there was none.  But he had never been taught to use it to represent zero quantity.

Pepperberg’s work with Alex contributes to the growing evidence that, although the avian brain is different from the mammalian cortex, it is capable of higher order cognitive processing.  For me, the significance of observations like these is not simply that they change our ideas about animal consciousness, but that in so doing, they demonstrate, in unexpected ways, that meaning is not something superimposed on sensory processes, but rather something that emerges from within them.   It makes a different kind of sense that mathematics is one of the keys to human culture if mathematics is seen as something that actually grows out of some very fundamental cognitive processes.  It brings to mind some of the work of José Ferreirós, philosopher and historian of math and science. I find his take on the development of modern mathematics very ‘cognitively’ oriented.   He has written often about the significance of Riemann’s nineteenth-century work and, in one of his papers, suggests that Riemann’s conceptual approach to mathematics is actually more consistent with current views of biological and cultural evolution than the more traditional views held by many of his contemporaries.   According to Ferreirós, for Riemann,


…all knowledge arises from the interplay of  “experience” broadly conceived (Erfahrung) and “reflection” in the sense of reconceiving and rethinking (Nachdenken); it begins in everyday experiences and proceeds to propose conceptual systems which aim to clarify experience going beyond the surface of appearances.  Reason in the old sense is found nowhere…


An interesting note about the conference is that participants signed what they have named The Cambridge Declaration on Consciousness in Non-Human Animals.  The full text of the declaration can also be found on the website.  One of the points of the declaration was this:


Birds appear to offer, in their behavior, neurophysiology, and neuroanatomy a striking case of parallel evolution of consciousness.  Evidence of near human-like levels of consciousness has been most dramatically observed in African grey parrots.  Mammalian and avian emotional networks and cognitive microcircuitries appear to be far more homologous than previously thought.  Moreover, certain species of birds have been found to exhibit neural sleep patterns similar to those of mammals, including REM sleep and, as was demonstrated in zebra finches, neurophysiological patterns, previously thought to require a mammalian neocortex.  Magpies in particular have been shown to exhibit striking similarities to humans, great apes, dolphins, and elephants in studies of mirror self-recognition.