
Arithmetic, Generalization and Order: Harnessing Infinity

Today, I was working on a piece I’m writing about 19th century developments in mathematics and I saw something interesting.  In the piece, I draw particular attention to a few things.  One of these is the precision Weierstrass brought to the concept of a limit, removing all references to motion or geometry and giving it a very focused arithmetic definition.  At the same time, Riemann found the foundation of geometry in large, general notions like manifold or manifoldness. His general notions supported the non-Euclidean geometry that had already developed from investigations of Euclid’s parallel postulate. Finally, I wrote about Cantor’s insights into infinities – his observation that, for example, the set of rational numbers can be put into one-to-one correspondence with the set of natural numbers (the whole numbers we count with), while the set of real numbers is not countable and is therefore a larger infinite collection.  The set-theoretic ideas that Cantor finally shaped provide the ground for nearly all of modern mathematics.

These are very sketchy references to just a few of the prolific accomplishments of this period, a time during which a more conceptual approach to problems became favored.  But each of them had a significant impact on the future of the discipline. What I found interesting today is that they each wrestle with some of the fundamental features that define mathematics – arithmetic, generalization and order.  And they each also point back to (and solve) difficulties created by the notion of infinity.

The arithmetic Weierstrass brought to calculus relieved any lingering discomfort with hazy geometric definitions and infinitesimal quantities. While Riemann had no interest in the axiomatic issues in geometry, his conceptual manifold gave new meaning to the non-Euclidean geometries that were born of attempts to investigate the parallel postulate (the one that requires that parallel lines will not intersect no matter how far they are extended). There is no way to actually look at this infinite extension, and there is no way to prove the statement from the other axioms and definitions.  (About 30 years before Riemann’s famous lecture, János Bolyai and Lobachevsky independently showed that entirely self-consistent “non-Euclidean geometries” could be constructed in which the parallel postulate did not hold.)  And Cantor could see that there was some order (and arithmetic) to be found in otherwise unmanageable infinite collections.
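To give a flavor of that arithmetic precision, here is the standard ε–δ formulation of the limit (a textbook statement, my addition rather than anything quoted from the piece): $\lim_{x \to a} f(x) = L$ means that for every $\varepsilon > 0$ there is a $\delta > 0$ such that whenever $0 < |x - a| < \delta$, we have $|f(x) - L| < \varepsilon$. There is no motion and no geometry here – nothing but inequalities between numbers.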

Riemann acknowledged the influence of the philosopher Herbart who argued that space is not a container that holds the world around us, but more a kind of organizing principle of cognition.  Riemann’s manifold, as a collection of objects, also points to the idea of sets, important to the future of the discipline.

Infinite lengths, infinite collections, or an infinite number of times cannot actually be seen, but they can be precisely imagined.  Perhaps this is because they grow out of experiences (like repetition) that keep getting worked on by overlapping and complex cognitive mechanisms. Like the energy of wind and water, which seems to just be there, without direction or intent (and particularly unresponsive to our needs), an imagined infinity, after careful observation, can be captured, harnessed and directed.

In his history of set theory (Labyrinth of Thought), José Ferreirós Domínguez writes about Riemann’s most general ideas:

By establishing the theory of magnitudes upon the foundation of manifolds, Riemann transgressed the limits of the traditional conception of mathematics, turning it into a discipline of unlimited extent and applicability, since it embraced all possible objects.

These ideas would also lay the groundwork for topology, where a new kind of geometric intuition, together with set-theoretic ideas, again recasts the idea of continuity.

A nice essay on the significance of Riemann’s conceptual mathematics can also be found here.

Optical Realities: Mathematics and Visual Processes

I was reading up on some nineteenth century philosophy and science for a book project of mine and I found an essay by Timothy Lenoir called The Eye as Mathematician. It is a discussion of the construction of Helmholtz’s theory of vision.  The title suggests that the eye is acting like a mathematician.  My disposition is to think that mathematics is acting like the eye.

One of the passages I found striking was this from Helmholtz:

I maintain, therefore, that it cannot possibly make sense to speak about any truth of our perceptions other than practical truth. Our perceptions of things cannot be anything other than symbols, naturally given signs for things, which we have learned to use in order to control our motions and actions. When we have learned how to read those signs in the proper manner, we are in a condition to use them to orient our actions such that they achieve their intended effect; that is to say, that new sensations arise in an expected manner.

Read in a particular light (perceptions as symbols, as naturally given signs for things that we learn to read in the proper manner) this description reassociates fleshy things with abstract things. What is known as Helmholtz’s constructivist theory of vision was inspired by what Lenoir describes as “Herbart’s conception of the symbolic character of space and of the deep connection between motion and the construction of visual space.”  Herbart was an early 19th century philosopher whose influence was acknowledged by Riemann at the beginning of his famous 1854 lecture on the foundations of geometry.

The essay goes on to explain the way Helmholtz saw visual experience as “a symbolic shorthand for aggregates of sensory data:”

Perceptions of objects in space, for instance, link together information on direction, size, and shape, with that of color, intensity, and contrast. None of these classes of information is simply given; rather, they are the result of measurements carried out by the components of the visual system. Moreover, these data aggregates are not linked with one another by an internal logic given in experience; rather, the connections are constructed by trial, error, and repetition. The more frequently the same linkages of sensory data are effected, the more rapidly the linkages are carried out by the brain; for the conscious mind in this process, they come to have the same force of necessity as logical inference.

I was struck by a few things in these words.  One is the statement that “the connections are constructed by trial, error, and repetition,” which lines up well with current statistical models of learning.  The other is that “for the conscious mind in this process, they come to have the same force of necessity as logical inference.”  Perhaps this is because the conscious mind, in developing logical inference, is actually borrowing something from the processes that occur outside of our awareness.
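As a toy illustration of that alignment (my own sketch with invented numbers – not Helmholtz’s model or any particular learning theory): an association weight that strengthens with each co-occurrence will quickly come to favor the most frequently repeated linkage.

```python
# Toy sketch: repetition strengthens a linkage between sensations.
# (My own illustration with invented numbers, not a model from the essay.)
weights = {}

def observe(a, b, rate=0.2):
    """Strengthen the association between two co-occurring sensations."""
    w = weights.get((a, b), 0.0)
    weights[(a, b)] = w + rate * (1.0 - w)  # approach full strength with use

for _ in range(10):
    observe("red round shape", "sweet taste")   # frequent pairing
observe("red round shape", "bitter taste")      # rare pairing

print(weights)  # the repeated linkage dominates, as in the quoted passage
```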

In the midst of my time in graduate school, I felt strongly that mathematics was, itself, a way to direct our eyes, or what we were able to see.  Before I had much training, I would say to my non-mathematician friends that different mathematical ideas were like grids you could hold up in front of your eyes that would change, in actual substance, what you saw – as if the mathematics itself were a visual process.  The grid suggests that I was thinking more in terms of changing coordinates than the conceptual shifts I was beginning to see in advanced topics.  But the thought was only encouraged as I learned more about mathematics, physics and, more recently, theories of vision.

While neuroimaging techniques have changed our understanding of how the body constructs visual images, the insights of Herbart and Helmholtz into the constructive nature of vision still stand.  And their language certainly suggests that the body is doing something very much like mathematics – reading signs and symbols, or constructing various spaces, which Herbart called ‘the symbol of the possible community of things standing in a causal relationship.’

In one of my earlier posts, I described Poincaré’s observation that visual space is not Euclidean.  One of the reasons for his claim is that, in his analysis of it, visual space is determined by at least four independently varying parameters.  While the reasoning is somewhat different, I found more talk about visual dimensions on The VisionHelp Blog (which contains a good deal of information on vision processes, ailments, and therapies).  They referenced a Discovery show about wormholes which, given the unanswered question about the number of dimensions of space, featured an opening segment on vision.  In another one of their posts, a Poincaré or Bloch sphere illustrates how, in addition to the three primary vectors of stereopsis (one of our distance tools), there are actually many sub-vectors operating with movements of the eyeballs as we look at different angles.

The visual theorist Semir Zeki has made the observation that “our inquiry into the visual brain takes us into the very heart of humanity’s inquiry into its own nature” – as does our inquiry into mathematics!

Changing the Evolutionary Minded?

I found myself tied a bit to the theme of last week’s blog when my attention was brought to a very recent article in PLoS Biology called Darwin in Mind: New Opportunities for Evolutionary Psychology.  In it, a team of biologists, psychologists and philosophers from the Netherlands, the United States and Scotland suggests that the 30-year-old interpretation of the evolution of the mind put forward by evolutionary psychologists can now be remodeled by drawing on recent work in related disciplines (like neuroscience, cognitive science, genetics, and developmental psychology).  The old view is characterized by a few beliefs:  that human behavior is unlikely to be adaptive in modern environments, that human cognition is task-specific, and that there is a universal human nature.

This perspective presumes that psychological mechanisms have evolved in response to the features of an ancestral environment (like the African Pleistocene Savanna) and that gene complexes cannot respond quickly to natural selection, creating an adaptive lag.  Among other things, the authors make the following observation:

Recent trends in developmental psychology and neuroscience have instead stressed the malleability of the human brain, emphasizing how experience tunes and regulates synaptic connectivity, neural circuitry and gene expression in the brain, leading to remarkable plasticity in the brain’s structural and functional organization. Neuroscientists have been aware since the 1980s that the human brain has too much architectural complexity for it to be plausible that genes specify its wiring in detail; therefore, developmental processes carry much of the burden of establishing neural connections.

I found ‘A Review of the Evolutionary Psychology Debates’ from 1999, written by Melanie Mitchell at the Santa Fe Institute.  What I realized after reading it was how tricky it is to apply biological ideas to psychological theories.  Her paper quotes many of the authors of evolutionary psychology, and I think this particular statement points to the problem:

to understand the relationship between biology and culture one must first understand the architecture of our evolved psychology…..

It is very difficult to keep the opinions and prejudices of our current culture out of an analysis of our ‘evolved psychology.’ But we do want to understand the relationship between biology and culture.  And I think this happens only when we are willing to give the body its due, when we are willing to imagine that the way the body may direct itself is far from clear to us.  One of the observations Mitchell makes from the literature is this one (which I like very much):

Pinker asserts that “most of our evolution took place” on the African savanna.  By “our evolution” he means that of our most human-like ancestors—those with brains similar to ours. But, as Ahouse and Berwick point out, it would be equally true to say that “over 99 percent of our evolutionary history was spent in (and most of our genes arose in) a warm, salty sea.”  Our sea ancestors also had nervous systems, ancestral to our own. Would it not be plausible to speculate that some of the structures important to human psychology arose, at least in part, during that time and were originally adapted for those conditions? And could this have had as much of an effect on their ultimate structure and function as the time spent in the Pleistocene? It is hard to say for sure.

This is exactly the right correction to make to the evolutionary psychology perspective.  There is no easy way to understand how an ancient cell’s signaling processes developed into the nervous system with which we breathe, find nourishment, move and imagine.  And now I’ve said the word that leads me to mathematics – imagine.  I think that we have not been able to really refresh the way we see ourselves for a very long time.  Part of what motivates this blog is the exploration of how the effectiveness and conceptual reach of mathematics can bring new insight to understanding how the body does what it does.  Abstraction is the key in mathematics, and current work in both cognitive science and neuroscience looks at how primitive an abstraction can be in learning and even in vision.  Neuroscientists have been able to determine that, in vision, specialized cells respond preferentially to straight lines at a particular angle.  About this, the visual theorist Semir Zeki says, “the cell abstracts verticality without being concerned with what is vertical.” Cognitive scientists like Joshua Tenenbaum have observed how quickly an abstraction can be formed (in both children and adults), and they now consider that we have an innate stock of abstract concepts.  The important point is that, in both vision and learning, the abstraction initiates the building of structure.  These abstractions are not thoughtful in the usual sense but inherent, natural and automatic.  Mathematics, then, may reflect something very basic to human life, something grounded in the very way we exist.
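Zeki’s remark about a cell abstracting verticality has a very simple computational analogue. Here is a toy sketch (my own, not Zeki’s model): a small filter that responds to vertical contrast wherever it occurs, without caring what the vertical thing is.

```python
import numpy as np

# A toy orientation-selective "cell" (my illustration, not Zeki's model):
# a filter that responds to vertical contrast, whatever the object is.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])   # Sobel-like vertical-edge detector

def response(patch):
    """The 'cell' fires strongly when its 3x3 patch contains a vertical edge."""
    return abs(int((patch * kernel).sum()))

vertical_edge = np.array([[0, 0, 9],
                          [0, 0, 9],
                          [0, 0, 9]])
horizontal_edge = np.array([[9, 9, 9],
                            [0, 0, 0],
                            [0, 0, 0]])
print(response(vertical_edge))    # 27: strong response to verticality
print(response(horizontal_edge))  # 0: indifferent to horizontal structure
```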

 

 

Bugs in the brain?

NPR recently hosted an interview with Dean Buonomano, neuroscientist and author of the book Brain Bugs: How The Brain’s Flaws Shape Our Lives.

I usually like evolutionary perspectives, and enjoy thoughts on how our experience, however abstract and complex it may seem, is somehow built on the biological stuff of our world. But the best of this kind of thinking will break down limitations that we may have imagined about ourselves, rather than establish them.

In the interview, Buonomano took note of the limitations of the brain’s memory systems and the problems that could arise from the associative nature of cognitive systems. He attributed the character of these systems to primitive survival needs that could not anticipate our modern lives, yet now shape them to a large extent.  I find this approach to cognitive science to be hampered by judgment – the kind of judgment that could obscure possible insights.  Of course, I want to bring attention to a particular conclusion which I think is misguided – the one about mathematics!  It was summarized by Davies in this way:

We’re not naturally good at quantitative thinking, for example. Buonomano says many of these weaknesses are a product of our evolution. Our ancestors needed to recognize a dangerous animal quickly but didn’t need to know whether there were 12 or 13 of them.

There are many problems with this judgment, not the least of which is that it encourages a stubborn and growing malaise about mathematics. The emergence and development of mathematics is one of the more intriguing things about us, and exploring the manner in which it is natural is a very worthwhile enterprise.  Some of the fruits of these efforts can be seen in the work of the neuroscientist Stanislas Dehaene and the cognitive scientist Rafael Núñez.

Buonomano takes note of our talent for pattern recognition, our ability to use context, to grasp the whole, and the partnership of visual and linguistic systems.  All of these observations are interesting. He referenced a study I once blogged about, which found that I am more likely to be favorably impressed with a new acquaintance if I’m holding a warm cup of coffee when I meet them than if I’m holding an iced tea. A study like this certainly suggests that there is something unreliable about my evaluation, but it also highlights the correspondence between physical and thoughtful things. The impulse to think in terms of ‘bugs’ can be short-sighted and shallow. In the book’s introduction, Buonomano gives in to the danger of this kind of judgment when he says:

The fact of the matter is your brain was simply not built to store unrelated bits of information such as lists of names and numbers.

Taking note of the patterns in mental processes can provide insights into how the brain (or the body) accomplishes some of what it does, but this is far from making any claim about what the brain was built to do.

Outer and Inner Limits of the Brain (or the body)

A recent Scientific American article on the physical limits of intelligence raised more questions for me than it answered with its intriguing analysis of neural mechanisms.  The point of the article is to consider that it may be physically impossible for humanity to become more ‘intelligent’ with further evolution.  I think we would all agree that intelligence is a pretty slippery concept, and I find it very unlikely that we could actually evaluate its future.  The way mathematics cracks open conceptual as well as observable possibilities in our experience introduces another consideration; this is a kind of mental work that I don’t think could be measured by memory or even calculation skills. I was captivated, however, by what I believe is the real value of this work – specifically, the kind of detail of brain function we have been able to discern (energy requirements, how signals happen, the distances signals travel – all given particular body-to-brain ratios and the shared needs of the rest of the body).   Perhaps the crux of the argument for our limited future is captured in these paragraphs:

If communication between neurons, and between brain areas, is really a major bottleneck that limits intelligence, then evolving neurons that are even smaller (and closer together, with faster communication) should yield smarter brains. Similarly, brains might become more efficient by evolving axons that can carry signals faster over longer distances without getting thicker. But something prevents animals from shrinking neurons and axons beyond a certain point. You might call it the mother of all limitations: the proteins that neurons use to generate electrical pulses, called ion channels, are inherently unreliable.

Ion channels are tiny valves that open and close through changes in their molecular folding. When they open, they allow ions of sodium, potassium or calcium to flow across cell membranes, producing the electrical signals by which neurons communicate. But being so minuscule, ion channels can get flipped open or closed by mere thermal vibrations. A simple biology experiment lays the defect bare. Isolate a single ion channel on the surface of a nerve cell using a microscopic glass tube, sort of like slipping a glass cup over a single ant on a sidewalk. When you adjust the voltage on the ion channel–a maneuver that causes it to open or close–the ion channel does not flip on and off reliably like your kitchen light does. Instead it flutters on and off randomly. Sometimes it does not open at all; other times it opens when it should not. By changing the voltage, all you do is change the likelihood that it opens.
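The behavior being described is often caricatured as a biased coin flip: voltage doesn’t open or close the channel, it only sets the probability of finding it open. Here is a minimal simulation sketch of that idea (my own illustration with invented probabilities, not anything from the article):

```python
import random

def simulate_channel(p_open, steps=20):
    """Observe one ion channel over time: at each step it is open with
    probability p_open. Changing the voltage only changes p_open; the
    state itself still flutters randomly."""
    return ["open" if random.random() < p_open else "closed"
            for _ in range(steps)]

random.seed(1)
# "Raising the voltage" here just raises the open probability.
print(simulate_channel(p_open=0.2))  # mostly closed, with stray openings
print(simulate_channel(p_open=0.8))  # mostly open, with stray closures
```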

I find this particular detail fascinating.  Our metaphors for the electrochemical activity in the brain are usually switch-like; ‘fluttering on and off randomly’ is not what I would expect to hear about.  The accidental opening of an ion channel can cause an axon to deliver an unintended signal.   Smaller, closer neurons could push this accidental firing too far:

In a pair of papers published in 2005 and 2007, Laughlin and his collaborators calculated whether the need to include enough ion channels limits how small axons can be made. The results were startling. “When axons got to be about 150 to 200 nanometers in diameter, they became impossibly noisy,” Laughlin says. At that point, an axon contains so few ion channels that the accidental opening of a single channel can spur the axon to deliver a signal even though the neuron did not intend to fire. The brain’s smallest axons probably already hiccup out about six of these accidental spikes per second. Shrink them just a little bit more, and they would blather out more than 100 per second. “Cortical gray matter neurons are working with axons that are pretty close to the physical limit,” Laughlin concludes.

The article goes on to explain how this kind of problem is not unique to biology.  Engineers face similar limitations with transmission technologies.  But unlike engineers who can go back to the drawing board, “evolution cannot start from scratch: it has to work within the scheme and with the parts that have existed for half a billion years…”

Having said that, the author concludes that, as happens with social insects, our communal intelligence (enhanced first by print and now by the electronic sharing of memory and experience) may be a better way for the human mind to expand.

I think it’s worth bringing attention back to the ‘neuronal recycling’ hypothesis proposed by Stanislas Dehaene, described in detail in this pdf. The document pays particular attention to reading and arithmetic and makes the following claim:

I conclude the paper by tentatively proposing the “neuronal recycling” hypothesis: the human capacity for cultural learning relies on a process of pre-empting or recycling preexisting brain circuitry. According to this third view, the architecture of the human brain is limited and shares many traits with other non-human primates. It is laid down under tight genetic constraints, yet with a fringe of variability. I postulate that cultural acquisitions are only possible insofar as they fit within this fringe, by reconverting pre-existing cerebral predispositions for another use. Accordingly, cultural plasticity is not unlimited, and all cultural inventions should be based on the pre-emption of pre-existing evolutionary adaptations of the human brain. It thus becomes important to consider what may be the evolutionary precursors of reading and arithmetic.

The extent to which the conceptual possibilities of mathematics have given us a way to see the quantum mechanical substructure of our universe could only be understood as some exploitation of our present-day brain structure.  I would hazard a guess that the speed of transmitted signals will do little to reveal how we have accomplished this completely abstract vision of our world.  Please feel free to let me know what you think.

Overstepping the limits of conscious judgment

I’ve thought about mathematics as a reflection of hard-wired cognitive processes, or even as our own consciously rendered image of them.  In this light, mathematics’ conceptual weaves look particularly organic, even fleshy.  I’ve pursued this perspective because I find that it helps me see two things better:  mathematics itself, and what qualifies as physical.  What I realized today is that it also contradicts reductionist views of human experience, and what has come to be called biological determinism, because we have yet to characterize mathematics’ imaginative power.  It demonstrates the inexplicably inexhaustible range of what can be thought.

All of this came to mind more clearly while I was reading a recent Scientific American blog by John Horgan. Horgan is writing in defense of Stephen Jay Gould’s “ferocious opposition to biological determinism,”  a view characterized by claims that “social and economic differences between different groups – primarily races, classes and sexes – arise from inherited, inborn distinctions…”

Horgan’s blog focuses on an article in PLoS Biology that critiques Gould’s critique of the work of Samuel George Morton, a 19th century physician who compared the sizes of skulls collected from around the world and concluded that, on average, whites had larger skulls.  This was used as evidence that whites and blacks did not share a common ancestry.  In his blog, Horgan also referenced the work of the evolutionary biologist Jerry Coyne and the neuroscientist Sam Harris, who, Horgan says, insist:

that free will is an illusion because our “choices” are actually all predetermined by neural processes taking place below the level of our awareness.

I’m always fascinated by the extent to which the body is built to do what it does, and the extent to which it is living outside of our awareness.  But I have yet to find that seeing this in any way diminishes the significance of willful action.  In fact, what it calls into question is the integrity, or the soundness, of conscious judgment (like Coyne’s judgment that free will is an illusion).

When action becomes understood as interaction, as happened in the discovery of the classical laws of physics, it’s easy to be tempted by a deterministic view, or a purely mechanical view of cause and effect.  But physics has already found the error in this reasoning.  And, as I see it, mathematics consistently reveals the limits of conscious judgment by unveiling exotic, counter-intuitive, yet fully meaningful possibilities that repeatedly broaden scientific horizons.  It replaces judgment with necessity and, in the sciences, manages to correct perspectives that are molded more by expectation than by the laborious introspection that mathematics makes possible.

I would also like to suggest that while mathematics can look almost spontaneous or emergent (with abstraction being fundamental to vision and learning), it is also directed.  We’re very far from understanding the nature of the will. But from the interaction of the will, the senses, and the world that shapes them come profoundly imaginative vision and insight, which are often first given shape in mathematics.

 

Modeling the baby’s view

In a recent post, I referred to a study at MIT that suggested that infants reason by mentally simulating possible scenarios in a given configuration (like different colored objects bouncing around in a container), then figuring out which outcome is most likely based on just a few physical principles (for example, that the object nearest the exit of a container is likely to bounce out first).  Another MIT study was just published in the June 24 issue of Science.  It reported that “16-month-old infants can, based on very little information, make accurate judgments of whether a failed action is due to their own mistake or to circumstances beyond their control.”

MIT’s own reporting on the study says the following:

Infants who saw evidence suggesting the agent [using the toy] had failed tried to hand the toy to their parents for help, suggesting the babies assumed the failure was their own fault. Conversely, babies who saw evidence suggesting that the toy was broken were more likely to reach for a new toy (a red one that was always within reach).

Much of the significance of these studies lies in the extent to which they support probabilistic inferential learning models.

Schulz says she was at first “blown away” that 16-month-olds could use very limited evidence (the distribution of outcomes across the experimenters’ actions) to infer the source of failure and decide whether to ask for help or seek another toy. That finding lends strong support to the probabilistic inferential learning model.
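To see the shape of the inference the babies are being credited with, here is a toy Bayesian sketch (my own illustration with invented probabilities – not the model used in the study): two hypotheses, “the toy is broken” versus “the user is making mistakes,” are scored against the pattern of outcomes across different users.

```python
# Toy Bayesian attribution sketch: is a failure the toy's fault or the
# user's? Invented probabilities, for illustration only.

def likelihood(trials, p_fail):
    """P(observed trials | hypothesis); trials are (user, failed) pairs."""
    p = 1.0
    for user, failed in trials:
        pf = p_fail(user)
        p *= pf if failed else 1 - pf
    return p

# H_broken: the toy fails for anyone.  H_user: only user A is error-prone.
h_broken = lambda user: 0.9
h_user = lambda user: 0.9 if user == "A" else 0.1

scenarios = {
    "both users fail": [("A", True), ("B", True)],
    "A fails, B succeeds": [("A", True), ("B", False)],
}
for name, trials in scenarios.items():
    lb, lu = likelihood(trials, h_broken), likelihood(trials, h_user)
    print(f"{name}: P(toy broken | data) = {lb / (lb + lu):.2f}")  # flat prior
```

When everyone fails, the toy takes the blame; when the failures track one user, the user does. The ‘distribution of outcomes across the experimenters’ actions’ is doing all the work.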

There is a growing interest in using probabilistic inferential models to investigate questions about cognitive development.  They provide an alternative theoretical perspective on human development by finding some middle ground between the opposing views of nativists and empiricists.   Nativists are of the opinion that we possess innate conceptual primitives while empiricists believe that there are only perceptual primitives and that learning uses essentially associative mechanisms. The middle ground is sometimes called rational constructivism, where learning mechanisms are thought to be rational, inferential and statistical.

All of this work is relevant to mathematics in at least two ways.  One has to do with how statistical models have come to be used to predict human behavior.  Probabilistic models make use of tools that were developed in statistics and computer science only over the last 20 years.  But also worth noting is what is implied by the fact that these models are such accurate predictors of behavior. It seems that our bodies are employing statistical methods in perception and learning.  This is a provocative idea. In a document intended for a special issue of the journal Cognition, Berkeley’s Fei Xu and Thomas L. Griffiths suggest this:

Perhaps in addition to a set of perceptual (proto-conceptual?) primitives, the infant also has the capacity to represent variables, to track individuals, to form categories and higher-order units through statistical analyses, and maybe even the representational capacity for logical operators such as and/or/all/some – these capacities enable the infant to acquire more complex concepts and new learning biases. (see Bonatti, 2009 and Marcus, 2001 for related discussions).

Reporting on the content of a workshop, Fei Xu and Tom Griffiths also make the following observation:

The hierarchical Bayesian approach provides a richer picture of learning than that assumed in many computational approaches, with the learner considering not just the solution to a particular problem but also forming generalizations about what solutions to these problems look like. In this way, a learner can form “overhypotheses” that guide future inferences.
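To make “overhypothesis” a little more concrete, here is a toy sketch in the spirit of the classic bags-of-marbles example (my own simplification with invented parameters, not the model from the workshop): after seeing several bags that are each uniform in color, a learner comes to favor the higher-order hypothesis that bags tend to be color-pure, which then guides its inference about a brand new bag.

```python
import numpy as np
from scipy.stats import beta

# Toy "overhypothesis" learner (invented parameters, illustration only).
# Each bag has an unknown proportion theta of black marbles. Two
# higher-order hypotheses describe how bags are generated:
#   mixed: bags are well mixed    -> theta ~ Beta(5, 5)     (peaked at 1/2)
#   pure:  bags are color-uniform -> theta ~ Beta(0.2, 0.2) (U-shaped)
thetas = np.linspace(0.001, 0.999, 999)
hyper = {"mixed": beta.pdf(thetas, 5, 5), "pure": beta.pdf(thetas, 0.2, 0.2)}
for k in hyper:
    hyper[k] /= hyper[k].sum()          # normalize on the grid

def bag_evidence(black, total, theta_prior):
    """P(one bag's draws | overhypothesis), integrating out theta."""
    like = thetas**black * (1 - thetas)**(total - black)
    return float((like * theta_prior).sum())

bags = [(10, 10), (0, 10), (10, 10)]    # three bags, each uniform in color
post = {k: 1.0 for k in hyper}          # flat prior over overhypotheses
for black, total in bags:
    for k in post:
        post[k] *= bag_evidence(black, total, hyper[k])
z = sum(post.values())
print({k: round(v / z, 3) for k, v in post.items()})  # "pure" dominates
```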

Variables, operators, generalizations about future solutions – all of this suggests that the seeds of mathematics can be found in how the body lives.

 

 

Suppressed Geometry?

There are countless ways to explore what may be called the two faces of mathematics – algebra and geometry.  Modern mathematical systems have their roots in both algebraic and geometric thinking.  Like the organs of the body, which are built on the redirected sameness of cells, algebra and geometry live in all manner of relationship in modern mathematics.  But I found a particularly novel focus on their distinctness in an article by Peter Galison (published in the Autumn 2000 issue of Representations by the University of California Press).

The article looks at the intriguing conflict between two things – that Paul Dirac considered himself a geometer and that there are no visual, let alone geometric, presentations in his publications or lectures. Galison says of Dirac:

Lecturing in Varenna, also in the early 1970s, he recalled the “profound influence” that the power and beauty of projective geometry had on him. It gave results “apparently by magic; theorems in Euclidean geometry which you have been worrying about for a long time drop out by the simplest possible means” under its sway. Relativistic transformations of mathematical quantities suddenly became easy using this geometrical reformulation. “My research work was based in pictures. I needed to visualise things and projective geometry was often most useful e.g. in figuring out how a particular quantity transforms under Lorentz transf[ormation]. When I came to publish the results I suppressed the projective geometry as the results could be expressed more concisely in analytic form.”

The article considers the effect that social and psychological choices can have on science and mathematics and, in this case, the suppression of projective geometry.  Galison says of his idea:

My inclination, then, is to use the biographical-psychological story not as an end in itself, but rather as a registration of Dirac’s arc from Bristol to Cambridge, to an identification with Bohr’s and Heisenberg’s Continental physics. In that trajectory, Dirac was sequentially immersed in a series of territories in which particular strategies of demonstration were valued.

There are a few things I find interesting about this historical look at emerging thoughts.  One is how mathematics can be seen as the way to greater things, illustrated by these remarks:

Projective geometry came to stand at that particular place where engineering and reason crossed paths, and so provided a perfect site for pedagogy.

or

In 1825, Dupin proclaimed in his textbook that geometry “is to develop, in industrials of all classes, and even in simple workers, the most precious faculties of intelligence, comparison, memory, reflection, judgment, and imagination…. It is to render their conduct more moral while impressing upon their minds the habits of reason and order that are the surest foundations of public peace and general happiness.”

or this remark from De Morgan in 1868:

“Geometry is intended, in education, . . . to [unmask] the tricks which reason plays on all but the cautious, plus the dangers arising out of caution itself.”

Another is how judgments of correctness are made within the mathematics community, referenced with this observation:

Geometry did not, however, survive with the elevated status it had held in France at the highwater mark of the Polytechniciens’ dominance. Analysts displaced the geometers. Among their successors was Pierre Laplace, for whom pictures were anathema and algebra was dogma.

And finally, there is the effect of very personal manners of thought. In 1925, when looking at the proof sheets of an article from the young Werner Heisenberg, something in particular got Dirac’s attention:

In the course of his calculations Heisenberg had noted that there were certain quantities for which A times B was not equal to B times A. Heisenberg was rather concerned by this peculiarity. Dirac seized on it as the key to the departure of quantum physics from the classical world. He believed that it was precisely in the modification of this mathematical feature that Heisenberg’s achievement lay. It may well be, as Darrigol, Mehra, and Rechenberg have argued, that the very idea of a multiplication that depends on order came from Dirac’s prior explorations in projective geometry.
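For readers who want to see a multiplication that depends on order in the simplest possible setting, two small matrices suffice (my illustration; not Heisenberg’s actual quantum-mechanical arrays):

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])

print(A @ B)  # [[1 0], [0 0]]
print(B @ A)  # [[0 0], [0 1]] -- A times B is not equal to B times A
```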

Galison wants to understand:

the historical production of a kind of reason that comes to count as private

or to answer the question:

how was geometry infolded to become, for Dirac, quintessentially an interior form of reasoning?

He offers this description of Dirac’s later career and exclusively private use of projective geometry:

When Dirac moved to Cambridge to begin studying physics, he took with him this projective geometry and used it to think. But that thinking had now to be conducted only on the inside of a subject newly self-conscious of its separation from the scientific world. Dirac’s maturity was characterized again by flight, this time to Heisenberg’s algebra, an antivisual calculus that at once broke with the visual tradition in physics and with the legacy of an older school of visualizable, intuition-grounded descriptive geometry. With an austere algebra and Heisenberg’s quantum physics, Dirac stabilized his thought through instability: working through a now infolded projective geometry joined by carefully hidden passageways to the public sphere of symbols without pictures.

This is a very nice, novel perspective on the evolution of ideas, one that promotes the feeling that mathematics is deeply human and sometimes very personal.

 

Ants, Instincts and Vectors

I happened upon an article in Plus about the vector analysis that ants seem to be using to find their way home.  Studies exploring insect navigation are relevant not only to building robot navigation tools, but also to understanding the extent to which cognitive structures exist in other living things (and, perhaps, how they exist).  Unlike what we can do with other primates, or other mammals, we can’t participate in much more than the travel patterns of an insect.  But a careful look at these travel patterns has revealed some very mathematical instinctual behavior.

The article in Plus does a nice job of explaining what studies have shown about how a foraging ant finds its way home: the ant creates a kind of vector analysis of the path home.

Ants use a mechanism called path integration, which requires them to measure distances and direction.

The sum of the first $i$ vectors gives you a vector which points from the origin $(0,0)$ to the current location. The negative of the vector points from the current location straight back to the nest. So to know your way back home, you don’t need to remember all the vectors you travelled along — you simply add the current one to the last total and take the negative.

The vectors are built by neural circuits that can register distance and direction information.

Ants can approximate distances by counting their steps and use the position of the Sun as a compass to keep track of the direction of each segment of an outward foraging route. Through evolution, ants have developed neural circuits in their brain which can take information about distance and direction and produce an output which is an approximation of the appropriate vector maths. The result is a continuously updated home vector.
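As a minimal sketch of what that vector bookkeeping amounts to (my own illustration, assuming each leg of the trip is recorded as a distance and a compass heading):

```python
import math

# Minimal path-integration sketch: keep a running sum of the legs of the
# outward trip; the negative of that sum always points back to the nest.
# (An illustration of the idea, not a model of the ant's neural circuits.)
def add_leg(position, distance, heading_deg):
    """Add one leg of travel (distance, compass heading) to the running sum."""
    x, y = position
    return (x + distance * math.cos(math.radians(heading_deg)),
            y + distance * math.sin(math.radians(heading_deg)))

position = (0.0, 0.0)              # the nest is the origin
for distance, heading in [(5, 0), (3, 90), (4, 45)]:
    position = add_leg(position, distance, heading)

home_vector = (-position[0], -position[1])
print("current position:", position)
print("vector pointing home:", home_vector)
```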

The path is built and corrected in an iterative way: the ant can repeatedly correct small errors using visual information about its surroundings.  The ant’s vision does not have enough resolution to use actual objects in its field of view (the way we do when we use landmarks), but it makes a different kind of use of visual data.  Apparently it can take a snapshot of its view of home and has a way to quantify the changes in subsequent views that are caused by its own movement.   It can determine the difference between two views and then move in the direction that reduces the quantitative value of this difference.  When the difference is zero, the ant is home.
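Here is the shape of that idea as a toy sketch (my own version, treating a ‘view’ as a handful of brightness samples and the comparison as a sum of squared differences):

```python
import numpy as np

# Toy snapshot-homing sketch (an illustration, not the ant's mechanism):
# the ant stores a view of home, then repeatedly steps in whichever
# direction most reduces the mismatch between current view and snapshot.

def brightness(x, y):
    """A fake visual world built from a few fixed landmarks."""
    landmarks = [(3.0, 1.0), (-2.0, 4.0), (1.0, -3.0)]
    return sum(np.exp(-((x - lx)**2 + (y - ly)**2) / 10.0)
               for lx, ly in landmarks)

def view(pos):
    """Sample brightness at a ring of points around pos -- a crude retina."""
    x, y = pos
    return np.array([brightness(x + dx, y + dy)
                     for dx, dy in [(1, 0), (0, 1), (-1, 0), (0, -1),
                                    (1, 1), (-1, -1)]])

snapshot = view((0.0, 0.0))                       # stored view of home
pos = np.array([6.0, 5.0])                        # the displaced ant
moves = [np.array(m, dtype=float) * 0.5
         for m in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]]
for _ in range(100):
    # try each move, keep the one with the smallest view mismatch
    pos = min((pos + m for m in moves),
              key=lambda p: float(np.sum((view(p) - snapshot)**2)))
print("final position:", pos)                     # should settle near (0, 0)
```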

There is a website that collects papers and news on insect and robot navigation.

It may be unexpected that such complex circuitry governs ant travel. But it’s probably just our attempt to outline or model aspects of something we can’t quite grasp (namely, the way a creature belongs to its world) that makes it look that way.  What I find noteworthy is that this mathematics is biological.  And it suggests to me that there may be a way to recast the persistent mind/body dualism, or even the debate over whether mathematics exists outside of us, in the world somehow.  Without our translation, modeling or formalization of it, it exists, at the very least, in the interaction of the body with the world.

 

Bayesian Models from the Eye to the Cosmos

My last post caused me to survey some things related to Bayesian statistics as they relate to mathematics and cognition.  First, I want to say that despite the fact that I have been looking more closely at 19th century developments in mathematics, I didn’t know until today that Laplace, in 1814, described a system of inductive reasoning based on probabilities that would today be recognized as Bayesian, or that in the 1860s Hermann Helmholtz probabilistically modeled the brain’s ability to shape the flux of sensory data into our perceived world.

A survey of the growing applications of Bayesian probabilities leads through vastly different landscapes – from inside of us (how the nervous system accomplishes the perception of the world), to how we learn and are able to make some very accurate everyday predictions, and finally to how we investigate the enormously far-away fundamental fabric of our universe. Bayesian probabilities are distinguished by the fact that they change with evidence, or growing information. Looking at their applications makes it seem like all things human have some mathematical wrapping.

The everyday example was given in the research article Optimal Predictions in Everyday Cognition. In the study, individuals were asked to make interval estimates – like how long a movie might run, how much money it will gross, or how long someone might live.   With respect to life span, participants were asked: how long might the person you just met live (given their age when you met them)?  A Bayesian predictor uses Bayes’ rule: the probability that the person will live to a particular age, given their age when you met them, is proportional to the product of:

(the likelihood of meeting someone at, say, age 65 who will live to be, say, 85)

and

(the likelihood of living to 85).
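In symbols (following the notation of the article, where $t$ is the person’s age when you meet them and $t_{total}$ their eventual life span): $p(t_{total} | t) \propto p(t | t_{total}) \, p(t_{total})$.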

The second factor of this product is called the prior probability and can be said to reflect our expectations.   A good prediction about the person you just met would be the median of the distribution produced by the Bayesian calculations.  The article describes the study in good detail.  Researchers found that people’s judgments were very close to the optimal predictions calculated by a Bayesian model.  The article concludes:

Assessing the scope and depth of the correspondence between probabilities in the mind and those in the world presents a fundamental challenge for future work.
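Here is a minimal numerical sketch of the prediction just described (my own toy version, with an invented Gaussian prior over life spans rather than the actuarial data the study used):

```python
import numpy as np

# Toy Bayesian life-span predictor (invented prior; illustration only).
ages = np.arange(1, 121)                        # candidate total life spans
prior = np.exp(-0.5 * ((ages - 78) / 12.0)**2)  # made-up p(t_total)
prior /= prior.sum()

def predict_lifespan(t):
    """Posterior median of t_total, given the person is alive at age t."""
    # p(t | t_total): you are equally likely to encounter a person at any
    # moment of their life, and only if they live past age t.
    like = np.where(ages >= t, 1.0 / ages, 0.0)
    posterior = like * prior
    posterior /= posterior.sum()
    return ages[np.searchsorted(np.cumsum(posterior), 0.5)]

for t in (20, 65, 90):
    print(f"met at age {t} -> predicted total life span {predict_lifespan(t)}")
```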

The same kind of modeling has been applied to the very far questions of cosmology and to our innermost puzzles of visual perception.  Variations on these methods have been used to understand how the body maximizes the use of its expectations, or how it minimizes the discrepancy between actual features of the world and representations of those features. And it is from this connecting of information to probabilities that we get our current models of the universe.  (An example of the kind of fine tuning that has to be brought to the calculation “when [we] need to argue from a state of maximum ignorance” is described in the physics paper Getting the Measure of the Flatness Problem.)

One of the more striking observations in the paper I referenced last week, How the Mind Grows, from Joshua Tenenbaum, is that we organize the features of our world by building on very early, quickly formed abstractions that seem to be based on fundamental ‘similarity metrics’ – the way we determine the relevant properties that a class of objects will have in common.  In a talk on the same topic, Tenenbaum draws attention to the fact that important scientific insights are often the result of RE-organizing key features, perhaps by reinterpreting the similarity metric.  Mendeleev’s periodic table, for example, was an organizational change; he created it before the notion of an atomic number was developed and before there was any hint of modern quantum mechanical ideas.  (Both the paper and the talk can be accessed from his web page under the heading Representative readings and talks.)

In mathematics, organization, structure, class, and similarity are at the heart of the matter.   It’s no wonder it looks alive to me.