Categories

“The future of mathematics is more a spiritual discipline…”

I followed up on the work of Vladimir Voevodsky, and for anyone who might ask, “what’s actually going on in mathematics?”, Voevodsky’s work adds, perhaps, even more to the mystery. Not that I mind. The mystery emerges from the limitless depths (or heights) of thought that are revealed in mathematical ideas or objects, and it is this that continues to captivate me. And the grounding of these ideas, provided by Voevodsky’s work on foundations, reflects the intrinsic unity of these highly complex and purely abstract entities, suggesting a firm rootedness to these thoughts – an unexpected and enigmatic rootedness that calls for attention.

Voevodsky gave a general audience talk in March of 2014 at the Institute for Advanced Study in Princeton, where he is currently Professor in the School of Mathematics. In that talk he described the history of much of his work and how he became convinced that, to do the kind of mathematics he most wanted to do, he would need a reliable source to confirm the validity of the mathematical structures he builds.

As I was working on these ideas I was getting more and more uncertain about how to proceed. The mathematics of 2-theories is an example of precisely that kind of higher-dimensional mathematics that Kapranov and I had dreamed about in 1989. And I really enjoyed discovering new structures there that were not direct extensions of structures in lower “dimensions”.

But to do the work at the level of rigor and precision I felt was necessary would take an enormous amount of effort and would produce a text that would be very difficult to read. And who would ensure that I did not forget something and did not make a mistake, if even the mistakes in much more simple arguments take years to uncover?

I think it was at this moment that I largely stopped doing what is called “curiosity driven research” and started to think seriously about the future.

It soon became clear that the only real long-term solution to the problems that I encountered is to start using computers in the verification of mathematical reasoning.

Voevodsky expresses the same concern in a Quanta Magazine article by Kevin Hartnett.

“The world of mathematics is becoming very large, the complexity of mathematics is becoming very high, and there is a danger of an accumulation of mistakes,” Voevodsky said. Proofs rely on other proofs; if one contains a flaw, all others that rely on it will share the error.

So, at the heart of this discussion seems to be a quest for useful math-assistant computer programs. But both the problems mathematicians like Voevodsky face, and the computer assistant solutions he explored, highlight something intriguing about mathematics itself.

Hartnett does a nice job making the issues relevant to Voevodsky’s innovations accessible to any interested reader. He reviews Bertrand Russell’s type theory, a formalism created to circumvent the paradoxes of Cantor’s original set theory – as in the familiar paradox created by things like the set of all sets that don’t contain themselves (if that set contains itself then it doesn’t, and if it doesn’t then it does). This kind of problem is avoided in Russell’s type theory by making a formal distinction between collections of elements and collections of other collections. It turns out that within type theory, equivalences among sets are understood in much the same way as equivalences among spaces are understood in topology.

Spaces in topology are said to be homotopy equivalent if one can be deformed into the other without tearing either. Hartnett illustrates this using letters of the alphabet:

The letter P is of the same homotopy type as the letter O (the tail of the P can be collapsed to a point on the boundary of the letter’s upper circle), and both P and O are of the same homotopy type as the other letters of the alphabet that contain one hole — A, D, Q and R.

The same kind of equivalence can be established between a line and a point, or a disc and a point, or a coffee mug and a donut.

Given their structural resemblance, type theory handles the world of topology well. Things that are homotopy equivalent can also be said to be of the same homotopy type. But the value of the relationship between type theory and homotopic equivalences was greatly enhanced when Voevodsky learned Martin-Löf type theory (MLTT), a formal language developed by a logician for the task of checking proofs on a computer. Voevodsky saw that this computer language formalized type theory and, by virtue of type theory’s similarity to homotopy theory, it also formalized homotopy theory.
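
To give the flavor of this in a concrete form, here is a minimal sketch written in Lean (my own choice of proof assistant for illustration; the work described here used Coq). It shows the core Martin-Löf idea that a proof of equality between types is itself an object that can be used, which homotopy type theory reads as transporting a point along a path.

```lean
-- A minimal sketch, not Voevodsky's code: in Martin-Löf-style type theory a proof
-- h : A = B is a term like any other. `cast h` carries an inhabitant of A to one of B;
-- homotopy type theory reads h as a path between spaces and `cast` as transport along it.
example (A B : Type) (h : A = B) (a : A) : B := cast h a

-- Equality proofs compose and reverse, just as paths do.
example (A B C : Type) (h₁ : A = B) (h₂ : B = C) : A = C := h₁.trans h₂
example (A B : Type) (h : A = B) : B = A := h.symm
```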

Again, from Hartnett:

Voevodsky agrees that the connection is magical, though he sees the significance a little differently. To him, the real potential of type theory informed by homotopy theory is as a new foundation for mathematics that’s uniquely well-suited both to computerized verification and to studying higher-order relationships.

There is a website devoted to homotopy type theory where it is defined as follows:

Homotopy Type Theory refers to a new interpretation of Martin-Löf’s system of intensional, constructive type theory into abstract homotopy theory.  Propositional equality is interpreted as homotopy and type isomorphism as homotopy equivalence. Logical constructions in type theory then correspond to homotopy-invariant constructions on spaces, while theorems and even proofs in the logical system inherit a homotopical meaning.  As the natural logic of homotopy, constructive type theory is also related to higher category theory as it is used e.g. in the notion of a higher topos.

Voevodsky’s work is on a new foundation for mathematics and is also described there:

Univalent Foundations of Mathematics is Vladimir Voevodsky’s new program for a comprehensive, computational foundation for mathematics based on the homotopical interpretation of type theory. The type theoretic univalence axiom relates propositional equality on the universe with homotopy equivalence of small types. The program is currently being implemented with the help of the automated proof assistant Coq.  The Univalent Foundations program is closely tied to homotopy type theory and is being pursued in parallel by many of the same researchers.
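
Stated a little more formally (this is the standard formulation from the homotopy type theory literature, not a quotation from the site), the univalence axiom says that for types A and B in a universe, the canonical map from equalities to equivalences is itself an equivalence:

$$(A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B)$$

In particular, an equivalence between two types can be converted into a propositional equality between them, which is what allows equivalent mathematical structures to be treated as genuinely equal inside the formal system.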

In one of his talks, Voevodsky suggested that mathematics as we know it studies structures on homotopy types. And he describes a mathematics so rich in abstract complexity, “it just doesn’t fit in our heads very well. It somehow requires abilities that we don’t possess.”  Computer assistance would be expected to facilitate access to these high levels of complexity and abstraction.

But mathematics is, as I see it, the abstract expression of human understanding – the possibilities for thought, for conceptual relationships. So what is it that’s keeping us from being able to manage this level of abstraction?   Voevodsky seems to agree that it is comprehension that gives rise to mathematics. He’s quoted in a New Scientist article by Jacob Aron:

If humans do not understand a proof, then it doesn’t count as maths, says Voevodsky. “The future of mathematics is more a spiritual discipline than an applied art. One of the important functions of mathematics is the development of the human mind.”

While Aron seems to suggest that computer companions to mathematicians could potentially know more than the mathematicians they assist, this view is without substance. It is only when the mathematician’s eye discerns something that we call it mathematics.

Mike Shulman has a few posts related to homotopy type theory on The n-Category Café site, beginning with one entitled Homotopy Type Theory, I, followed by II, III, and IV.  There’s also one from June 2015 – What’s so HoTT about Formalization?
And here’s a link to Voevodsky’s Univalent Foundations.

Finding hidden structure by way of computers

An article in a recent issue of New Scientist highlights the potential partnership between computers and mathematicians. It begins with an account of the use of computers in a proof that would do little, it seems, to provide greater understanding, or greater insight into the content of the idea the proof explores. The computer program merely exhausts the potential counterexamples to a theorem, thereby proving it true (a task far too large to attack by hand). Reviewing this kind of proof, however, requires checking computer code, and this is something that referees in mathematics are not likely to want to do. And so efforts have been made to make the checking easier by employing something called a ‘proof assistant.’ The article doesn’t do much to clarify how the ‘proof assistant’ works, and says just a little about how it makes things easier. But a question that comes to mind quickly for me is whether such a proof could reveal new bridges between different sub-disciplines of mathematics, the way the traditional effort has been known to do.
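
To make “exhausting the counterexamples” concrete, here is a minimal sketch of my own (in Python, and not the proof discussed in the article): a finite claim can be “proved” by checking every case, for instance confirming Goldbach’s conjecture for all even numbers up to a bound.

```python
# A toy "proof by exhaustion": check that every even number in a finite range
# is a sum of two primes. This is an illustrative sketch, not the computer-assisted
# proof described in the New Scientist article.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def has_counterexample(limit: int) -> bool:
    # Return True if some even number 4..limit is NOT a sum of two primes.
    primes = [p for p in range(2, limit) if is_prime(p)]
    prime_set = set(primes)
    for n in range(4, limit + 1, 2):
        if not any((n - p) in prime_set for p in primes if p <= n // 2):
            return True  # a counterexample: the claim fails at n
    return False

# Exhausting the candidate counterexamples up to 10,000 "proves" the claim on that range.
print(has_counterexample(10_000))  # expected output: False
```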

A discussion of the work of prominent mathematician Vladimir Voevodsky follows.  This work takes us back to foundational questions and clearly addresses those bridges. While mathematics is grounded in set theory, set theory can permit more than one definition of the same mathematical object. Voevodsky decided to address the problem that this creates for computer-generated proofs.

…if two computer proofs use different definitions for the same thing, they will be incompatible. “We cannot compare the results, because at the core they are based on two different things,” says Voevodsky.

Voevodsky swaps sets for types, described as “a stricter way of defining mathematical objects in which every concept has exactly one definition.”

“This lets mathematicians formulate their ideas with a proof assistant directly, rather than having to translate them later. In 2013 Voevodsky and colleagues published a book explaining the principles behind the new foundations. In a reversal of the norm, they wrote the book with a proof assistant and then ‘unformalized’ it to produce something more human-friendly.”

There’s a very well-written description of the history and recent successes of Voevodsky’s work in a Quanta Magazine piece from May 2015. Voevodsky’s new formalism is called the univalent foundations of mathematics. The Quanta article describes in reasonable detail how these ideas grew from existing formalisms. But what I find most interesting is the surprising consistency among particular ideas from computer science, logic, and mathematics.

This consistency and convenience reflects something deeper about the program, said Daniel Grayson, an emeritus professor of mathematics at the University of Illinois at Urbana-Champaign. The strength of univalent foundations lies in the fact that it taps into a previously hidden structure in mathematics.

“What’s appealing and different about [univalent foundations], especially if you start viewing [it] as replacing set theory,” he said, “is that it appears that ideas from topology come into the very foundation of mathematics.”

One of the youngest sub-disciplines finds its way into the foundation, a very appealing and suggestive idea. Finding hidden structure is what always looks magical about mathematics. And it is, fundamentally, what human cognition is all about.

There’s a nice report on one of Voevodsky’s talks in a Scientific American Guest Blog from 2013 by Julie Rehmeyer that includes a video of the talk itself.

This topic requires a closer look, which I expect to do with a follow-up to this post.

Thinking without a brain

Can the presence of intelligent behavior in other creatures (creatures that don’t have a nervous system comparable to ours) tell us something about what ideas are, or how thought fits into nature’s actions? It has always seemed to us humans that our ideas are one of the fruits of what we call our ‘intelligence.’  And the evolutionary history of this intelligence is frequently traced back through the archeological records of our first use of things like tools, ornamentation, or planning.  It is often thought that our intelligence is just some twist of nature, something that just happened. But once set in motion, it strengthened our survival prospects, and gave us an odd kind of control of our comfort and well-being. We tend to believe that ‘thoughts’ are a private human experience, not easily lined up with nature’s actions. Thoughts build human cultures, and one of the high points of thought is, of course, mathematics. Remember, it was the scarecrow’s reciting the Pythagorean Theorem that told us he had a brain.  Even though he got it wrong.

When an animal is able to learn something and apply that learning to a new circumstance we generally concede that this is also intelligent behavior. A good deal of research has been done on animals like chimpanzees, dolphins, and apes, where the ability to learn symbolic representations or sophisticated communication skills mark intelligent behavior. But these observations don’t significantly change our sense that intelligence is some quirk of the brain, and only in humans has this quirk gone through the development that gives birth to ideas and culture, and puts us in our unique evolutionary place.

But when intelligent behavior is observed in a bumble bee, for example, we have to think a little more. The bumble bee’s evolution isn’t particularly related to our own, and their brains are not like ours. More than one million interconnected neurons occupy less than one cubic millimeter of brain tissue in the bee. The density of neurons is about ten times greater than in a mammalian cerebral cortex. Research published in Nature (in 2001) is described in a Scientific American piece in 2008 by Christof Koch.

The abstract of the Nature paper includes this:

…honeybees can interpolate visual information, exhibit associative recall, categorize visual information, and learn contextual information. Here we show that honeybees can form ‘sameness’ and ‘difference’ concepts. They learn to solve ‘delayed matching-to-sample’ tasks, in which they are required to respond to a matching stimulus, and ‘delayed non-matching-to-sample’ tasks, in which they are required to respond to a different stimulus; they can also transfer the learned rules to new stimuli of the same or a different sensory modality. Thus, not only can bees learn specific objects and their physical parameters, but they can also master abstract inter-relationships, such as sameness and difference.

And Koch makes this observation:

Given all of this ability, why does almost everybody instinctively reject the idea that bees or other insects might be conscious? The trouble is that bees are so different from us and our ilk that our insights fail us.

In 2015, Koch coauthored a paper with Giulio Tononi, the focus of which was consciousness. There they argue:

Indeed, as long as one starts from the brain and asks how it could possibly give rise to experience—in effect trying to ‘distill’ mind out of matter, the problem may be not only hard, but almost impossible to solve. But things may be less hard if one takes the opposite approach: start from consciousness itself, by identifying its essential properties, and then ask what kinds of physical mechanisms could possibly account for them.  (emphasis added)

Potential clues to different kinds of physical mechanisms are described in a very recent Scientific American article that reports on the successful unraveling of the octopus genome.

Among the biggest surprises contained within the genome—eliciting exclamation point–ridden e-mails from cephalopod researchers—is that octopuses possess a large group of familiar genes that are involved in developing a complex neural network and have been found to be enriched in other animals, such as mammals, with substantial processing power. Known as protocadherin genes, they “were previously thought to be expanded only in vertebrates,” says Clifton Ragsdale, an associate professor of neurobiology at the University of Chicago and a co-author of the new paper. Such genes join the list of independently evolved features we share with octopuses—including camera-type eyes (with a lens, iris and retina), closed circulatory systems and large brains.

Having followed such a vastly different evolutionary path to intelligence, however, the octopus nervous system is an especially rich subject for study. “For neurobiologists, it’s intriguing to understand how a completely distinct group has developed big, complex brains,” says Joshua Rosenthal of the University of Puerto Rico’s Institute of Neurobiology. “Now with this paper, we can better understand the molecular underpinnings.”

In 2012, Scientific American reported on the signing of the Cambridge Declaration on Consciousness.

“The weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness,” the scientists wrote. “Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

And from the Declaration:

Furthermore, neural circuits supporting behavioral/electrophysiological states of attentiveness, sleep and decision-making appear to have arisen in evolution as early as the invertebrate radiation, being evident in insects and cephalopod mollusks (e.g., octopus).

Specific mention of the octopus was based on the collection of research that documented their intentional action, their use of tools, and their sophisticated spatial navigation and memory. Christof Koch was one of the presenters of the declaration and was quoted as saying, “The challenge that remains is to understand how the whispering of nerve cells, interconnected by thousands of gossamer threads (their axons), give rise to any one conscious sensation.”

My friend and former agent, Ann Downer, has a new book due out in September with the provocative title, Smart and Spineless: Exploring Invertebrate Intelligence. It was written for young adults and is a wonderful way to correct an old perspective for growing thinkers.

These many insights suggest that what we call intelligence is not something that happens to some living things, but is, perhaps, somehow intrinsic to life and manifest in many forms. Koch suggests that we begin a study of consciousness by identifying its essential properties, and mathematics can likely help with this. It does so already with Giulio Tononi’s Integrated Information Theory of Consciousness.  But mathematics is a grand scale investigation of pure thought – of the abstract relationships that are often related to language, learning, and spatial navigation (to name just a few). As a fully abstract investigation of such things, it could help direct the search for the essential properties of awareness and cognition. And the chance that we will find the ubiquitous presence of such properties in the world around us may breathe new life into how we understand nature itself.

Physics, Plato and epistemology

In a recent Scientific American article, the late physicist Victor Stenger and the authors James A. Lindsay and Peter Boghossian argue that, while not acknowledged as such, some interpretations of quantum mechanics are implicitly platonic (with a lower-case p).

We will use platonism with a lower-case “p” here to refer to the belief that the objects within the models of theoretical physics constitute elements of reality, but these models are not based on pure thought, which is Platonism with a capital “P,” but fashioned to describe and predict observations.

The authors suggest that while early 20th century physicists like Einstein, Bohr, Schrödinger, Heisenberg, and Born considered the philosophical implications of their discoveries, after World War II, the next generation of scientists judged this effort unproductive. Most physicists, they say, now agree that observation is the only reliable source of knowledge, and that only testable ideas are useful (hence the falling out of favor of string theory). But the authors also argue that this younger generation of physicists “went ahead and adopted philosophical doctrines, or at least spoke in philosophical terms, without admitting it to themselves.” They justify this, in part, with a reference to physicist David Tong who claims in a 2012 Scientific American article that the particles to which experiments refer are illusions.

Physicists routinely teach that the building blocks of nature are discrete particles such as the electron or quark. That is a lie. The building blocks of our theories are not particles but fields: continuous, fluidlike objects spread throughout space.

“This view is explicitly philosophical,” the authors say, “and accepting it uncritically makes for bad philosophical thinking.”

I enjoyed this twist on the partnership of the observable with the abstract – namely their using the mathematics that captures the data to ‘reveal’ a platonic view.  It’s not clear to me that this is a fair characterization of Tong’s observation, but it is an interesting one. The authors do distinguish between realists (those who find the mathematical objects to be representative of reality) and instrumentalists (those who claim that reality just constrains what may be observed, but need not correspond to the mathematical models used), and their critique is mostly aimed at the realists. But the article is largely responding to recent criticisms of philosophy heard from physicists like Lawrence Krauss and Neil deGrasse Tyson. The authors suggest that many physicists have chosen a philosophical perspective and that there are problems associated with their not acknowledging this.

The direct, platonic, correspondence of physical theories to the nature of reality, as Weinberg, Tong and possibly Krauss have done, is fraught with problems: First, theories are notoriously temporary. We can never know if quantum field theory will not someday be replaced with another more powerful model that makes no mention of fields (or particles, for that matter). Second, as with all physical theories, quantum field theory is a model—a human contrivance. We test our models to find out if they work; but we can never be sure, even for highly predictive models like quantum electrodynamics, to what degree they correspond to “reality.” To claim they do is metaphysics.

I understand the admonition, but here’s the part in which I am most interested:

Many physicists have uncritically adopted platonic realism as their personal interpretation of the meaning of physics. This is not inconsequential because it associates a reality that lies beyond the senses with the cognitive tools humans use to describe observations.

In order to test their models all physicists assume that the elements of these models correspond in some way to reality. But those models are compared with the data that flow from particle detectors on the floors of accelerator labs or at the foci of telescopes (photons are particles, too). It is data—not theory—that decides if a particular model corresponds in some way to reality. If the model fails to fit the data, then it certainly has no connection with reality. If it fits the data, then it likely has some connection. But what is that connection? Models are squiggles on the whiteboards in the theory section of the physics building. Those squiggles are easily erased; the data can’t be.

(emphasis added)

What is the relationship between those squiggles and reality, or even between the data and the mathematics that turns the data into the signature of an event?  These are questions filled with meaning.  And one of the most important points made is this one:

All of the prominent critics of philosophy whose views we have discussed think very deeply about the source of human knowledge.

Physics is as much concerned with how knowledge is acquired as it is about the nature of physical reality. The senses are extended with the use of detectors – mechanical (and sometimes very large) sensory mechanisms that we’ve learned to build. And this sensory data can only be understood when run through analysis programs that are grounded in mathematics, and whose meaning is expressed mathematically. If the detectors extend the senses, perhaps the mathematics extends cognition.  The fact that the data can now significantly challenge our conceptual abilities should be a fact that contributes to both epistemological discussions and physics discussions. And epistemological discussions inevitably lead to questions about cognition.

Certainly one cannot have a productive discussion about the nature of reality without the data that physics provides. And I agree that this limitation does not apply to other areas of philosophy like ethics, aesthetics, and politics. But epistemology is something to which the sciences can make a contribution, and this may very well spring from philosophers of science.

Here’s another point well taken:

…those who have not adopted platonism outright still apply epistemological thinking in their pronouncements when they assert that observation is our only source of knowledge.

Mathematics consistently raises the question, “what does it mean to know something?” A teacher of mine once lamented the fact that we can’t allow children to rediscover mathematics because there isn’t enough time, because now so much is known. What is it that’s known? The partnership, in physics, of mathematics with observables that lie beyond the range of the senses should fuel epistemological discussions, and not only ones inspired by mathematics and physics, but ones that could also inform them.

There is some interest among physicists about current research in cognitive science. Cosmologist Max Tegmark, for example, has taken an interest in Giulio Tononi’s integrated information theory of consciousness. In a recent TED talk, I believe Tegmark proposed that the only difference between a structure that exists mathematically and one that also exists physically is how the information is instantiated. This is consistent with something that David Deutsch once said – that the brain faithfully embodies the mathematical relationships and causal structure of things like quasars, and does so more and more precisely over time.  He made the following observation about brains and quasars:

Physical objects that are as unlike each other as they could possibly be can, nevertheless, embody the same mathematical and causal structure and do it more and more so over time.

These are thoughts that touch equally on epistemology and the nature of reality.

Computations can be very natural

A recent post on Mind Hacks challenged the perspective outlined in a NY Times op-ed by psychologist Gary Marcus with the title Face It, Your Brain Is a Computer.  The title of Marcus’ piece may be misleading. The brain/computer analogy that he proposes is more a strategy than a theory. But the rejection of brain/computer analogies seems almost reflexive (as one can see from the comments on the Mind Hacks post).  They can be quickly judged wrong, devoid of the vitality and creativity of life, and misguidedly materialistic.  Information processing ideas, however, are increasingly present in the physical and life sciences and do not have the character of reductionist thinking.

Marcus describes his strategy as having two steps:

finding some way to connect the scientific language of neurons and the scientific language of computational primitives (which would be comparable in computer science to connecting the physics of electrons and the workings of microprocessors);

and finding some way to connect the scientific language of computational primitives and that of human behavior (which would be comparable to understanding how computer programs are built out of more basic microprocessor instructions).

The computational primitives of electronic systems include their instructions (like add, branch, plot) and actions related to the collection and storage of data (like fetch and store or compare and swap). An example of the application of these ideas to an analysis of brain function can be seen in some of the work of L. Andrew Coward whose recent papers can be found here.  This kind of research is motivated, in part, by the needs of artificial intelligence designers. Conference proceedings from the 5th Annual International Conference on Biologically Inspired Cognitive Architectures make a number of related papers available here.
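
To make the notion of computational primitives concrete, here is a hypothetical toy of my own (not Coward’s model and not Marcus’s proposal): a tiny machine whose only operations are add, store, and a conditional branch, which nonetheless composes those primitives into a recognizable behavior.

```python
# A toy register machine with a handful of primitives (store/add/branch).
# Purely illustrative: it shows how simple primitives compose into behavior,
# not how neurons actually implement anything.

def run(program, memory):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "store":              # memory[a] <- constant c
            a, c = args
            memory[a] = c
        elif op == "add":              # memory[a] <- memory[a] + memory[b]
            a, b = args
            memory[a] += memory[b]
        elif op == "branch_if_less":   # jump to target if memory[a] < memory[b]
            a, b, target = args
            if memory[a] < memory[b]:
                pc = target
                continue
        pc += 1
    return memory

# "Behavior": keep adding a step until a threshold is reached (a crude count-up loop).
mem = {"total": 0, "step": 3, "threshold": 20}
prog = [
    ("add", "total", "step"),                     # 0
    ("branch_if_less", "total", "threshold", 0),  # 1: loop back while total < threshold
]
print(run(prog, mem))  # {'total': 21, 'step': 3, 'threshold': 20}
```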

The critique of this perspective that is described on Mind Hacks points to an interesting question:

The idea that the mind and brain can be described in terms of information processing is the main contention of cognitive science but this raises a key but little asked question – is the brain a computer or is computation just a convenient way of describing its function?

Here’s an example if the distinction isn’t clear. If you throw a stone you can describe its trajectory using calculus. Here we could ask a similar question: is the stone ‘computing’ the answer to a calculus equation that describes its flight, or is calculus just a convenient way of describing its trajectory?
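
The stone example can be made concrete with a minimal sketch of my own (not from the Mind Hacks post): the closed-form answer from calculus and a step-by-step numerical “computation” of the same flight agree, which is exactly why it is hard to say whether the stone is computing anything.

```python
# Describing vs. "computing" a thrown stone's flight: the closed-form solution from
# calculus and a step-by-step numerical simulation give (nearly) the same answer.
# Minimal sketch: no air resistance, SI units, parameters chosen arbitrarily.
import math

g = 9.81                 # gravitational acceleration, m/s^2
v0, angle = 20.0, 0.9    # launch speed (m/s) and launch angle (radians)
vy = v0 * math.sin(angle)

def height_closed_form(t):
    # y(t) = v_y * t - (1/2) * g * t^2, straight from the calculus description
    return vy * t - 0.5 * g * t * t

def height_simulated(t, dt=1e-4):
    # "compute" the same thing by stepping the equation of motion forward in time
    y, v = 0.0, vy
    for _ in range(int(t / dt)):
        y += v * dt
        v -= g * dt
    return y

t = 1.5
print(height_closed_form(t), height_simulated(t))  # the two values agree closely
```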

After a few other objections to the brain/computer analogy, the point is made that “the concept of computation is a tool.”  But there is no mention of the growing perspective which sees ‘information’ as the fundamental aspect of everything, despite the fact that this perspective introduces a number of ideas relevant to understanding the brain, on multiple levels.

Physicist David Deutsch, for example, is currently involved in what he has named constructor theory. I wrote about this work in a Scientific American guest blog.  Constructor theory is meant to get at what Deutsch calls the “substrate independence of information,” which necessarily involves a more fundamental level of physics than particles, waves and space-time.  For Deutsch, information is physical, instantiated in various forms, and transformed by various processes.  And he suspects that this ‘more fundamental level’ may be shared by all physical systems.

Leibniz disassociated ‘substance’ from ‘material’ and reasoned that the world was not fundamentally built from material. He argued that fundamentals must be indivisible, and material is not.  In The Lightness of Being, physicist Frank Wilczek describes the debate about fundamentals in this way:

Philosophical realists claim that matter is primary, brains (minds) are made from matter, and concepts emerge from brains. Idealists claim that concepts are primary, minds are conceptual machines and conceptual machines create matter.

It would seem from this that whichever direction one chooses, what has been learned from the development of computer hardware and software, and the ideas associated with what we call ‘computation,’ are helping to direct and inform research. And while the realist view looks reductionist, the idealist view is certainly not.

In a Closer to Truth interview,  Gregory Chaitin responds to the question, “is information fundamental?” He admits that the inspiration for these ideas may be the computer, but a theory, itself, can be thought of as a computation. The theory is the input and the physical universe is the output. A theory is good when it is a compression, when what you put into the computer is simpler or smaller than what you get out.  It’s then that you understand, and that understanding can be mathematical or physical.
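
Chaitin’s compression picture can be illustrated in a few lines (my own sketch, with a general-purpose compressor standing in, very crudely, for “the shortest program”): data generated by a simple rule compresses far better than data with no rule behind it, and the short description is the understanding.

```python
# Chaitin's picture in miniature: a good theory is a compression of the data.
# zlib is a crude stand-in for "shortest program"; the point is only the contrast.
import random
import zlib

random.seed(0)
lawful = bytes(i % 7 for i in range(10_000))                   # data generated by a simple rule
lawless = bytes(random.randrange(256) for _ in range(10_000))  # data with no rule behind it

print(len(zlib.compress(lawful)))   # small: a short "theory" reproduces the data
print(len(zlib.compress(lawless)))  # roughly as large as the data itself: nothing to understand
```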

Virginia Chaitin has written on what she calls interdisciplinarity, where “the original frameworks, research methods and epistemic goals of individual disciplines are combined and recreated yielding novel and unexpected prospects for knowledge and understanding.”  This kind of paradigm-shifting interdisciplinary effort involves adopting a new conceptual framework, borrowing the very way that understanding is defined within a particular discipline, as well as the way it is explored and the way it is expressed. The results, as she says, are the “migrations of entire conceptual neighborhoods that create a new vocabulary.”  Perhaps the strategy that Marcus proposes can be seen in this light.

The growing interest in an information-driven view of the world is evident in a conference being organized in the Netherlands (October 7–9). It has been named ‘The Information Universe’ Conference and its welcome page says the following:

The main ambition of this conference is to explore the question “What is the role of information in the physics of our Universe?“. This intellectual pursuit may have a key role in improving our understanding of the Universe at a time when we “build technology to acquire and manage Big Data“, “discover highly organized information systems in nature“ and “attempt to solve outstanding issues on the role of information in physics“. The conference intends to address the “in vivo“ (role of information in nature) and “in vitro“ (theory and models) aspects of the Information Universe.

The discussions about the role of information will include the views and thoughts of several disciplines: astronomy, physics, computer science, mathematics, life sciences, quantum computing, and neuroscience. Different scientific communities hold various and sometimes distinct formulations of the role of information in the Universe indicating we still lack understanding of its intrinsic nature. During this conference we will try to identify the right questions, which may lead us towards an answer.

Ideas related to information and information processing are enjoying wide application. And the objections to using computational models to understand the brain are often grounded in the kind of reductionist view that, I would argue, is outdated and fading from current research efforts. It betrays a mistaken view of what mathematics can offer to theories in physics, cognitive science, consciousness studies, evolution and epistemology.  The current inclination to broaden the meaning of information, and associated processes, has the potential to shed new light on what the brain might actually be doing, or what its place in nature might be.

M.C. Escher’s visual inquiries

The Amazing World of MC Escher is a new exhibit at the Scottish National Gallery of Modern Art in Edinburgh. It will be there from June 27 to September 27. The exhibit prompted a nice piece on Escher in The Guardian. Author Steven Poole mentions, but does not much explore, the relationship between Escher’s work and the work of mathematicians. But just a little bit of research on the topic suggests that there may be quite a lot to say about this. An article on Escher, in a 2010 issue of Notices of the American Mathematical Society, by mathematician Doris Schattschneider, gives a fairly thorough account of the extent to which some of Escher’s work qualifies as mathematical research.

Many prints provide visual metaphors for abstract mathematical concepts; in particular, Escher was obsessed with the depiction of infinity. His work has sparked investigations by scientists and mathematicians. But most surprising of all, for several years Escher carried out his own mathematical research, some of which anticipated later discoveries by mathematicians.

Escher grew up in Holland in the early 20th century. His father was a civil engineer and his four older brothers eventually became scientists. Yet, Schattschneider reports, Escher said of himself that he was an “extremely poor” student of arithmetic and algebra, having “great difficulty with the abstractions of figures and letters.” He was a little better at geometry, but it was not a subject in which he excelled. He did, however, excel as a draftsman. And it was drawing that became both his method of inquiry and his expression of discovery.

The questions he asked seemed to be about the essential qualities or properties of images – how a flat surface is made three dimensional, mistakes of perspective, impossible perspectives, filling the plane or tiling. Some aspect of his exploration involved perception itself, but his inquiry is also seen as one into the geometries of space. It was this visual exploration that brought him to the heart of some mathematical questions. And this is what I find most interesting, that it would all be done in the seeing and the drawing.  The mathematics that Escher explored was known to him only in action, and expressed only in images.

Escher chose a career in graphic art over architecture and was successful.  He illustrated books, designed tapestry, and painted murals, while being, primarily, a printmaker. His subjects were often landscapes, buildings, or room interiors, within which he might explore the spatial effects of different, sometimes conflicting, vantage points.  And there are early examples of his interest in the effect of filling the plane with interlocking, representative shapes.

But in 1936 he visited the Alhambra and, as Poole says in The Guardian, “Escher really became Escher.”

That year he went to the Alhambra Palace in Granada, Spain, and carefully copied some of its geometric tiling. His work gradually became less observational and more formally inventive. As Escher later explained, it also helped that the architecture and landscape of his successive homes in Switzerland, Belgium and Netherlands were so boring: he “felt compelled to withdraw from the more or less direct and true-to-life illustrating of my surroundings”, embracing what he called his “inner visions.”

Escher became engaged in a series of thoughtful questions about shapes, tiling, and filling the plane, that he methodically explored entirely within the visual images he produced. He produced illusions, impossible figures, and a variety of regular divisions of the Euclidean plane into potentially infinite groupings of creatures like fish, reptiles, or birds.  In his experience, fundamental human attributes (the perception of depth and space), the perspective achieved in drawing, and non-visual ideas like infinity, become very naturally bridged.

An exhibit of Escher’s work was viewed by mathematicians attending the International Congress of Mathematicians in 1954 in Amsterdam. The exhibit was arranged by mathematician N.G. de Bruijn. Schattschneider reproduced the catalog note from de Bruijn:

Probably mathematicians will not only be interested in the geometrical motifs; the same playfulness which constantly appears in mathematics in general and which, to a great many mathematicians is the peculiar charm of their subject, will be a more important element.  (emphasis added)

The exhibition led to a correspondence between Escher and geometer H.S.M. Coxeter. Escher became preoccupied with Coxeter’s illustration of a hyperbolic plane, and while he remained blind to the mathematical content of the illustration (which Coxeter provided), he nonetheless succeeded in finding his own direct understanding. Mathematician Thomas Wieting presents a nice account of this exchange in a 2010 issue of Reed Magazine.  And Wieting includes part of a letter Escher wrote to his son that expresses both his determination and his loneliness.

My great enthusiasm for this sort of picture and my tenacity in pursuing the study will perhaps lead to a satisfactory solution in the end. Although Coxeter could help me by saying just one word, I prefer to find it myself for the time being, also because I am so often at cross purposes with those theoretical mathematicians, on a variety of points. In addition, it seems to be very difficult for Coxeter to write intelligibly for a layman. Finally, no matter how difficult it is, I feel all the more satisfaction from solving a problem like this in my own bumbling fashion. But the sad and frustrating fact remains that these days I’m starting to speak a language which is understood by very few people. It makes me feel increasingly lonely. After all, I no longer belong anywhere. The mathematicians may be friendly and interested and give me a fatherly pat on the back, but in the end I am only a bungler to them. “Artistic” people mainly become irritated.

Wieting continues,

Escher’s enthusiasm and tenacity did indeed prove sufficient. Somehow, during the following months, he taught himself, in terms of the straightedge and the compass, to construct not only Coxeter’s figure but at least one variation of it as well. In March 1959, he completed the second of the woodcuts in his Circle Limit Series.

It’s worth noting that, in his own articles, Coxeter provided mathematical analyses of Escher’s work and pointed out that Escher had anticipated some of his own discoveries.

Wieting concludes:

Seeking a new visual logic by which to “capture infinity,” Escher stepped, without foreknowledge, from the Euclidean plane to the hyperbolic plane. Of the former, he was the master; in the latter, a novice. Nevertheless, his acquired insights yielded two among his most interesting works: CLIII, The Miraculous Draught of Fishes, and CLIV, Angels and Devils.

Wieting also gives us this, Escher’s own description of the success of CLIII.  I placed emphasis on his reference to this “round world,” and “the emptiness around it,” because it reveals something of the depth of the meaningfulness born of this math-art-philosophy piece.

In the colored woodcut Circle Limit III the shortcomings of Circle Limit I are largely eliminated. We now have none but “through traffic” series, and all the fish belonging to one series have the same color and swim after each other head to tail along a circular route from edge to edge. The nearer they get to the center the larger they become. Four colors are needed so that each row can be in complete contrast to its surroundings. As all these strings of fish shoot up like rockets from the infinite distance at right angles from the boundary and fall back again whence they came, not one single component reaches the edge. For beyond that there is “absolute nothingness.” And yet this round world cannot exist without the emptiness around it, not simply because “within” presupposes “without,” but also because it is out there in the “nothingness” that the center points of the arcs that go to build up the framework are fixed with such geometric exactitude.
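
Escher’s remark about the “nothingness” outside the circle is geometrically exact, and it can be checked directly. The sketch below is my own, using the standard Poincaré disk construction rather than anything from Wieting’s article: the arcs of a Circle Limit-style figure meet the boundary circle at right angles, and the center of any such arc necessarily lies outside the disk.

```python
# The arcs in a Circle Limit-style figure are circles orthogonal to the unit circle.
# Given two points inside the disk, find the center and radius of the arc through them;
# orthogonality forces |center|^2 = radius^2 + 1, so the center lies outside the disk,
# "in the nothingness," just as Escher says. Minimal sketch with NumPy.
import numpy as np

def geodesic_arc(p, q):
    """Center and radius of the circle through p and q that meets the unit circle at right angles."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    # Orthogonality plus passing through p and q gives two linear equations for the center c:
    #   2 (c . p) = |p|^2 + 1,   2 (c . q) = |q|^2 + 1
    A = 2.0 * np.array([p, q])
    b = np.array([p @ p + 1.0, q @ q + 1.0])
    c = np.linalg.solve(A, b)       # fails only if p, q and the origin are collinear (a diameter)
    r = np.sqrt(c @ c - 1.0)
    return c, r

center, radius = geodesic_arc((0.5, 0.1), (-0.2, 0.6))
print(center, radius, np.linalg.norm(center) > 1.0)  # the last value is True: the center is outside
```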

A beautiful gallery of prints can be found here.

Collective behavior: flocks, magnets, neurons and mathematics

The analysis of collective behavior is quickly becoming cross-disciplinary.  I wrote a few years ago about a study that analyzed the coordination of starling flocks. That post was based on the work of Thierry Mora and William Bialek, presented in their paper Are Biological Systems Poised at Criticality? The paper was published in the Journal of Statistical Physics in 2011.

The mathematics of critical transitions describes systems that reach a state from which they are almost instantly transformed. Such a transformation could be liquid turning to gas or metals becoming magnetized. The authors of this paper found that the birds in a flock were connected in such a way that a flock turning in unison could be described, mathematically, as a phase transition.

In the past few years, new, larger scale experiments have made it possible to construct statistical mechanics models of biological systems directly from real data. We review the surprising successes of this “inverse” approach, using examples from families of proteins, networks of neurons, and flocks of birds. Remarkably, in all these cases the models that emerge from the data are poised at a very special point in their parameter space–a critical point. This suggests there may be some deeper theoretical principle behind the behavior of these diverse systems.

I hope that the last point, to which I added emphasis, will grow in relevance.
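
A minimal sketch of what sitting at a critical point means mathematically (my own toy example, not the models fit in the paper): in the simplest mean-field description of a magnet, the magnetization m satisfies m = tanh(m/T), and a nonzero solution appears only when the temperature T drops below the critical value T = 1.

```python
# Mean-field magnetization: solve m = tanh(m / T) by fixed-point iteration.
# Above the critical temperature T = 1 the only solution is m = 0; below it,
# a nonzero magnetization appears, the simplest picture of a phase transition.
import math

def magnetization(T, iterations=10_000):
    m = 0.5  # start from a small nonzero guess
    for _ in range(iterations):
        m = math.tanh(m / T)
    return m

for T in (1.5, 1.1, 1.0, 0.9, 0.5):
    print(f"T = {T}  m = {magnetization(T):.4f}")
# m stays near 0 for T >= 1 and becomes finite once T < 1
```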

Also in 2011, Mora and Bialek were among the seven coauthors of the paper: Statistical Mechanics for Natural Flocks of Birds. This study focused on the alignment of flight direction in a flock.

Rather than affecting every other flock member, orientation changes caused only a bird’s seven closest neighbors to alter their flight. That number stayed consistent regardless of flock density, making the equations “topological” rather than critical in nature.

“The orientations are not at a critical point,” said Giardina. Even without criticality, however, changes rippled quickly through flocks — from one starling to seven neighbors, each of which affected seven more neighbors, and so on.

The closest statistical fit for this behavior comes from the physics of magnetism, and describes how the electron spins of particles align with their neighbors as metals become magnetized.

The paper’s abstract tells us that these models are mathematically equivalent to the quantum-mechanical Heisenberg model of magnetism.

An interesting observation here is that the interaction among birds is defined by a number of neighboring birds, not by the number of birds in a neighboring area. In other words, if it was a metric distance that governed their interaction, then when the flock was more dense, the number of birds neighboring an individual bird would increase, and so the number of birds interacting would also increase. But this seems not to be the case. The number of birds interacting is the quantity that stays constant. There is a very nice description of how this observation came about, and what it might mean, here. From the paper:

The collective behaviour of large groups of animals is an imposing natural phenomenon, very hard to cast into a systematic theory [1]. Physicists have long hoped that such collective behaviours in biological systems could be understood in the same way as we understand collective behaviour in physics, where statistical mechanics provides a bridge between microscopic rules and macroscopic phenomena [2, 3]. A natural test case for this approach is the emergence of order in a flock of birds: out of a network of distributed interactions among the individuals, the entire flock spontaneously chooses a unique direction in which to fly [4], much as local interactions among individual spins in a ferromagnet lead to a spontaneous magnetization of the system as a whole [5]. Despite detailed development of these ideas [6–9], there still is a gap between theory and experiment. Here we show how to bridge this gap, by constructing a maximum entropy model [10] based on field data of large flocks of starlings [11–13]. We use this framework to show that the effective interactions among birds are local, and that the number of interacting neighbors is independent of flock density, confirming that interactions are ruled by topological rather than metric distance.
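
The topological-versus-metric distinction is easy to see in a toy computation (my own sketch, not the authors’ analysis): pack the same birds into a smaller space and the number of neighbors within a fixed metric radius grows, while the number of k nearest neighbors is k no matter what the density is.

```python
# Metric vs. topological interaction ranges in a toy "flock" of random points.
# With a metric rule, packing the flock more densely changes how many birds interact;
# with a topological rule (k nearest neighbours), the number stays fixed at k.
import numpy as np

rng = np.random.default_rng(1)

def mean_metric_neighbours(positions, radius):
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    within = (d < radius).sum(axis=1) - 1   # exclude the bird itself
    return within.mean()

def topological_neighbours(k=7):
    return k  # by definition, independent of density

for box in (100.0, 50.0):                   # shrinking the box packs the same birds more densely
    birds = rng.uniform(0, box, size=(500, 3))
    print(f"box {box:5.1f}: metric neighbours ~ {mean_metric_neighbours(birds, radius=10.0):5.1f}, "
          f"topological neighbours = {topological_neighbours()}")
```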

In a synopsis of a more recent paper, Michael Schirber explains a new refinement in the study of flocks.

Andrea Cavagna from the National Research Council (CNR) in Rome, Italy, and his colleagues have now explored this spin wave model in the continuous limit, where the birds can be thought of as fluid elements in a large hydrodynamic system. Both spin waves  and density waves  can occur, but in some cases they damp out before traveling very far. The researchers show that only spin waves propagate in small flocks, whereas density waves dominate for large flocks. In the intermediate region, no waves can propagate, which would make flocks of this size unsustainable. The results may have implications for other animal groups, such as fish schools and mammal herds.

In a New Scientist piece on the same study this point is made:

“I think it is interesting because it identifies purely physical mechanisms for the propagation of information across the flock,” says Cristina Marchetti of Syracuse University in New York. “More importantly, it imposes constraints on such a propagation, which imply constraints on the size of the flock.”

The theme that runs through all of these studies is the recognition that the behavior of all kinds of systems (physical, behavioral, biological) can look the same when viewed from certain perspectives. This ‘sameness’ is most often brought to light with mathematics. There are many things suggested by this, about nature and about mathematics. But, today, my inclination is to say this:  Mathematics itself looks like one of the many faces of nature when we imagine that mathematics itself is an evolving organization of things related to each other (in our own heads, if you will). Like the biological systems that produce organisms or the matter and energy systems that produce galaxies, mathematics produces something. Our conscious experience of mathematics, denoted and investigated by mathematicians, is as difficult to pin to the physical as consciousness itself. But mathematics, like delicate new tissue that runs through us and around us, does consistently provide the mechanism for seeing and understanding. And so, what we call ‘seeing’ and ‘understanding’ must also be some aspect of nature organizing itself.

Here are a couple more links:

Wired 11.08.11

Wired 03.13.12

Bees, art, consciousness and mathematics

Studies and insights into the nature of consciousness always get my attention. Inevitably I see mathematics in the discussion, tangentially or directly (as with Giulio Tononi’s qualia space). I’d like to outline, here, a particular train of thought that emerged after reading a couple of articles and a few papers.

The first of these, written by psychologist Nicholas Humphrey, appears in a current issue of Scientific American Mind.  Consciousness as Art is the title of the article. Humphrey took note of the debate among theoretical psychologists, where ideas seem to fall within one of two perspectives:

Some assert that the manifestly eerie and ineffable qualities of subjective experience can only mean that these nonphysical qualities are inherent in the fabric of the universe. Others, including me, are more suspicious. They argue that consciousness may be more like a conjuring show, whereby the physical brain is tricking people into believing in qualities that don’t really exist.

I’m not sure how any structure brought about by the relationship of our bodies with everything else can be said to not exist. While it may be difficult to find color outside of our own interaction with light, it would seem that deleting its existence wouldn’t help us understand things any better. The perspective of the illusionist is grounded in what many believe is an irreconcilable gap between the physical world and the worlds created by consciousness – the worlds of individual experience and ideas. I’m more interested in finding clues to how these worlds are united (and, I suspect that mathematics is one of our best clues). I suppose I belong to what Humphrey calls the realist camp:

In their view, if your sensations appear to have qualities that lie beyond the scope of physics, then they really do have such qualities. And these realists explain their reasoning by suggesting that the brain activity underlying sensations already has consciousness latent in it as an additional property of matter—a property as yet unrecognized by physics but one that you, the conscious subject, are somehow able to tap into.

I wouldn’t put it that way. More interesting, though, is what Humphrey proposes as a way to get around the idea to which he subscribes – that the brain is tricking us.

…might it be more persuasive if we were to talk about qualia as art rather than illusion? I am not proposing an alternative theory to illusionism, but my hope is that shifting the emphasis in a positive direction may in fact make the illusionist theory more scientifically acute and at the same time more humanly agreeable.

Thus, this way of thinking about sensations allows us to look out for—and celebrate—the psychological growth that human beings derive from participating in the self-made show.

The chief scientific bonus of conceptualizing consciousness as art may prove to be precisely this: that it raises new questions for an evolutionist about the value and purpose of consciousness. If sensations are art, the artist behind them is actually not the individual brain as such. Rather the artist—the ultimate designer—must be the evolutionary forces of natural selection, which have contrived to put in place the genetic code for building the qualia-generating brain.

This is, I believe, a move in the right direction, although Humphrey’s proposal for the evolutionary purpose of this is unnecessarily pragmatic. He considers that the evolutionary function of brain art is to “induce you to fall in love with yourself” and to encourage you to think of “all humans as equally touched by magic,” to support, I gather, our survival.

There was a reference in this article to a 2008 piece by Christof Koch in which he discussed the work of Martin Giurfa of the University of Toulouse in France who, along with colleagues, published a paper in Nature with the title The concepts of ‘sameness’ and ‘difference’ in an insect.  Their abstract tells us this:

..research has indicated that bees are capable of cognitive performances that were thought to occur only in some vertebrate species. For example, honeybees can interpolate visual information, exhibit associative recall, categorize visual information, and learn contextual information. Here we show that honeybees can form ‘sameness’ and ‘difference’ concepts. They learn to solve ‘delayed matching-to-sample’ tasks, in which they are required to respond to a matching stimulus, and ‘delayed non-matching-to-sample’ tasks, in which they are required to respond to a different stimulus; they can also transfer the learned rules to new stimuli of the same or a different sensory modality. Thus, not only can bees learn specific objects and their physical parameters, but they can also master abstract inter-relationships, such as sameness and difference.

Koch highlights some of the specifics in his article:

Although bees can’t be expected to push levers, they can be trained to take either the left or the right exit inside a cylinder modified for the DMTS test. A color disk serves as a cue at the entrance of the maze, so that the bee sees it before entering. Once within the maze, the bee has to choose the arm displaying the color that matches (DMTS) or differs from (DNMTS) the color at the entrance. Bees perform both tasks well. They even generalize to a situation they have never previously encountered. That is, once they’ve been trained with colors, they “get it” and can now follow a trail of vertical stripes if a disk with vertical gratings is left at the entrance of the maze. These experiments tell us that bees have learned an abstract relation (sameness in DMTS, difference in DNMTS) irrespective of the physical nature of the stimuli. The generalization to novel stimuli can even occur from odors to colors.

Koch remarks that, although these experiments do not demonstrate that the bees are conscious, they do caution us not to reject this possibility too quickly.

Bees are highly adaptive and sophisticated creatures with a bit fewer than one million neurons, which are interconnected in ways that are beyond our current understanding, jammed into less than one cubic millimeter of brain tissue. The neural density in the bee’s brain is about 10 times higher than that in a mammalian cerebral cortex, which most of us take to be the pinnacle of evolution on this planet.

In a paper that Koch coauthored with Giulio Tononi they suggest an approach to the study of consciousness that is very promising.

Indeed, as long as one starts from the brain and asks how it could possibly give rise to experience—in effect trying to ‘distill’ mind out of matter, the problem may be not only hard, but almost impossible to solve. But things may be less hard if one takes the opposite approach: start from consciousness itself, by identifying its essential properties, and then ask what kinds of physical mechanisms could possibly account for them.

The paper describes Tononi’s integrated information theory of consciousness in great detail. The essential properties of consciousness that are proposed have a mathematical character:

Taking consciousness as primary, IIT first identifies axioms of experience, then derives a set of corresponding postulates about its physical substrate. The axioms of IIT are assumptions about our own experience that are the starting point for the theory. Ideally, axioms are essential (apply to all experiences), complete (include all the essential properties shared by every experience), consistent (lack contradictions) and independent (not derivable from each other).

Giurfa’s observations of honey bees identify cognitive abilities that are also associated with mathematics – as bees are observed to “master abstract inter-relationships, such as sameness and difference.” My point here is simply that mathematics may provide significant support to the investigation of cognition and/or consciousness in living things. Rudimentary mathematical forms serve as maps to cognitive structure and conscious experience in lives other than our own. And mathematics, as an efficacious cognitive event in our experience, can perhaps alter the terms of the debate about human consciousness between the realists and the illusionists.  Mathematics is uniquely important to both physical law and the pure creativity of exploring precisely defined abstract relationships.

Navigation cells, intent, and folded dimensions

I read a short article on scientificamerican.com reporting on a recent advance in the investigation of the neural systems that support navigation, or our sense of direction.  When I did some follow-up on the individual who led the study, I was surprised to find another interesting collaboration between scientists and artists. While the collaboration was centered on inquiries into perception, memory, and space, it touched on things related to mathematics – at least in its discussions of space, dimension and direction.  Both the study and the collaboration make some interesting points.  I’ll start with the study.

It was led by Hugo Spiers of University College London.  Spiers found something new in the action of head-direction cells – neural cells that fire when we face a certain direction. These cells have been known to play a role in our ability to navigate through our environment, working with place cells in the hippocampus (which establish our memory of specific locations and a kind of map of the environment) and grid cells in the adjacent entorhinal cortex (which somehow map where we are relative to where we have just been). What the researchers were able to observe was that head-direction cells also fired in response to the direction we wanted to go, not only the direction we were actually facing.

The entorhinal region displayed a distinct pattern of activity when volunteers faced each direction—consistent with how head-direction cells should behave. The researchers discovered, however, that the same pattern appeared whether the volunteers were facing a specific direction or just thinking about it. The finding suggests that the same mechanism that signals head direction also simulates goal direction.

It might help to describe the whole system as it is currently understood.  In a recent paper, Spiers and co-author Caswell Barry provide a nice description of how the cells that function in navigation interact.

Electrophysiological investigations have revealed several distinct neural representations of self-location (see Figure 1 and for review [15]). Briefly, place cells found in hippocampal regions CA3 and CA1 signal the animal’s presence in particular regions of space; the cells’ place fields [16] (Figure 1a). Place fields are broadly stable between visits to familiar locations but remap whenever a novel environment is encountered, quickly forming a new and distinct representation [17 and 18]. Grid cells, identified in entorhinal cortex, and subsequently in the pre-subiculum and para-subiculum, also signal self-location but do so with multiple receptive fields distributed in a striking hexagonal array [19 and 20] (Figure 1b). Head direction cells, found throughout the limbic system, provide a complementary representation, signalling facing direction; with each cell responding only when the animal’s head is within a narrow range of orientations in the horizontal plane (e.g. [21], Figure 1c). Other similar cell types are also known, for example border cells which signal proximity to environmental boundaries [22] and conjunctive grid cells which respond to both position and facing direction [23]. It is likely that these spatial representations are a common feature of the mammalian brain, at the very least grid cells and place cells have been found in animals as diverse as bats, humans, and rodents [15].
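To give a rough sense of what “signalling facing direction” means, here is a minimal sketch of an idealized head-direction tuning curve – a von Mises (circular Gaussian) bump of firing rate centred on the cell’s preferred direction. The preferred direction, peak rate, and tuning width are illustrative values of my own, not figures from Spiers and Barry’s paper.

    import math

    def head_direction_rate(theta, preferred, peak_rate=40.0, kappa=4.0):
        """Idealized von Mises tuning: firing rate (spikes/s) as a function of
        facing direction theta (radians), peaked at the cell's preferred direction."""
        return peak_rate * math.exp(kappa * (math.cos(theta - preferred) - 1.0))

    # A cell that 'prefers' due east (0 radians) fires strongly near 0 and is
    # nearly silent when the animal faces the opposite way.
    for deg in (0, 30, 90, 180):
        print(deg, round(head_direction_rate(math.radians(deg), preferred=0.0), 2))

Spiers’s finding is, in effect, that the same population carrying this kind of directional signal also lights up for the direction one merely intends to face.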

What first struck me about the work reported in the Scientific American piece was that this navigation system, which looks fairly mechanical, has at least one more layer – one that equates the direction faced with one’s intent to face it. The head-direction cells respond to a goal direction even when the head itself is not pointed that way. From the paper on which the Scientific American piece was based:

In summary, we show that the human entorhinal/subicular region supports a neural representation of geocentric goal direction. We further show that goal direction shares a common neural representation with facing direction. This suggests that head-direction populations within the entorhinal/subicular region are recruited for the simulation of the direction to future goals. These results not only provide the first evidence for the presence of goal direction representations within the mammalian brain but also suggest a specific mechanism for the computation of this neural signal, based on simulation.

When I looked further into Spiers’s research, I found links on his University College London website describing work associated with art and architecture, including his collaboration with the artist Antoni Malinowski.  In an interview that Spiers conducted with Malinowski, Malinowski talked about his own work, distinguishing it from the work of architects. Architects, he said, deal with space diagrammatically. He, by contrast, deals with space in a reduced way. His subject is the interaction of dimensions – the three and four of space and time and the two of a flat surface. He proposed that dimensions are foldable: when he works, he folds four dimensions into two with brushstroke and paint, and these are then ‘unfolded’ in the viewing. This sounds like an inquiry, an investigation of the nature and perception of dimension.

Malinowski describes how he works:

I create a situation where you do not know where you are, and you don’t know what it is. So you have to make an effort. I want to take you to a mental area. And in order to do so I have all those tools, which are colour, rather delicious, and wonderful. So you are drawn into them. And I construct it in such a way that you want to go there.
So as viewer you notice something and you go off… But it is all done in a language of painting it is not really definable.

A review of his work by Mark Rappolt says this:

…his work escapes the canvas to cover a building’s walls, Malinowski exploits architecture not as a singular fixed entity, but as a plurality of possible worlds, as an illusory reality, a space of shifting sand. Perhaps in doing this he comes closer than many architects to an understanding of what space really is.  (emphasis added)

Malinowski is playing with perception and orientation, perhaps to reveal something about it. His work seems to surprise the viewer, but it’s telling us something about ourselves and about how we make things sensible – something we can’t see in our day-to-day experience. Looking at the development of mathematics, from its more familiar, more physical roots to its strange and powerful abstractions, can do something similar. The investigation of what one means by ‘space’ in mathematics (Euclidean and non-Euclidean spaces, manifolds, topological spaces, parameter spaces, etc.) has produced some of its most effective applications. Mathematics contains more than one definition of dimension, each of which produces its own results. And the vector, the mathematical description of direction, finds its way into the geometry of relativity, the phase evolution of a wave, the calculation of probabilities, and the spin of fundamental particles, to name just a few. It seems clear to me that mathematics is a very thorough investigation of experience, even as it becomes dissociated from it. The work of building mathematics is much larger than Malinowski’s individual investigation of perception and orientation – it is intergenerational, shared, and more universal – but I find in his work a similar inclination to pry open familiar experience to find something new.

Inquiries

A short article in the April 16 issue of New Scientist reported on an applied soft computing paper that proposes an improvement on what’s known as ‘particle swarm optimization’ (PSO).

Particle swarm optimization (PSO) is an optimization technique inspired by the social behavior of birds. Described as a simple and powerful algorithm, it can be used to optimize high-dimensional functions (in other words, to find the maxima and minima of functions with many parameters). There is quite a bit of information on the website Code Project, where they explain:

To understand the algorithm, it is best to imagine a swarm of birds that are searching for food in a defined area – there is only one piece of food in this area. Initially, the birds don’t know where the food is, but they know at each time how far the food is. Which strategy will the birds follow? Well, each bird will follow the one that is nearest to the food.

PSO adapts this picture of birds searching for food to the search for the best solution vector in a search space. Each particle is a single candidate solution. The algorithm defines a measure of solution quality, begins with particles at random positions, and, over some number of iterations, lets each particle adjust its velocity and position as it is pulled toward the best solutions found so far.
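To make the mechanics concrete, here is a minimal sketch of a standard PSO update in Python. The test function, the swarm size, and the inertia and acceleration coefficients (w, c1, c2) are illustrative choices of my own; they are not taken from the paper the New Scientist article describes.

    import random

    def pso(fitness, dim, n_particles=30, iters=200,
            w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
        """Minimize `fitness` over the box [lo, hi]^dim with a basic particle swarm."""
        # Random initial positions; velocities start at zero.
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                    # each particle's best position so far
        pbest_val = [fitness(p) for p in pos]
        g = pbest_val.index(min(pbest_val))
        gbest, gbest_val = pbest[g][:], pbest_val[g]   # best position found by the whole swarm

        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # inertia + pull toward personal best + pull toward global best
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                val = fitness(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = pos[i][:], val
        return gbest, gbest_val

    # Example: minimize the sphere function, whose minimum value is 0 at the origin.
    best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=5)
    print(best_val)

Variants generally keep an update of this basic shape and differ in how the population is managed over time, which is where the improvement described below comes in.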

The New Scientist article gives a more general description of this approach along with one of its limitations:

One way they can do this is by using groups of virtual creatures that wander through “parameter space”, looking for valleys that represent the lowest values. Mathematicians have taken inspiration from actual animals, from grey wolves to ants. One limitation, though, is that the animals sometimes fail to notice a deeper valley nearby.

The suggested improvement is to add parasites to the mix:

In their model, a swarm of animals searched for the lowest valleys, but was then joined by a second, parasitic population. This group searched for valleys, but also abducted the most successful animals and made them work for the parasite team.

The struggle resulted in a more varied collection of creatures, allowing the parasitic algorithm to solve the problem twice as fast.

I thought about what this kind of thing could mean about the mathematics itself. Why would there be any relationship between a bird’s search for food and our interest in optimization solutions? We’re not just modeling the bird’s behavior, we’re using the bird’s behavior to solve our own problems. There is here an unexpected overlap between two kinds of inquiries. And this word, I think, is key – inquiry.

There is still some debate among cognitive scientists about whether our more primal experience of quantity is discrete, like the numbers that we count with, or continuous, like our sense of time. If, as many cognitive scientists argue, our first sense of quantity is continuous (like the real numbers), and if it is true that numbers followed language, then the 19th-century struggle to understand and define the continuum (represented by the real number line) can look like an investigation of number, an inquiry back into number’s source. And once I begin to think in terms of inquiries, I see them everywhere. Visual art is an inquiry into visual sensation – a view consistently presented by neuroscientist Semir Zeki. Mathematics is an inquiry into sensation as well as into abstract relationship itself (logical, numerical, geometric, probabilistic, etc.). The nature of these inquiries is, perhaps, a pure exploration of living interactions – the eye and light, the relationships that produce comprehension, movement and space.

The search for food is certainly an inquiry, as is swarming in the more general sense.  I would include my own earlier discussion of a plant’s calculation of the rate at which it will consume its stored food. Perhaps evolution itself is an inquiry into life’s possibilities.