“…an anchor in the cosmic swirl.”

Looking through some blog sites that I once frequented (but have recently neglected) I saw that John Horgan’s Cross Check had a piece on George Johnson’s book Fire in the Mind: Science, Faith, and the Search for Order. This quickly caught my attention because Horgan and Johnson figured prominently in my mind in the late 90’s. In the first paragraph Horgan writes:

Fire alarmed me, because it challenged a fundamental premise of The End of Science, which I was just finishing.

In the mid-nineties, I knew that Horgan was a staff writer for Scientific American and I had kept one of his pieces on quantum physics in my file of interesting new ideas. When I heard about The End of Science I got a copy and very much enjoyed it. I had begun writing, and was trying to create a new beginning for myself. This included my decision to leave New York (where I had lived my whole life) and Manhattan in particular, where I had lived for about seventeen years. In the end, it was Johnson’s book that gave my move direction. I wouldn’t just move to a place that was warmer, prettier, and easier. I decided to move to Santa Fe, New Mexico.

In his original review of Fire in the Mind, Horgan produced a perfect summary of the reasons I chose Santa Fe. He reproduced this review on his blog in response to the release of a new edition:

In New Mexico, the mountains’ naked strata compel historical, even geological, perspectives. The human culture, too, is stratified. Scattered in villages throughout the region are Native Americans such as the Tewa, whose creation myths rival those of modern cosmology in their intricacy. Exotic forms of Christianity thrive among both the Indians and the descendants of the Spaniards who settled here several centuries ago. In the town of Truchas, a sect called the Hermanos Penitentes seeks to atone for humanity’s sins by staging mock crucifixions and practicing flagellation.

Lying lightly atop these ancient belief systems is the austere but dazzling lamina of science. Slightly more than half a century ago, physicists at the Los Alamos National Laboratory demonstrated the staggering power of their esoteric formulas by detonating the first atomic bomb. Thirty miles to the south, the Santa Fe Institute was founded in 1985 and now serves as the headquarters of the burgeoning study of complex systems. At both facilities, some of the world’s most talented investigators are seeking to extend or transcend current explanations about the structure and history of the cosmos.

Santa Fe, it seemed, would not only be a nice place to live, it would be a good place to think. But I should stop reminiscing and get to the point, which has to do with Johnson’s book and a few related topics that Horgan pointed to in his suggestions for further reading. Before I look at those suggestions, let’s see why they were there.

Horgan characterizes Johnson’s book as “one that raises unsettling questions about science’s claims to truth.” Johnson puts forward a simple description of the view that characterizes Fire in the Mind in the Preface to the new edition.

Our brains evolved to seek order in the world. And when we can’t find it, we invent it. Pueblo mythology cannot compete with astrophysics and molecular biology in attempting to explain the origins of our astonishing existence. But there is not always such a crisp divide between the systems we discover and those we imagine to be true.

Horgan credits Johnson with providing “an up-to-the-minute survey of the most exciting and philosophically resonant fields of modern research,” and goes on to say, “This achievement alone would make his book worth reading. His accounts of particle physics, cosmology, chaos, complexity, evolutionary biology and related developments are both lyrical and lucid.” But the issues raised, and battered about a bit by Horgan, have to do with what one understands science to be, and what one could mean by truth. Johnson argues that there is a fundamental relationship between the character of pre-scientific myths and scientific theories. For Horgan, this brought Thomas Kuhn to mind and hence a reference to one of his posts from 2012, What Thomas Kuhn Really Thought about Scientific “Truth.”

While pre-scientific stories about the world are usually definitively distinguished from the scientific view, the impulse to explore them does occur in the scientific community.  I, for one, was  impressed some years ago when I saw that the sequence of events in the creation story I learned from Genesis somewhat paralleled scientific ideas (light appeared, then light was separated from darkness, sky from water, water from land, then creatures appeared in the water and sky followed by creatures on the land). The effectiveness of scientific theories, however, is generally accepted to be the consequence of the theories being correct. One of the things that inspires books like Johnson’s, however, is that science hasn’t actually diminished the mystery of our existence and our world. The stubborn strangeness of quantum-mechanical physics, the addition of dark matter and dark energy to the cosmos, the surprises in complexity theories, the difficulties understanding consciousness, all of these things stir up questions about the limits of science or even what it means to know anything.

Horgan also refers to the use of information theory to solve some of physics’ mysteries, where information is treated as the fundamental substance of the universe. He links to a piece where he argues that this can’t be true. But I believe Horgan is not seeing the reach of the information theses. According to some theorists, like David Deutsch, information is always ‘instantiated,’ always physical, but always undergoing transformation. It has, however, some substrate independence. Information as such includes the coding in DNA, the properties within quantum mechanical systems, as well as our conceptual systems. On another level, consciousness is described by Giulio Tononi’s model as integrated information.

The persistence of mystery doesn’t cause me to wonder about whether scientific ideas are true or not. It leads me to ask more fundamental questions: What is science? How did it happen? Why or how was it perceived that mathematics was the key? I believe that these are the questions lying just beneath Johnson’s narrative.

The development of scientific thinking is an evolution, one that is likely part of some larger evolution. It is real, it has meaning and it has consequences. I wouldn’t ask if it’s true. It is what we see when we hone particular skills of perception. Mathematics is how we do it. Like the senses, mathematics builds structure from data, even when those structures are completely beyond reach. And when mathematicians explore it directly, they probe this structure-building apparatus itself.

I can’t help but interject here something from biologist Humberto Maturana, from a paper published in Cybernetics and Human Knowing, where he comments, “…reality is an explanatory notion invented to explain the experience of cognition.”

Relevant here is something else I found as I looked through Scientific American blog posts. An article by Paul Dirac from the May 1963 issue of Scientific American was reproduced in a 2010 post. It begins:

In this article I should like to discuss the development of general physical theory: how it developed in the past and how one may expect it to develop in the future. One can look on this continual development as a process of evolution, a process that has been going on for several centuries.

In the course of talking about quantum theory, Dirac describes Schrödinger’s early work on his famous equation.

Schrodinger worked from a more mathematical point of view, trying to find a beautiful theory for describing atomic events, and was helped by De Broglie’s ideas of waves associated with particles. He was able to extend De Broglie’s ideas and to get a very beautiful equation, known as Schrodinger’s wave equation, for describing atomic processes. Schrodinger got this equation by pure thought, looking for some beautiful generalization of De Broglie’s ideas, and not by keeping close to the experimental development of the subject in the way Heisenberg did.

Johnson ends his new Preface nicely:

As I write this, I can see out my window to the piñon-covered foothills where the Santa Fe Institute continues to explore the science of complex systems—those in which many small parts interact with one another, giving rise to a rich, new level of behavior. The players might be cells in an organism or creatures in an ecosystem. They might be people bartering and selling and unwittingly generating the meteorological gyrations of the economy. They might be the neurons inside the head of every one of us— collectively, and still mysteriously, giving rise to human consciousness and its beautiful obsession to find an anchor in the cosmic swirl.

The continuity of things

I think often about the continuity of things – about the smooth progression of structure, the stuff of life, from the microscopic to the macrocosmic. I was reminded, again, of how often I see things in terms of continuums when I listened online to a lecture given by Gregory Chaitin in 2008. In that lecture (from which he has also produced a paper) Chaitin defends the validity and productivity of an experimental mathematics, one that uses the kind of reasoning with which a theoretical physicist would be comfortable. And here he argues:

Absolute truth you can only approach asymptotically in the limit from below.

For some time now, I have considered this asymptotic approach to truth in a very broad sense, where truth is just all that is. In fact, I tend to understand most things in terms of a continuum of one kind or another. And I have found that research efforts across disciplines increasingly support this view. It is consistent, for example, with the ideas expressed in David Deutsch’s The Beginning of Infinity, where knowledge is equated with information, whether physical (like quantum systems), biological (like DNA) or explanatory (like theory). From this angle, the non-explanatory nature of biological knowledge, like the characteristics encoded in DNA, is distinguished only by its limits. Deutsch’s newest project, which he calls constructor theory, relies on the idea that information is fundamental to everything. Constructor theory is meant to get at what Deutsch calls the “substrate independence of information.” It defines a more fundamental level of physics than particles, waves and space-time. And Deutsch expects that this ‘more fundamental level’ will be shared by all physical systems.

In constructor theory, it is information that undergoes consistent transformation – from the attribute of a creature determined by the arrangement of a set of nucleic acids, to the symbolic representation of words on a page that begin as electrochemical signals in my brain, to the information transferred in quantum mechanical events. Everything becomes an instance on a continuum of possibilities.

I would argue that another kind of continuum can be drawn from Semir Zeki’s work on the visual brain. Zeki’s investigation of the neural components of vision has led to the study of what he calls neuroesthetics, which re-associates creativity with the body’s quest for knowledge. While neuroesthetics begins with a study of the neural basis of visual art, it inevitably touches on epistemological questions. The institute that organizes this work lists as its first aim:

-to further the study of the creative process as a manifestation of the functions and functioning of the brain.  (emphasis added)

The move to associate the execution and appreciation of visual art with the brain is a move to re-associate the body with the complexities of conscious experience. Zeki outlines some of the motivation in a statement on the neuroesthetics website.  He sees art as an inquiry through which the artist investigates the nature of visual experience.

It is for this reason that the artist is in a sense, a neuroscientist, exploring the potentials and capacities of the brain, though with different tools.

Vision is understood as a tool for the acquisition of knowledge. (emphasis added)

The characteristic of an efficient knowledge-acquiring system, faced with permanent change, is its capacity to abstract, to emphasize the general at the expense of the particular. Abstraction, which arguably is a characteristic of every one of the many different visual areas of the brain, frees the brain from enslavement to the particular and from the imperfections of the memory system. This remarkable capacity is reflected in art, for all art is abstraction.

If knowledge is understood in Deutsch’s terms, then all of life is the acquisition of knowledge, and the production of art is a biological event.  But this use of the abstract, to free the brain of the particular, is present in literature as well, and is certainly operating in mathematics. One can imagine a continuum from retinal images, to our inquiry into retinal images, to visual art and mathematics and the productive entwining of science and mathematics.

Another Chaitin paper comes to mind here – Conceptual Complexity and Algorithmic Information. This paper focuses on the complexity that lies ‘between’ the complexities of the tiny worlds of particle physics and the vast expanses of cosmology, namely the complexity of ideas. The paper proposes a mathematical approach to philosophical questions by defining the conceptual complexity of an object X

to be the size in bits of the most compact program for calculating X, presupposing that we have picked as our complexity standard a particular fixed, maximally compact, concise universal programming language U.

Chaitin then uses this definition to explore the conceptual complexity of physical, mathematical, and biological theories. Particularly relevant to this discussion is his idea that the brain could be a two-level system. In other words, the brain may not only be working at the neuronal level, but also at the molecular level. The “conscious, rational, serial, sensual front-end mind” is fast and the action on this front is in the neurons. The “unconscious, intuitive, parallel, combinatorial back-end mind,” however, is molecular (where there is much greater computing and memory capacity).  If this model were correct, it would certainly break down our compartmental view of the body  (and the body’s experience).  And it would level the playing field, revealing an equivalence among all of the body’s actions that might redirect some of the questions we ask about ourselves and our world.
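Chaitin’s quantity is uncomputable in general – no algorithm can find the truly shortest program for an arbitrary object. But its flavor can be sketched with ordinary compression, which gives a computable upper bound on the size of a description. Here is a minimal sketch in Python, with zlib standing in (as my own rough proxy, not Chaitin’s language U) for a maximally compact encoding:

```python
import random
import zlib

def complexity_upper_bound(text: str) -> int:
    """Size in bytes of a zlib-compressed description: a computable
    upper bound standing in for the (uncomputable) size of the most
    compact program that generates the text."""
    return len(zlib.compress(text.encode("utf-8"), level=9))

# A highly patterned "idea" needs far fewer bits than a random one.
patterned = "0123456789" * 100
random.seed(0)
noisy = "".join(random.choice("0123456789") for _ in range(1000))

print(complexity_upper_bound(patterned) < complexity_upper_bound(noisy))  # → True
```

The gap between the two values is the point: structure compresses, noise does not, and that difference is what the conceptual-complexity measure is meant to capture.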



Shared paths to Infinity

My last post focused on the kinds of problems that can develop when abstract objects, created within mathematics, increase in complexity – like the difficulty of wrapping our heads around them, or of managing them without error. I thought it would be interesting to turn back around and take a look at how the seeds of an idea can vary.

I became aware only recently that a fairly modern mathematical idea was observed in the social organizations of African communities. Ron Eglash, Professor at the Rensselaer Polytechnic Institute, has a multifaceted interest in the intersections of culture, mathematics, science, and technology. Sometime in the 1980s Eglash observed that aerial views of African villages showed fractal patterns, and he followed this up with visits to the villages to investigate the patterns.

In a 2007 TED talk Eglash describes the content of the fractal patterns displayed by the villages. One of these villages, located in southern Zambia, is made up of a circular pattern of self-similar rings. The whole village is a ring, and on that ring are the rings of individual families and, within each of those rings, are the heads of families. In addition to the repetition of the rings that shape the village and the families, there is the repetition of the sacred altar spot. There is a sacred altar placed in the same spot in each individual home. And in each family ring, the home of the head of the family is found in the sacred altar spot. In the ring of all families (or the whole village) the Chief’s ring is in the place of the sacred altar and, within the Chief’s ring, the ring for the Chief’s immediate family is in the place of the sacred altar. Within the home of the Chief’s immediate family, ‘a tiny village’ is in the place of the sacred altar. And within this tiny village live the ancestors. It’s a wonderful picture of an infinitely extending self-similar pattern.
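As a rough illustration (my own toy model, not Eglash’s data), the rings-within-rings layout can be generated by a short recursion in which each child ring is a scaled copy of the whole:

```python
import math

def ring_points(cx, cy, radius, n):
    """Centers of n child rings evenly spaced around a parent ring."""
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]

def village(cx, cy, radius, n, depth):
    """Recursively place rings within rings, down to the given depth.
    Returns a list of (x, y, radius) for every ring generated."""
    rings = [(cx, cy, radius)]
    if depth > 0:
        for (x, y) in ring_points(cx, cy, radius, n):
            rings += village(x, y, radius / n, n, depth - 1)
    return rings

# Three levels of 8 rings each: 1 + 8 + 64 = 73 rings in total.
print(len(village(0.0, 0.0, 1.0, 8, 2)))  # → 73
```

Plotting the returned circles would give the nested-ring picture; the recursion could go on indefinitely, which is the ‘tiny village of the ancestors’ in geometric form.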

Eglash is clear about the fact that these kinds of scaling patterns are not universal to all indigenous architectures, and also that the diversity of African cultures is fully expressed within the fractal technology:

…a widely shared design practice doesn’t necessarily give you a unity of culture — and it definitely is not “in the DNA.”…the fractals have self-similarity — so they’re similar to themselves, but they’re not necessarily similar to each other — you see very different uses for fractals. It’s a shared technology in Africa.

Certainly it is interesting that before the notion of a fractal in mathematics was formalized, purposeful fractal designs were being used by communities in Africa to organize themselves. But what I find even more provocative is that everything in the life of the village is subject to the scaling. Social, mystical, and spatial (geometric) ideas are made to correspond. This says something about the character of the mechanism being used (the fractals), as well as the culture that developed its use.

While it was brief, Eglash did provide a review of some early math ideas on recursive self-similarity, paying particular attention to the Cantor set  and the Koch curve. He made the observation that Cantor did see a correspondence between the infinities of mathematics and God’s infinite nature. But in these recursively produced village designs, that correspondence is embodied in the stuff of everyday life. It is as if the ability to represent recursive self-similarity and the facts of life itself are experienced together. The recursive nature of these village designs didn’t happen by accident. It was clearly understood. As Eglash says in his talk,

…they’re mapping the social scaling onto the geometric scaling; it’s a conscious pattern. It is not unconscious like a termite mound fractal.
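The Cantor set Eglash reviews is itself easy to generate recursively; a minimal sketch:

```python
def cantor(intervals, depth):
    """Iterate the Cantor-set construction: replace each interval
    with its two outer thirds (removing the open middle third),
    repeated depth times."""
    for _ in range(depth):
        intervals = [piece
                     for (a, b) in intervals
                     for piece in ((a, a + (b - a) / 3),
                                   (b - (b - a) / 3, b))]
    return intervals

# After n steps there are 2**n intervals, each of length 3**(-n).
level3 = cantor([(0.0, 1.0)], 3)
print(len(level3))  # → 8
```

The same remove-and-repeat rule, applied forever, yields the infinite self-similar set that so fascinated Cantor.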

Given that the development of these patterns happened outside mathematics proper, and predates mathematics’ formal representation of fractals, questions are inevitably raised about what mathematics is, and this is exactly the kind of thing on which ethnomathematics focuses. Eglash is an ethnomathematician. A very brief look at some of the literature in ethnomathematics reveals a fairly broad range of interests, many of which are oriented toward more successful mathematics education, and many of which are strongly criticized. But it seems to me that the meaning and significance of ethnomathematics has not been made precise. In a 2006 paper, Eglash makes an interesting observation. He considers that the “reticence to consider indigenous mathematical knowledge” may be related to the “Platonic realism of the mathematics subculture.”

For mathematicians in the Euro-American tradition, truth is embedded in an abstract realm, and these transcendental objects are inaccessible outside of a particular symbolic analysis.

Clearly there will be political questions (related to education issues) tied up in this kind of discussion about what and where mathematics is.  But, with respect to these African villages, I most enjoyed seeing a mathematical idea become the vehicle with which to explore and represent infinities.




“The future of mathematics is more a spiritual discipline…”

I did some following up on the work of Vladimir Voevodsky and for anyone who might ask, “what’s actually going on in mathematics,” Voevodsky’s work adds, perhaps, even more to the mystery. Not that I mind. The mystery emerges from the limitless depths (or heights) of thought that are revealed in mathematical ideas or objects. It is this that continues to captivate me. And the grounding of these ideas, provided by Voevodsky’s work on foundations, reflects the intrinsic unity of these highly complex and purely abstract entities, suggesting a firm rootedness to these thoughts – an unexpected and enigmatic rootedness that calls for attention.

Voevodsky gave a general audience talk in March of 2014 at the Institute for Advanced Study in Princeton, where he is currently Professor in the School of Mathematics. In that talk he described the history of much of his work and how he became convinced that to do the kind of mathematics he most wanted to do, he would need a reliable source to confirm the validity of the mathematical structures he builds.

As I was working on these ideas I was getting more and more uncertain about how to proceed. The mathematics of 2-theories is an example of precisely that kind of higher-dimensional mathematics that Kapranov and I had dreamed about in 1989. And I really enjoyed discovering new structures there that were not direct extensions of structures in lower “dimensions”.

But to do the work at the level of rigor and precision I felt was necessary would take an enormous amount of effort and would produce a text that would be very difficult to read. And who would ensure that I did not forget something and did not make a mistake, if even the mistakes in much more simple arguments take years to uncover?

I think it was at this moment that I largely stopped doing what is called “curiosity driven research” and started to think seriously about the future.

It soon became clear that the only real long-term solution to the problems that I encountered is to start using computers in the verification of mathematical reasoning.

Voevodsky expresses the same concern in a Quanta Magazine article by Kevin Hartnett.

“The world of mathematics is becoming very large, the complexity of mathematics is becoming very high, and there is a danger of an accumulation of mistakes,” Voevodsky said. Proofs rely on other proofs; if one contains a flaw, all others that rely on it will share the error.

So, at the heart of this discussion seems to be a quest for useful math-assistant computer programs. But both the problems mathematicians like Voevodsky face, and the computer assistant solutions he explored, highlight something intriguing about mathematics itself.

Hartnett does a nice job making the issues relevant to Voevodsky’s innovations accessible to any interested reader. He reviews Bertrand Russell’s type theory, a formalism created to circumvent the paradoxes of Cantor’s original set theory – as in the familiar paradox created by things like the set of all sets that don’t contain themselves. (If the set does contain itself, then it doesn’t contain itself.) This kind of problem is avoided in Russell’s type theory by making a formal distinction between collections of elements and collections of other collections. It turns out that within type theory, equivalences among sets are understood in much the same way as equivalences among spaces are understood in topology.

Spaces in topology are said to be homotopy equivalent if one can be deformed into the other without tearing either. Hartnett illustrates this using letters of the alphabet:

The letter P is of the same homotopy type as the letter O (the tail of the P can be collapsed to a point on the boundary of the letter’s upper circle), and both P and O are of the same homotopy type as the other letters of the alphabet that contain one hole — A, D, Q and R.

The same kind of equivalence can be established between a line and a point, or a disc and a point, or a coffee mug and a donut.

Given their structural resemblance, type theory handles the world of topology well. Things that are homotopy equivalent can also be said to be of the same homotopy type. But the value of the relationship between type theory and homotopic equivalences was greatly enhanced when Voevodsky learned Martin-Löf type theory (MLTT), a formal language developed by a logician for the task of checking proofs on a computer. Voevodsky saw that this computer language formalized type theory and, by virtue of type theory’s similarity to homotopy theory, it also formalized homotopy theory.
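The correspondence can be glimpsed in a proof assistant in the Martin-Löf tradition. In Lean, for instance, an equality proof behaves like a path: it can be reversed and composed, just as paths are in homotopy theory. (A toy illustration of the idea, not Voevodsky’s actual Coq development.)

```
-- An equality proof p : a = b acts like a path from a to b.
example {α : Type} {a b : α} (p : a = b) : b = a :=
  p.symm            -- reversing a path

example {α : Type} {a b c : α} (p : a = b) (q : b = c) : a = c :=
  p.trans q         -- composing paths end to end
```

Under the homotopy interpretation, `symm` and `trans` are exactly the inverse and concatenation of paths, which is why propositional equality can be read as homotopy.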

Again, from Hartnett:

Voevodsky agrees that the connection is magical, though he sees the significance a little differently. To him, the real potential of type theory informed by homotopy theory is as a new foundation for mathematics that’s uniquely well-suited both to computerized verification and to studying higher-order relationships.

There is a website devoted to homotopy type theory where it is defined as follows:

Homotopy Type Theory refers to a new interpretation of Martin-Löf’s system of intensional, constructive type theory into abstract homotopy theory.  Propositional equality is interpreted as homotopy and type isomorphism as homotopy equivalence. Logical constructions in type theory then correspond to homotopy-invariant constructions on spaces, while theorems and even proofs in the logical system inherit a homotopical meaning.  As the natural logic of homotopy, constructive type theory is also related to higher category theory as it is used e.g. in the notion of a higher topos.

Voevodsky’s work is on a new foundation for mathematics and is also described there:

Univalent Foundations of Mathematics is Vladimir Voevodsky’s new program for a comprehensive, computational foundation for mathematics based on the homotopical interpretation of type theory. The type theoretic univalence axiom relates propositional equality on the universe with homotopy equivalence of small types. The program is currently being implemented with the help of the automated proof assistant Coq.  The Univalent Foundations program is closely tied to homotopy type theory and is being pursued in parallel by many of the same researchers.

In one of his talks, Voevodsky suggested that mathematics as we know it studies structures on homotopy types. And he describes a mathematics so rich in abstract complexity, “it just doesn’t fit in our heads very well. It somehow requires abilities that we don’t possess.” Computer assistance would be expected to facilitate access to these high levels of complexity and abstraction.

But mathematics is, as I see it, the abstract expression of human understanding – the possibilities for thought, for conceptual relationships. So what is it that’s keeping us from being able to manage this level of abstraction? Voevodsky seems to agree that it is comprehension that gives rise to mathematics. He’s quoted in a New Scientist article by Jacob Aron:

If humans do not understand a proof, then it doesn’t count as maths, says Voevodsky. “The future of mathematics is more a spiritual discipline than an applied art. One of the important functions of mathematics is the development of the human mind.”

While Aron seems to suggest that computer companions to mathematicians could potentially know more than the mathematicians they assist, this view is without substance. It is only when the mathematician’s eye discerns something that we call it mathematics.

Mike Shulman has a few posts related to homotopy type theory on The n-Category Café site, beginning with one entitled Homotopy Type Theory, I, followed by II, III, and IV. There’s also one from June 2015 – What’s so HoTT about Formalization?
And here’s a link to Voevodsky’s Univalent Foundations.

Finding hidden structure by way of computers

An article in a recent issue of New Scientist highlights the potential partnership between computers and mathematicians. It begins with an account of the use of computers in a proof that would do little, it seems, to provide greater understanding, or greater insight into the content of the idea the proof explores. The computer program merely exhausts the potential counterexamples of a theorem, thereby proving it true (a task far too impractical to attack by hand). Reviewing this kind of proof, however, requires checking computer code, and this is something that referees in mathematics are not likely to want to do. And so efforts have been made to make the checking easier by employing something called a ‘proof assistant.’ The article doesn’t do much to clarify how the ‘proof assistant’ works, and says just a little about how it makes things easier. But a question that comes to mind quickly for me is whether such a proof could reveal new bridges between different sub-disciplines of mathematics, the way the traditional effort has been known to do.
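The article doesn’t spell out the proof in question, but the shape of such an argument is easy to imitate: pick a theorem, enumerate every candidate counterexample in a finite range, and confirm that none survives. A toy Python version, with Lagrange’s four-square theorem (every non-negative integer is a sum of four squares) standing in for the real, far larger search:

```python
def is_sum_of_four_squares(n: int) -> bool:
    """Check by brute force whether n = a² + b² + c² + d²."""
    limit = int(n ** 0.5)
    squares = [k * k for k in range(limit + 1)]
    square_set = set(squares)
    # n is a sum of four squares iff n - a² - b² - c² is itself a square.
    return any(n - a - b - c in square_set
               for a in squares for b in squares for c in squares)

# Exhaust every candidate counterexample up to 200: none exist,
# consistent with Lagrange's four-square theorem.
counterexamples = [n for n in range(0, 201) if not is_sum_of_four_squares(n)]
print(counterexamples)  # → []
```

The real proofs mentioned in the article search spaces too vast for hand-checking, which is exactly why verifying the search code itself becomes the referee’s burden.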

A discussion of the work of prominent mathematician Vladimir Voevodsky follows. This work takes us back to foundational questions and clearly addresses those bridges. Mathematics is grounded in set theory, but set theory can permit more than one definition of the same mathematical object. Voevodsky decided to address the problem that this creates for computer generated proofs.

…if two computer proofs use different definitions for the same thing, they will be incompatible. “We cannot compare the results, because at the core they are based on two different things,” says Voevodsky.

Voevodsky swaps sets for types, described as “a stricter way of defining mathematical objects in which every concept has exactly one definition.”

This lets mathematicians formulate their ideas with a proof assistant directly, rather than having to translate them later. In 2013 Voevodsky and colleagues published a book explaining the principles behind the new foundations. In a reversal of the norm, they wrote the book with a proof assistant and then “unformalized” it to produce something more human-friendly.

There’s a very well-written description of the history and recent successes of Voevodsky’s work in a Quanta Magazine piece from May 2015. Voevodsky’s new formalism is called the univalent foundation of mathematics. The Quanta article describes how these ideas grew from existing formalisms in reasonable detail. But, what I find most interesting is the surprising consistency among particular ideas from computer science, logic and mathematics.

This consistency and convenience reflects something deeper about the program, said Daniel Grayson, an emeritus professor of mathematics at the University of Illinois at Urbana-Champaign. The strength of univalent foundations lies in the fact that it taps into a previously hidden structure in mathematics.

“What’s appealing and different about [univalent foundations], especially if you start viewing [it] as replacing set theory,” he said, “is that it appears that ideas from topology come into the very foundation of mathematics.”

One of the youngest sub-disciplines finds its way into the foundation, a very appealing and suggestive idea. Finding hidden structure is what always looks magical about mathematics. And it is, fundamentally, what human cognition is all about.

There’s a nice report on one of Voevodsky’s talks in a Scientific American Guest Blog from 2013 by Julie Rehmeyer that includes a video of the talk itself.

This topic requires a closer look, which I expect to do with a follow-up to this post.

Thinking without a brain

Can the presence of intelligent behavior in other creatures (creatures that don’t have a nervous system comparable to ours) tell us something about what ideas are, or how thought fits into nature’s actions? It has always seemed to us humans that our ideas are one of the fruits of what we call our ‘intelligence.’ And the evolutionary history of this intelligence is frequently traced back through the archeological records of our first use of things like tools, ornamentation, or planning. It is often thought that our intelligence is just some twist of nature, something that just happened. But once set in motion, it strengthened our survival prospects, and gave us an odd kind of control of our comfort and well-being. We tend to believe that ‘thoughts’ are a private human experience, not easily lined up with nature’s actions. Thoughts build human cultures, and one of the high points of thought is, of course, mathematics. Remember, it was the scarecrow’s reciting the Pythagorean Theorem that told us he had a brain. Even though he got it wrong.

When an animal is able to learn something and apply that learning to a new circumstance, we generally concede that this too is intelligent behavior. A good deal of research has been done on animals like chimpanzees, other great apes, and dolphins, where the ability to learn symbolic representations or sophisticated communication skills is taken to mark intelligent behavior. But these observations don’t significantly change our sense that intelligence is some quirk of the brain, and that only in humans has this quirk gone through the development that gives birth to ideas and culture, and puts us in our unique evolutionary place.

But when intelligent behavior is observed in a bee, for example, we have to think a little more. The bee’s evolution isn’t particularly related to our own, and its brain is not like ours. More than one million interconnected neurons occupy less than one cubic millimeter of the bee’s brain tissue, a density of neurons about ten times greater than in a mammalian cerebral cortex. Research published in Nature in 2001 is described in a 2008 Scientific American piece by Christof Koch.

The abstract of the Nature paper includes this:

…honeybees can interpolate visual information, exhibit associative recall, categorize visual information, and learn contextual information. Here we show that honeybees can form ‘sameness’ and ‘difference’ concepts. They learn to solve ‘delayed matching-to-sample’ tasks, in which they are required to respond to a matching stimulus, and ‘delayed non-matching-to-sample’ tasks, in which they are required to respond to a different stimulus; they can also transfer the learned rules to new stimuli of the same or a different sensory modality. Thus, not only can bees learn specific objects and their physical parameters, but they can also master abstract inter-relationships, such as sameness and difference.

And Koch makes this observation:

Given all of this ability, why does almost everybody instinctively reject the idea that bees or other insects might be conscious? The trouble is that bees are so different from us and our ilk that our insights fail us.

In 2015, Koch coauthored a paper with Giulio Tononi, the focus of which was consciousness. There they argue:

Indeed, as long as one starts from the brain and asks how it could possibly give rise to experience—in effect trying to ‘distill’ mind out of matter, the problem may be not only hard, but almost impossible to solve. But things may be less hard if one takes the opposite approach: start from consciousness itself, by identifying its essential properties, and then ask what kinds of physical mechanisms could possibly account for them.  (emphasis added)

Potential clues to different kinds of physical mechanisms are described in a very recent Scientific American article that reports on the successful unraveling of the octopus genome.

Among the biggest surprises contained within the genome—eliciting exclamation point–ridden e-mails from cephalopod researchers—is that octopuses possess a large group of familiar genes that are involved in developing a complex neural network and have been found to be enriched in other animals, such as mammals, with substantial processing power. Known as protocadherin genes, they “were previously thought to be expanded only in vertebrates,” says Clifton Ragsdale, an associate professor of neurobiology at the University of Chicago and a co-author of the new paper. Such genes join the list of independently evolved features we share with octopuses—including camera-type eyes (with a lens, iris and retina), closed circulatory systems and large brains.

Having followed such a vastly different evolutionary path to intelligence, however, the octopus nervous system is an especially rich subject for study. “For neurobiologists, it’s intriguing to understand how a completely distinct group has developed big, complex brains,” says Joshua Rosenthal of the University of Puerto Rico’s Institute of Neurobiology. “Now with this paper, we can better understand the molecular underpinnings.”

In 2012, Scientific American reported on the signing of the Cambridge Declaration on Consciousness.

“The weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness,” the scientists wrote. “Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

And from the Declaration:

Furthermore, neural circuits supporting behavioral/electrophysiological states of attentiveness, sleep and decision-making appear to have arisen in evolution as early as the invertebrate radiation, being evident in insects and cephalopod mollusks (e.g., octopus).

Specific mention of the octopus was based on the collection of research that documented their intentional action, their use of tools, and their sophisticated spatial navigation and memory. Christof Koch was one of the presenters of the declaration and was quoted as saying, “The challenge that remains is to understand how the whispering of nerve cells, interconnected by thousands of gossamer threads (their axons), give rise to any one conscious sensation.”

My friend and former agent, Ann Downer, has a new book due out in September with the provocative title, Smart and Spineless: Exploring Invertebrate Intelligence. It was written for young adults and is a wonderful way to correct an old perspective for growing thinkers.

These many insights suggest that what we call intelligence is not something that happens to some living things but is, perhaps, somehow intrinsic to life and manifest in many forms. Koch suggests that we begin a study of consciousness by identifying its essential properties, and mathematics can likely help with this. It does so already with Giulio Tononi’s Integrated Information Theory of Consciousness. But mathematics is a grand-scale investigation of pure thought – of the abstract relationships that are often related to language, learning, and spatial navigation (to name just a few). As a fully abstract investigation of such things, it could help direct the search for the essential properties of awareness and cognition. And the chance that we will find the ubiquitous presence of such properties in the world around us may breathe new life into how we understand nature itself.

Physics, Plato and epistemology

In a recent Scientific American article, the late physicist Victor Stenger and his coauthors James A. Lindsay and Peter Boghossian argue that, while not acknowledged as such, some interpretations of quantum mechanics are implicitly platonic (with a lower-case p).

We will use platonism with a lower-case “p” here to refer to the belief that the objects within the models of theoretical physics constitute elements of reality, but these models are not based on pure thought, which is Platonism with a capital “P,” but fashioned to describe and predict observations.

The authors suggest that while early 20th century physicists like Einstein, Bohr, Schrödinger, Heisenberg, and Born considered the philosophical implications of their discoveries, after World War II, the next generation of scientists judged this effort unproductive. Most physicists, they say, now agree that observation is the only reliable source of knowledge, and that only testable ideas are useful (hence the falling out of favor of string theory). But the authors also argue that this younger generation of physicists “went ahead and adopted philosophical doctrines, or at least spoke in philosophical terms, without admitting it to themselves.” They justify this, in part, with a reference to physicist David Tong who claims in a 2012 Scientific American article that the particles to which experiments refer are illusions.

Physicists routinely teach that the building blocks of nature are discrete particles such as the electron or quark. That is a lie. The building blocks of our theories are not particles but fields: continuous, fluidlike objects spread throughout space.

“This view is explicitly philosophical,” the authors say, “and accepting it uncritically makes for bad philosophical thinking.”

I enjoyed this twist on the partnership of the observable with the abstract – namely, their using the mathematics that captures the data to ‘reveal’ a platonic view. It’s not clear to me that this is a fair characterization of Tong’s observation, but it is an interesting one. The authors do distinguish between realists (those who find the mathematical objects to be representative of reality) and instrumentalists (those who claim that reality just constrains what may be observed, but need not correspond to the mathematical models used), and their critique is mostly aimed at the realists. But the article is largely responding to recent criticisms of philosophy heard from physicists like Lawrence Krauss and Neil deGrasse Tyson. The authors suggest that many physicists have chosen a philosophical perspective, and that there are problems associated with their not acknowledging this.

The direct, platonic, correspondence of physical theories to the nature of reality, as Weinberg, Tong and possibly Krauss have done, is fraught with problems: First, theories are notoriously temporary. We can never know if quantum field theory will not someday be replaced with another more powerful model that makes no mention of fields (or particles, for that matter). Second, as with all physical theories, quantum field theory is a model—a human contrivance. We test our models to find out if they work; but we can never be sure, even for highly predictive models like quantum electrodynamics, to what degree they correspond to “reality.” To claim they do is metaphysics.

I understand the admonition, but here’s the part in which I am most interested:

Many physicists have uncritically adopted platonic realism as their personal interpretation of the meaning of physics. This is not inconsequential because it associates a reality that lies beyond the senses with the cognitive tools humans use to describe observations.

In order to test their models all physicists assume that the elements of these models correspond in some way to reality. But those models are compared with the data that flow from particle detectors on the floors of accelerator labs or at the foci of telescopes (photons are particles, too). It is data—not theory—that decides if a particular model corresponds in some way to reality. If the model fails to fit the data, then it certainly has no connection with reality. If it fits the data, then it likely has some connection. But what is that connection? Models are squiggles on the whiteboards in the theory section of the physics building. Those squiggles are easily erased; the data can’t be.

(emphasis added)

What is the relationship between those squiggles and reality, or even between the data and the mathematics that turns the data into the signature of an event?  These are questions filled with meaning.  And one of the most important points made is this one:

All of the prominent critics of philosophy whose views we have discussed think very deeply about the source of human knowledge.

Physics is as much concerned with how knowledge is acquired as it is with the nature of physical reality. The senses are extended with the use of detectors – mechanical (and sometimes very large) sensory mechanisms that we’ve learned to build. And this sensory data can only be understood when run through analysis programs that are grounded in mathematics, and whose meaning is expressed mathematically. If the detectors extend the senses, perhaps the mathematics extends cognition. That the data can now significantly challenge our conceptual abilities should contribute to both epistemological discussions and physics discussions. And epistemological discussions inevitably lead to questions about cognition.

Certainly one cannot have a productive discussion about the nature of reality without the data that physics provides. And I agree that this limitation does not apply to other areas of philosophy like ethics, aesthetics, and politics. But epistemology is something to which the sciences can make a contribution, and this may very well spring from philosophers of science.

Here’s another point well taken:

…those who have not adopted platonism outright still apply epistemological thinking in their pronouncements when they assert that observation is our only source of knowledge.

Mathematics consistently raises the question, “what does it mean to know something?” A teacher of mine once lamented that we can’t allow children to rediscover mathematics because there isn’t enough time; too much is now known. What is it that’s known? The partnership, in physics, of mathematics with observables that lie beyond the range of the senses should fuel epistemological discussions, and not only ones inspired by mathematics and physics, but ones that could also inform them.

There is some interest among physicists in current research in cognitive science. Cosmologist Max Tegmark, for example, has taken an interest in Giulio Tononi’s integrated information theory of consciousness. In a recent TED talk, I believe Tegmark proposed that the only difference between a structure that exists mathematically and one that also exists physically is how the information is instantiated. This is consistent with something that David Deutsch once said – that the brain faithfully embodies the mathematical relationships and causal structure of things like quasars, and does so more and more precisely over time. He made the following observation about brains and quasars:

Physical objects that are as unlike each other as they could possibly be can, nevertheless, embody the same mathematical and causal structure and do it more and more so over time.

These are thoughts that touch equally on epistemology and the nature of reality.

Computations can be very natural

A recent post on Mind Hacks challenged the perspective outlined in a NY Times op-ed by psychologist Gary Marcus with the title Face It, Your Brain Is a Computer.  The title of Marcus’ piece may be misleading. The brain/computer analogy that he proposes is more a strategy than a theory. But the rejection of brain/computer analogies seems almost reflexive (as one can see from the comments on the Mind Hacks post).  They can be quickly judged wrong, devoid of the vitality and creativity of life, and misguidedly materialistic.  Information processing ideas, however, are increasingly present in the physical and life sciences and do not have the character of reductionist thinking.

Marcus describes his strategy as having two steps:

finding some way to connect the scientific language of neurons and the scientific language of computational primitives (which would be comparable in computer science to connecting the physics of electrons and the workings of microprocessors);

and finding some way to connect the scientific language of computational primitives and that of human behavior (which would be comparable to understanding how computer programs are built out of more basic microprocessor instructions).

The computational primitives of electronic systems include their instructions (like add, branch, plot) and actions related to the collection and storage of data (like fetch and store or compare and swap). An example of the application of these ideas to an analysis of brain function can be seen in some of the work of L. Andrew Coward whose recent papers can be found here.  This kind of research is motivated, in part, by the needs of artificial intelligence designers. Conference proceedings from the 5th Annual International Conference on Biologically Inspired Cognitive Architectures make a number of related papers available here.
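To make “computational primitives” concrete, here is a toy sketch of my own (an illustration, not Coward’s model or any real architecture): a minimal machine whose only primitives are fetch, store, add, and a conditional branch, out of which higher-level behavior is composed.

```python
def run(program, memory):
    """A toy machine built from a handful of computational primitives:
    fetch/store move data, add combines it, branch_if_zero controls flow."""
    acc = 0  # a single accumulator register
    pc = 0   # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "fetch":
            acc = memory[arg]
        elif op == "store":
            memory[arg] = acc
        elif op == "add":
            acc += memory[arg]
        elif op == "branch_if_zero":
            pc = arg if acc == 0 else pc + 1
            continue
        pc += 1
    return memory

# Composing primitives into behavior: out = x + y
mem = {"x": 2, "y": 3, "out": 0}
run([("fetch", "x"), ("add", "y"), ("store", "out")], mem)
print(mem["out"])  # 5
```

Marcus’ second step is the analogue of this composition: explaining how behavior is built from whatever the neural primitives turn out to be.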

The critique of this perspective that is described on Mind Hacks points to an interesting question:

The idea that the mind and brain can be described in terms of information processing is the main contention of cognitive science but this raises a key but little asked question – is the brain a computer or is computation just a convenient way of describing its function?

Here’s an example if the distinction isn’t clear. If you throw a stone you can describe its trajectory using calculus. Here we could ask a similar question: is the stone ‘computing’ the answer to a calculus equation that describes its flight, or is calculus just a convenient way of describing its trajectory?

After a few other objections to the brain/computer analogy, the point is made that “the concept of computation is a tool.” But there is no acknowledgment of the growing perspective that sees ‘information’ as a fundamental aspect of everything, despite the fact that this perspective introduces a number of ideas relevant to understanding the brain, and on multiple levels.

Physicist David Deutsch, for example, is currently involved in what he has named constructor theory. I wrote about this work in a Scientific American guest blog.  Constructor theory is meant to get at what Deutsch calls the “substrate independence of information,” which necessarily involves a more fundamental level of physics than particles, waves and space-time.  For Deutsch, information is physical, instantiated in various forms, and transformed by various processes.  And he suspects that this ‘more fundamental level’ may be shared by all physical systems.

Leibniz disassociated ‘substance’ from ‘material’ and reasoned that the world was not fundamentally built from material. He argued that fundamentals must be indivisible, and material is not.  In The Lightness of Being, physicist Frank Wilczek describes the debate about fundamentals in this way:

Philosophical realists claim that matter is primary, brains (minds) are made from matter, and concepts emerge from brains. Idealists claim that concepts are primary, minds are conceptual machines and conceptual machines create matter.

It would seem from this that, whichever direction one chooses, what has been learned from the development of computer hardware and software, and the ideas associated with what we call ‘computation,’ are helping to direct and inform research. And while the realist view looks reductionist, the idealist view is certainly not.

In a Closer to Truth interview, Gregory Chaitin responds to the question, “is information fundamental?” He admits that the inspiration for these ideas may be the computer, but a theory, itself, can be thought of as a computation: the theory is the input and the physical universe is the output. A theory is good when it is a compression, when what you put into the computer is simpler or smaller than what you get out. It’s then that you understand, and that understanding can be mathematical or physical.
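Chaitin’s point can be illustrated with off-the-shelf compression (a loose stand-in for his algorithmic information theory, which properly measures program length on a universal machine): data produced by a simple rule admits a description much shorter than itself, while random data does not.

```python
import os
import zlib

lawful = ("ab" * 5000).encode()   # 10,000 bytes produced by a short rule
lawless = os.urandom(10000)       # 10,000 bytes with no rule behind them

# A good "theory" exists exactly when a description much shorter than
# the data can reproduce the data.
print(len(lawful), len(zlib.compress(lawful)))    # huge data, tiny description
print(len(lawless), len(zlib.compress(lawless)))  # no compression, no theory
```

In Chaitin’s terms, the lawful string is understandable and the random one is not; the compressed description plays the role of the theory.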

Virginia Chaitin has written on what she calls interdisciplinarity, where “the original frameworks, research methods and epistemic goals of individual disciplines are combined and recreated yielding novel and unexpected prospects for knowledge and understanding.” This kind of paradigm-shifting interdisciplinary effort involves adopting a new conceptual framework, borrowing the very way that understanding is defined within a particular discipline, as well as the way it is explored and the way it is expressed. The results, as she says, are the “migrations of entire conceptual neighborhoods that create a new vocabulary.” Perhaps the strategy that Marcus proposes can be seen in this light.

The growing interest in information-driven worlds is evident in a conference being organized in the Netherlands (October 7–9). It has been named ‘The Information Universe’ Conference, and its welcome page says the following:

The main ambition of this conference is to explore the question “What is the role of information in the physics of our Universe?”. This intellectual pursuit may have a key role in improving our understanding of the Universe at a time when we “build technology to acquire and manage Big Data”, “discover highly organized information systems in nature” and “attempt to solve outstanding issues on the role of information in physics”. The conference intends to address the “in vivo” (role of information in nature) and “in vitro” (theory and models) aspects of the Information Universe.

The discussions about the role of information will include the views and thoughts of several disciplines: astronomy, physics, computer science, mathematics, life sciences, quantum computing, and neuroscience. Different scientific communities hold various and sometimes distinct formulations of the role of information in the Universe indicating we still lack understanding of its intrinsic nature. During this conference we will try to identify the right questions, which may lead us towards an answer.

Ideas related to information and information processing are enjoying wide application. And the objections to using computational models to understand the brain are often grounded in the kind of reductionist view that, I would argue, is outdated and fading from current research efforts. It betrays a mistaken view of what mathematics can offer to theories in physics, cognitive science, consciousness studies, evolution, and epistemology. The current inclination to broaden the meaning of information, and its associated processes, has the potential to shed new light on what the brain might actually be doing, or what its place in nature might be.

M.C. Escher’s visual inquiries

The Amazing World of M.C. Escher is a new exhibit at the Scottish National Gallery of Modern Art in Edinburgh, where it runs from June 27 to September 27. The exhibit prompted a nice piece on Escher in The Guardian. Its author, Steven Poole, mentions, but does not much explore, the relationship between Escher’s work and the work of mathematicians. But just a little research on the topic suggests that there may be quite a lot to say about this. An article on Escher in a 2010 issue of the Notices of the American Mathematical Society, by mathematician Doris Schattschneider, gives a fairly thorough account of the extent to which some of Escher’s work qualifies as mathematical research.

Many prints provide visual metaphors for abstract mathematical concepts; in particular, Escher was obsessed with the depiction of infinity. His work has sparked investigations by scientists and mathematicians. But most surprising of all, for several years Escher carried out his own mathematical research, some of which anticipated later discoveries by mathematicians.

Escher grew up in Holland in the early 20th century. His father was a civil engineer and his four older brothers eventually became scientists. Yet, Schattschneider reports, Escher said of himself that he was an “extremely poor” student of arithmetic and algebra, having “great difficulty with the abstractions of figures and letters.” He did a little better in geometry, but it was not a subject in which he excelled. He did, however, excel as a draftsman. And it was drawing that became both his method of inquiry and his expression of discovery.

The questions he asked seemed to be about the essential qualities or properties of images – how a flat surface is made three dimensional, mistakes of perspective, impossible perspectives, filling the plane or tiling. Some aspect of his exploration involved perception itself, but his inquiry is also seen as one into the geometries of space. It was this visual exploration that brought him to the heart of some mathematical questions. And this is what I find most interesting, that it would all be done in the seeing and the drawing.  The mathematics that Escher explored was known to him only in action, and expressed only in images.

Escher chose a career in graphic art over architecture and was successful.  He illustrated books, designed tapestry, and painted murals, while being, primarily, a printmaker. His subjects were often landscapes, buildings, or room interiors, within which he might explore the spatial effects of different, sometimes conflicting, vantage points.  And there are early examples of his interest in the effect of filling the plane with interlocking, representative shapes.

But in 1936 he visited the Alhambra and, as Poole says in The Guardian, “Escher really became Escher.”

That year he went to the Alhambra Palace in Granada, Spain, and carefully copied some of its geometric tiling. His work gradually became less observational and more formally inventive. As Escher later explained, it also helped that the architecture and landscape of his successive homes in Switzerland, Belgium and Netherlands were so boring: he “felt compelled to withdraw from the more or less direct and true-to-life illustrating of my surroundings”, embracing what he called his “inner visions.”

Escher became engaged in a series of thoughtful questions about shapes, tiling, and filling the plane, which he methodically explored entirely within the visual images he produced. He created illusions, impossible figures, and a variety of regular divisions of the Euclidean plane into potentially infinite groupings of creatures like fish, reptiles, or birds. In his experience, fundamental human attributes (the perception of depth and space), the perspective achieved in drawing, and non-visual ideas like infinity became very naturally bridged.

An exhibit of Escher’s work was viewed by mathematicians attending the International Congress of Mathematicians in Amsterdam in 1954. The exhibit was arranged by mathematician N. G. de Bruijn. Schattschneider reproduced the catalog note from de Bruijn:

Probably mathematicians will not only be interested in the geometrical motifs; the same playfulness which constantly appears in mathematics in general and which, to a great many mathematicians is the peculiar charm of their subject, will be a more important element.  (emphasis added)

The exhibition led to a correspondence between Escher and the geometer H. S. M. Coxeter. Escher became preoccupied with Coxeter’s illustration of a hyperbolic plane, and while remaining blind to the mathematical content of the illustration (which Coxeter provided), he nonetheless succeeded in finding his own direct understanding. Mathematician Thomas Wieting presents a nice account of this exchange in a 2010 issue of Reed Magazine. And Wieting includes part of a letter Escher wrote to his son that expresses both his determination and his loneliness.

My great enthusiasm for this sort of picture and my tenacity in pursuing the study will perhaps lead to a satisfactory solution in the end. Although Coxeter could help me by saying just one word, I prefer to find it myself for the time being, also because I am so often at cross purposes with those theoretical mathematicians, on a variety of points. In addition, it seems to be very difficult for Coxeter to write intelligibly for a layman. Finally, no matter how difficult it is, I feel all the more satisfaction from solving a problem like this in my own bumbling fashion. But the sad and frustrating fact remains that these days I’m starting to speak a language which is understood by very few people. It makes me feel increasingly lonely. After all, I no longer belong anywhere. The mathematicians may be friendly and interested and give me a fatherly pat on the back, but in the end I am only a bungler to them. “Artistic” people mainly become irritated.

Wieting continues:

Escher’s enthusiasm and tenacity did indeed prove sufficient. Somehow, dur­ing the following months, he taught himself, in terms of the straightedge and the compass, to construct not only Coxeter’s figure but at least one variation of it as well. In March 1959, he completed the second of the woodcuts in his Circle Limit Series.

It’s worth noting that, in his own articles, Coxeter provided mathematical analyses of Escher’s work and pointed out that Escher had anticipated some of his own discoveries.

Wieting concludes:

Seeking a new visual logic by which to “capture infinity,” Escher stepped, without foreknowledge, from the Euclidean plane to the hyperbolic plane. Of the former, he was the master; in the latter, a novice. Nevertheless, his acquired insights yielded two among his most interesting works: CLIII, The Miraculous Draught of Fishes, and CLIV, Angels and Devils.

Wieting also gives us this, Escher’s own description of the success of CLIII.  I placed emphasis on his reference to this “round world,” and “the emptiness around it,” because it reveals something of the depth of the meaningfulness born of this math-art-philosophy piece.

In the colored woodcut Circle Limit III the shortcomings of Circle Limit I are largely eliminated. We now have none but “through traffic” series, and all the fish belonging to one series have the same color and swim after each other head to tail along a circular route from edge to edge. The nearer they get to the center the larger they become. Four colors are needed so that each row can be in complete contrast to its surroundings. As all these strings of fish shoot up like rockets from the infinite distance at right angles from the boundary and fall back again whence they came, not one single component reaches the edge. For beyond that there is “absolute nothingness.” And yet this round world cannot exist without the emptiness around it, not simply because “within” presupposes “without,” but also because it is out there in the “nothingness” that the center points of the arcs that go to build up the framework are fixed with such geometric exactitude.

Collective behavior: flocks, magnets, neurons and mathematics

The analysis of collective behavior is quickly becoming cross-disciplinary. I wrote a few years ago about a study that analyzed the coordination of starling flocks. That post was based on the work of Thierry Mora and William Bialek, presented in their paper Are Biological Systems Poised at Criticality?, published in the Journal of Statistical Physics in 2011.

The mathematics of critical transitions describes systems that reach a state from which they are almost instantly transformed. Such a transformation could be liquid turning to gas or metals becoming magnetized. The authors of this paper found that the birds in a flock were connected in such a way that a flock turning in unison could be described, mathematically, as a phase transition.

In the past few years, new, larger scale experiments have made it possible to construct statistical mechanics models of biological systems directly from real data. We review the surprising successes of this “inverse” approach, using examples from families of proteins, networks of neurons, and flocks of birds. Remarkably, in all these cases the models that emerge from the data are poised at a very special point in their parameter space–a critical point. This suggests there may be some deeper theoretical principle behind the behavior of these diverse systems.

I hope that the last point, to which I added emphasis, will grow in relevance.
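The “special point in parameter space” the authors describe is easiest to see in the textbook case of magnetization. Below is a minimal Metropolis simulation of the 2D Ising model (my own sketch, not the inverse method of the paper): below the critical temperature the spins order spontaneously, and above it the order dissolves.

```python
import math
import random

def ising_magnetization(T, L=16, sweeps=2000, seed=1):
    """Metropolis simulation of the 2D Ising model (J = 1, k_B = 1).
    Returns |magnetization| per spin of the final configuration."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]  # start fully ordered
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # sum of the four nearest neighbours (periodic boundaries)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] *= -1
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)

print(ising_magnetization(1.5))  # well below T_c (about 2.27): ordered, near 1
print(ising_magnetization(3.5))  # well above T_c: disordered, near 0
```

Near the critical temperature itself, fluctuations occur on all scales at once, and that is the regime the paper argues biological systems may be tuned to.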

Also in 2011, Mora and Bialek were among the seven coauthors of the paper Statistical Mechanics for Natural Flocks of Birds. This study focused on the alignment of flight direction in a flock.

Rather than affecting every other flock member, orientation changes caused only a bird’s seven closest neighbors to alter their flight. That number stayed consistent regardless of flock density, making the equations “topological” rather than critical in nature.

“The orientations are not at a critical point,” said Giardina. Even without criticality, however, changes rippled quickly through flocks — from one starling to seven neighbors, each of which affected seven more neighbors, and so on.

The closest statistical fit for this behavior comes from the physics of magnetism, and describes how the electron spins of particles align with their neighbors as metals become magnetized.

The paper’s abstract tells us that these models are mathematically equivalent to the Heisenberg model of magnetism.
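The flock-level analogue of magnetization is the polarization: the length of the average flight-direction unit vector, which is near 1 when the flock is aligned and near 0 when headings are random. Here is a hedged sketch of my own (not the paper's code) computing it for two simulated flocks:

```python
import math
import random

def polarization(headings):
    """Length of the mean unit vector for a list of headings (radians)."""
    n = len(headings)
    cx = sum(math.cos(a) for a in headings) / n
    cy = sum(math.sin(a) for a in headings) / n
    return math.hypot(cx, cy)

random.seed(1)
# Nearly aligned flock: small random spread about a common heading.
aligned = [random.gauss(0.0, 0.1) for _ in range(1000)]
# Disordered flock: headings drawn uniformly from all directions.
disordered = [random.uniform(-math.pi, math.pi) for _ in range(1000)]

print(polarization(aligned))     # close to 1
print(polarization(disordered))  # close to 0
```

In the magnetic analogy, the polarization plays exactly the role of the magnetization: a single number that measures how much collective order the interactions have produced.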

An interesting observation here is that the interaction among birds is defined by a number of neighboring birds, not by the number of birds in a neighboring area. In other words, if a metric distance governed their interaction, then in a denser flock the number of birds neighboring an individual would increase, and so the number of interacting birds would also increase. But this seems not to be the case: the number of interacting birds is the quantity that stays constant. There is a very nice description of how this observation came about, and what it might mean, here. From the paper:

The collective behaviour of large groups of animals is an imposing natural phenomenon, very hard to cast into a systematic theory [1]. Physicists have long hoped that such collective behaviours in biological systems could be understood in the same way as we understand collective behaviour in physics, where statistical mechanics provides a bridge between microscopic rules and macroscopic phenomena [2, 3]. A natural test case for this approach is the emergence of order in a flock of birds: out of a network of distributed interactions among the individuals, the entire flock spontaneously chooses a unique direction in which to fly [4], much as local interactions among individual spins in a ferromagnet lead to a spontaneous magnetization of the system as a whole [5]. Despite detailed development of these ideas [6–9], there still is a gap between theory and experiment. Here we show how to bridge this gap, by constructing a maximum entropy model [10] based on field data of large flocks of starlings [11–13]. We use this framework to show that the effective interactions among birds are local, and that the number of interacting neighbors is independent of flock density, confirming that interactions are ruled by topological rather than metric distance.
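The distinction between metric and topological neighborhoods can be illustrated with a toy simulation of my own (not the authors' code): quadrupling the density of a random flock roughly quadruples the number of birds inside a fixed radius, while the seven nearest neighbors are, by definition, always seven.

```python
import math
import random

random.seed(0)

def flock(n, box):
    """n birds placed uniformly at random in a box-by-box square."""
    return [(random.uniform(0, box), random.uniform(0, box)) for _ in range(n)]

def neighbors_within(birds, i, radius):
    """Metric rule: count birds within a fixed distance of bird i."""
    xi, yi = birds[i]
    return sum(1 for j, (x, y) in enumerate(birds)
               if j != i and math.hypot(x - xi, y - yi) <= radius)

def k_nearest(birds, i, k=7):
    """Topological rule: the k nearest birds, whatever the density."""
    xi, yi = birds[i]
    dists = sorted(math.hypot(x - xi, y - yi)
                   for j, (x, y) in enumerate(birds) if j != i)
    return dists[:k]

sparse = flock(200, box=100)
dense = flock(800, box=100)  # four times the density

# Average metric-neighbor count grows with density...
avg = lambda b, r: sum(neighbors_within(b, i, r) for i in range(len(b))) / len(b)
print("metric, sparse:", avg(sparse, 10))
print("metric, dense: ", avg(dense, 10))
# ...but the topological count is fixed at 7 by construction.
print("topological neighbors:", len(k_nearest(dense, 0)))
```

The paper's finding is that real starlings behave like the second rule: the number of interacting neighbors, not the interaction radius, is what stays fixed as the flock compresses and expands.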

In a synopsis of a more recent paper, Michael Schirber explains a new refinement in the study of flocks.

Andrea Cavagna from the National Research Council (CNR) in Rome, Italy, and his colleagues have now explored this spin wave model in the continuous limit, where the birds can be thought of as fluid elements in a large hydrodynamic system. Both spin waves and density waves can occur, but in some cases they damp out before traveling very far. The researchers show that only spin waves propagate in small flocks, whereas density waves dominate for large flocks. In the intermediate region, no waves can propagate, which would make flocks of this size unsustainable. The results may have implications for other animal groups, such as fish schools and mammal herds.

In a New Scientist piece on the same study this point is made:

“I think it is interesting because it identifies purely physical mechanisms for the propagation of information across the flock,” says Cristina Marchetti of Syracuse University in New York. “More importantly, it imposes constraints on such a propagation, which imply constraints on the size of the flock.”

The theme that runs through all of these studies is the recognition that the behavior of many kinds of systems (physical, behavioral, biological) can look the same when viewed from certain perspectives. This ‘sameness’ is most often brought to light with mathematics. There are many things suggested by this, about nature and about mathematics. But today my inclination is to say this: mathematics itself looks like one of the many faces of nature when we imagine it as an evolving organization of things related to each other (in our own heads, if you will). Like the biological systems that produce organisms, or the matter and energy systems that produce galaxies, mathematics produces something. Our conscious experience of mathematics, denoted and investigated by mathematicians, is as difficult to pin to the physical as consciousness itself. But mathematics, like delicate new tissue that runs through us and around us, consistently provides the mechanism for seeing and understanding. And so what we call ‘seeing’ and ‘understanding’ must also be some aspect of nature organizing itself.

Here are a couple more links:

Wired 11.08.11

Wired 03.13.12