I read a short article on scientificamerican.com reporting on a recent advance in the investigation of the neural systems that support navigation, or our sense of direction. When I did some follow-up on the individual who led the study, I was surprised to find another interesting collaboration between scientists and artists. While the collaboration was centered on inquiries into perception, memory, and space, it touched on things related to mathematics – at least in its discussions of space, dimension and direction. Both the study and the collaboration make some interesting points. I’ll start with the study.
It was led by Hugo Spiers of University College London. Spiers found something new in the action of head-direction cells – neural cells that fire when we face a certain direction. These cells have been known to play a role in our ability to navigate through our environment, working with place cells in the hippocampus (which establish our memory of specific locations and a kind of map of the environment) and grid cells in the adjacent entorhinal cortex (which somehow map where we are relative to where we have just been). What the researchers were able to observe was that head-direction cells also fired in response to the direction we wanted to go.
The entorhinal region displayed a distinct pattern of activity when volunteers faced each direction—consistent with how head-direction cells should behave. The researchers discovered, however, that the same pattern appeared whether the volunteers were facing a specific direction or just thinking about it. The finding suggests that the same mechanism that signals head direction also simulates goal direction.
It might help to describe the whole system as it is currently understood. Spiers and co-author Caswell Barry provide a nice description of the interaction of the cells that function in navigation in a recent paper.
Electrophysiological investigations have revealed several distinct neural representations of self-location (see Figure 1). Briefly, place cells found in hippocampal regions CA3 and CA1 signal the animal’s presence in particular regions of space, the cells’ place fields (Figure 1a). Place fields are broadly stable between visits to familiar locations but remap whenever a novel environment is encountered, quickly forming a new and distinct representation. Grid cells, identified in entorhinal cortex, and subsequently in the pre-subiculum and para-subiculum, also signal self-location but do so with multiple receptive fields distributed in a striking hexagonal array (Figure 1b). Head direction cells, found throughout the limbic system, provide a complementary representation, signalling facing direction, with each cell responding only when the animal’s head is within a narrow range of orientations in the horizontal plane (Figure 1c). Other similar cell types are also known, for example border cells, which signal proximity to environmental boundaries, and conjunctive grid cells, which respond to both position and facing direction. It is likely that these spatial representations are a common feature of the mammalian brain; at the very least, grid cells and place cells have been found in animals as diverse as bats, humans, and rodents.
What first struck me about the work reported in the Scientific American piece was that this navigation system, which looks fairly mechanical, has at least one more layer – one that equates the direction faced with one’s intent to face it. The head-direction cells respond to a direction even when the head itself is not pointed that way. From the paper on which the Scientific American piece was based:
In summary, we show that the human entorhinal/subicular region supports a neural representation of geocentric goal direction. We further show that goal direction shares a common neural representation with facing direction. This suggests that head-direction populations within the entorhinal/subicular region are recruited for the simulation of the direction to future goals. These results not only provide the first evidence for the presence of goal direction representations within the mammalian brain but also suggest a specific mechanism for the computation of this neural signal, based on simulation.
When I looked further into Spiers’s research, I found links on his University College London website that provided information on work associated with art and architecture, including his collaboration with artist Antoni Malinowski. In an interview that Spiers conducted with Malinowski, Malinowski talked about his own work, distinguishing it from the work of architects. Architects, he said, deal with space diagrammatically. In contrast, he explained, he dealt with space in a reduced way. His subject is the interaction of dimensions – the three and four of space and time and the two of a flat surface. He proposed that dimensions are foldable and that when he worked, he folded four dimensions into two with brushstroke and paint. These are then ‘unfolded’ in the viewing. This sounds like an inquiry, an investigation of the nature and perception of dimension.
Malinowski describes how he works:
I create a situation where you do not know where you are, and you don’t know what it is. So you have to make an effort. I want to take you to a mental area. And in order to do so I have all those tools, which are colour, rather delicious, and wonderful. So you are drawn into them. And I construct it in such a way that you want to go there.
So as viewer you notice something and you go off… But it is all done in a language of painting; it is not really definable.
A review of his work by Mark Rappolt says this:
[As] his work escapes the canvas to cover a building’s walls, Malinowski exploits architecture not as a singular fixed entity, but as a plurality of possible worlds, as an illusory reality, a space of shifting sand. Perhaps in doing this he comes closer than many architects to an understanding of what space really is. (emphasis added)
Malinowski is playing with perception and orientation, perhaps to reveal something about it. His work seems to surprise the viewer, but it’s telling us something about ourselves and how we make things sensible, something we can’t see in our day-to-day experience. Looking at the development of mathematics from its more familiar, more physical roots to its strange and powerful abstractions can do something similar. The investigation of what one means by ‘space’ in mathematics (Euclidean and non-Euclidean, the manifold, topological spaces, parameter spaces, etc.) has produced some of its most effective applications. Mathematics contains more than one definition of dimension, each of which produces its own results. And the vector, the mathematical description of direction, finds its way into the geometry of relativity, the phase evolution of a wave, the calculation of probabilities, and the spin of fundamental particles, to name just a few. It seems clear to me that mathematics is a very thorough investigation of experience even as it becomes dissociated from it. The work of building mathematics is much larger, intergenerational, shared, and more universal than Malinowski’s individual investigation of perception and orientation, but I find in his a similar inclination to pry open familiar experience to find something new.
A short article in the April 16 issue of New Scientist reported on an Applied Soft Computing paper that proposes an improvement on what’s known as ‘particle swarm optimization’ (PSO).
Particle swarm optimization is an optimization technique inspired by the social behavior of birds. Described as a simple and powerful algorithm, it can be used to optimize high-dimensional functions (in other words, to find the maximums and minimums of functions with many parameters). There is quite a bit of information on the website Code Project. There they explain:
To understand the algorithm, it is best to imagine a swarm of birds that are searching for food in a defined area – there is only one piece of food in this area. Initially, the birds don’t know where the food is, but they know at each time how far the food is. Which strategy will the birds follow? Well, each bird will follow the one that is nearest to the food.
PSO adapts this behavior of birds searching for food to the search for the best solution vector in a search space. Each particle represents a single candidate solution. The algorithm defines a measure of solution quality and begins with particles at random positions. Over some number of iterations, individual particles adjust their velocity and position as they follow the best-performing particles.
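To make the mechanics concrete, here is a minimal PSO sketch in Python. It is my own illustration, not code from the Applied Soft Computing paper or from Code Project, and the coefficients (inertia w, attraction strengths c1 and c2) are common textbook choices rather than anything the articles specify.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim with a basic particle swarm."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position so far
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # the swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia, plus a pull toward the particle's own best,
                # plus a pull toward the swarm's best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                # update the bests when improved
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize a simple "bowl" in three parameters; the swarm should settle near 0.
random.seed(1)
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
print(val < 1e-3)
```

The velocity update is the whole algorithm: each particle keeps some momentum, leans toward the best point it has personally found, and leans toward the best point anyone in the swarm has found.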
The New Scientist article gives a more general description of this approach along with one of its limitations:
One way they can do this is by using groups of virtual creatures that wander through “parameter space”, looking for valleys that represent the lowest values. Mathematicians have taken inspiration from actual animals, from grey wolves to ants. One limitation, though, is that the animals sometimes fail to notice a deeper valley nearby.
The suggested improvement is to add parasites to the mix:
In their model, a swarm of animals searched for the lowest valleys, but was then joined by a second, parasitic population. This group searched for valleys, but also abducted the most successful animals and made them work for the parasite team.
The struggle resulted in a more varied collection of creatures allowing the parasitic algorithm to solve the problem twice as fast.
I thought about what this kind of thing could mean about the mathematics itself. Why would there be any relationship between a bird’s search for food and our interest in optimization solutions? We’re not just modeling the bird’s behavior, we’re using the bird’s behavior to solve our own problems. There is here an unexpected overlap between two kinds of inquiries. And this word, I think, is key – inquiry.
There is still some debate among cognitive scientists about whether our more primal experience of quantity is discrete, like the numbers that we count with, or continuous, like our sense of time. If, as many cognitive scientists argue, our first sense of quantity is continuous (like the real numbers) and, if it is true that numbers followed language, then the 19th-century struggle to understand and define the continuum (represented by the real number line) can look like an investigation of number, an inquiry back into number’s source. And once I begin to think in terms of inquiries, I see them everywhere. Visual art is an inquiry into visual sensation. This is a view consistently presented by neuroscientist Semir Zeki. Mathematics is an inquiry into sensation as well as abstract relationship itself (logical, numerical, geometric, probabilistic, etc.). The nature of these inquiries is, perhaps, a pure exploration of living interactions – the eye and light, the relationships that produce comprehension, movement and space.
The search for food is certainly an inquiry, as is swarming in the more general sense. I would include my own earlier discussion of a plant’s calculation of the rate with which it will consume its stored food. Perhaps evolution itself is an inquiry into life’s possibilities.
Paperback and electronic versions of John Horgan’s 1996 book, The End of Science, have recently been published by Basic Books. Horgan wrote a bit about how the text was received in 1996 on his weekly Scientific American blog. I read the book in 1996 and wrote to Horgan about the impact it had on me. At the time, I was working to better understand my own fascination with mathematics which, while it is the thing that has brought meaning to centuries of empirical efforts, rarely comes up in popular discussion of science in general or cosmology in particular. In my letter to Horgan I said this:
There is, no doubt, a limit to the kind of empiricism we have employed this last century. But, I think we have yet to understand something about what it is that we have accomplished. I agree with David Bohm that science is essentially some extension of perception. But there is a mistake embedded in our notion of objectification and I think I have become involved in wanting to somehow dislodge it.
Studying mathematics, I told him, had had the effect of putting me in my place, making me careful not to believe myself too much or too easily, because I had seen something extraordinary at work. There would always be something just slightly out of my reach.
The role that mathematics plays in scientific thinking is, I believe, still largely underestimated. Horgan doesn’t expect major revisions in our current maps of reality, nor “insights into nature as cataclysmic as heliocentrism, evolution, quantum mechanics, relativity…” But mathematics has the potential to produce insights into the nature of science itself, to show us something about how we are extending perception, and what this might mean about what we are able to see. I often expect that reorienting ourselves within what we seem to know can produce profound changes in our current maps of reality.
By way of example, I can point back to my last post which describes Virginia Chaitin’s notion of interdisciplinarity where, she explains, “frameworks, research methods and epistemic goals of individual disciplines are combined and recreated yielding novel and unexpected prospects for knowledge and understanding.”
She uses Gregory Chaitin’s work in metabiology (a mathematical biology) to illustrate the value of this kind of effort and demonstrates along the way how mathematics contributes to the creation of “a brand-new and more generous conceptual framework for the human being, which now evolves around the idea of a life-form motivated by a non-mechanical, lawless, subjective creativity instead of a life-form driven by a predetermined “winner or loser” survival dichotomy.” This can have major implications for how we view evolution in general and human evolution in particular.
Horgan provides links to some of his earlier pieces for further reading. One of them is an interview with Edward Witten called, Physics Titan Edward Witten Still Thinks String Theory “on the Right Track.” String theory is essentially mathematical in character and has been criticized for its lack of testability. Horgan excerpted from his 1996 publication:
I asked Witten how he responded to the claims of critics that superstring theory is not testable and therefore is not really physics at all. Witten replied that the theory had predicted gravity. “Even though it is, properly speaking, a post-prediction, in the sense that the experiment was made before the theory, the fact that gravity is a consequence of string theory, to me, is one of the greatest theoretical insights ever.”
He acknowledged, even emphasized, that no one has truly fathomed the theory, and that it might be decades before it yielded a precise description of nature. He would not predict, as others had, that string theory might bring about the end of physics. Nevertheless, he was serenely confident that it would eventually yield a profound new understanding of reality. “Good wrong ideas are extremely scarce,” he said, “and good wrong ideas that even remotely rival the majesty of string theory have never been seen.” When I continued to press Witten on the issue of testability, he grew exasperated. “I don’t think I’ve succeeded in conveying to you its wonder, its incredible consistency, remarkable elegance and beauty.” In other words, superstring theory is too beautiful to be wrong.
Then from his more recent interview:
Horgan: When I interviewed you in 1991, you said that “good wrong ideas that even remotely rival the majesty of string theory have never been seen.” Are you still confident that string theory (or its descendant, M theory) will turn out to be “right”?
Witten: I think I will stick with what I said in 1991. Since then, we have lived through the second superstring revolution and many surprising developments in which string theory has been used to get a better understanding of conventional theories in physics (and math). All this makes most sense if one assumes that what we are doing is on the right track.
Another link takes us to his tribute to biologist Lynn Margulis who, Horgan writes,
…challenged what she called “ultra-Darwinian orthodoxy” with several ideas. The first, and most successful, is the concept of symbiosis. Darwin and his heirs had always emphasized the role that competition between individuals and species played in evolution. In the 1960s, however, Margulis began arguing that symbiosis had been an equally important factor–and perhaps more important–in the evolution of life.
I include this reference only because I enjoyed that Horgan championed someone who also challenged mainstream Darwinian thinking.
I have little doubt that mathematics will break some of our habits of thought by showing us something about how we build conceptual structures, or the nature of what we have come to call empiricism. Perhaps it can even shed some light on the relationship between mind and matter. I should add that I did thoroughly enjoy The End of Science when I read it. As was the case then, John Horgan seems to always provide me with the support I need for arguing with him.
A paper by Virginia Chaitin was recently brought to my attention. She is currently a post-doc at the Universidade Federal do Rio de Janeiro, with research interests that include, among others, the history and philosophy of science, epistemology, and interdisciplinarity. The paper I just read, Metabiology, Interdisciplinarity and the Human Self-Image, focuses on the kind of interdisciplinarity that characterizes the development of metabiology in particular, and the impact that this new conceptual hybrid has on the broader questions of what life is, or what makes us human.
I’ve written a few times on Gregory Chaitin’s metabiology, with an eye toward its implications for mathematics as well as for evolution. His new take on evolution supports the arguments that I make here, on Mathematics Rising, about the nature of mathematics, and mathematics’ potential. The views I regularly express are grounded in my optimism that mathematics can play a significant role in breaking some of our habits of thought, on a large scale – an interdisciplinary scale. Virginia’s paper is this kind of argument, but she is making it more precisely:
…an epistemically fertile interdisciplinary area of study is one in which the original frameworks, research methods and epistemic goals of individual disciplines are combined and recreated yielding novel and unexpected prospects for knowledge and understanding. This is where interdisciplinary research really proves its worth.
What she proposes is not the kind of interdisciplinary work that we’re accustomed to, where the results of different research efforts are shared or where studies are designed with more than one kind of question in mind. The kind of interdisciplinary work that Chaitin is describing involves adopting a new conceptual framework, borrowing the very way that understanding is defined within a particular discipline, as well as the way it is explored and the way it is expressed. The results, as she says, are the “migrations of entire conceptual neighborhoods that create a new vocabulary.”
Certainly the proliferation of scientific ideas about our world, our lives, our history and the history of our universe has happened, in no small way, by figuring out the right question to ask or, more to the point, by determining the kind of question to which an answer can be found. Once asked, the answers begin to build the scaffolding to which our understanding is attached. This conceptual structure inevitably impacts future questions as well as our beliefs about what the answers imply. The concepts and methods of physics and astronomy, for example, have molded, in many ways, how we see the material of our world. Biology and evolution have had a strong influence on how we view life. Virginia’s paper is making a point about the fertility of interdisciplinarity in research as well as the effect it can have on these more general perspectives.
The precision of the paper rests in its analysis of the effects of metabiology. This new field of study, Chaitin argues,
…is a paradigm-shifting interdisciplinary research area that successfully combines methods, techniques and ideas from the following fields: algorithmic information theory, computability theory, metamathematics and evolutionary biology.
Integrating the elements of these disciplines in metabiology rests on a fundamental analogy between DNA and software, where DNA is the universal programming language.
We emphasize that metabiological evolution is not about adaptation or passing one’s genes to the next generation but about coming up with new genes that contain new algorithmic information content.
Creativity, rather than adaptation and survival, is proposed as a criterion for evolution. Metabiology, then, is not concerned with explaining the response of biological systems to selective pressure. Instead, by virtue of its conceptual structure, it proposes to explain biological creativity. The mathematics involved is new, and relies on the Turing oracle to prevent this algorithmic evolutionary process from getting stuck. It is a non-reductionist mathematics that can express an unending, unbounded evolutionary process. I wrote about another recent exploration of the Turing oracle late last year.
In metabiology, DNA becomes software, genes become subroutines, energy becomes information. But these are not instructive analogies, they are working analogies that produce a new perspective. They produce, as Chaitin says, “a metabiological conception of life as an intrinsically creative process.” This challenges the Darwinian view that survival governs evolution.
In the metabiological evolutionary process there is no clear analogy for reproduction; instead the life-form represented by the mutating algorithmic organism is constantly searching for and incorporating new information.
The value of this alternative will likely be debated. I would argue that the mere presence of an alternative is valuable. But Chaitin cites an instance where algorithmic mutations contributed an insight to the new field of theoretical biology. And this is the hope for this kind of change of perspective – that it will bring new science.
While it may be obvious, it’s important to note that it is mathematics that has illuminated this possibility, and the equivalence suggested between biological creativity and mathematical creativity can say quite a lot more about mathematics. As Chaitin argues, taken altogether, these ideas allow for a metabiological kind of life for the human body.
This gives rise to a brand-new and more generous conceptual framework for the human being, which now evolves around the idea of a life-form motivated by a non-mechanical, lawless, subjective creativity instead of a life-form driven by a predetermined “winner or loser” survival dichotomy.
Again, the thoughtful (creativity as we usually understand it) and the physical (biology as we usually understand it) are united in a novel way.
Today, I involved myself in a debate that hasn’t gotten very loud yet and, perhaps for that reason, I felt like I was going around in circles a bit. The questions I began trying to answer were sparked by a Mind Hacks post entitled Radical embodied cognition: an interview with Andrew Wilson. Wilson’s ideas challenge a perspective that is fairly widely accepted. As Tom Stafford explains:
The computational approach is the orthodoxy in psychological science. We try and understand the mind using the metaphors of information processing and the storage and retrieval of representations. These ideas are so common that it is easy to forget that there is any alternative. Andrew Wilson is on a mission to remind us that there is an alternative – a radical, non-representational, non-information processing take on what cognition is.
Last June I participated in a symposium at the Cognitive Science Society’s annual conference. I wrote later that I was struck by the extent to which computational modeling, designed to mirror cognitive processes, governs investigative strategies. Modeling possibilities likely impact the kinds of questions that cognitive scientists ask. As I listened to some of the talks, I considered that these modeling strategies could begin to create conceptual or theoretical grooves from which it can become difficult to stray. And so this Mind Hacks post got my attention.
It doesn’t look like Wilson’s radical approach is just using a different language. To this possibility he responded:
If the radical hypothesis is right, then a lot of cognitive theories will be wrong. Those theories all assume that information comes into the brain, is processed by representations and then output as behaviour. If we successfully replace representations with information, all those theories will be telling the wrong story. ‘Interacting with information’ is a completely different job description for the brain than ‘building models of the world’. This is another reason why it’s ‘radical’.
While much of the work in cognitive science is built around the idea that thinking is best understood in terms of representational structures in the mind (or brain) on which computational processes operate, the brain is always interacting with information. It is the nature of this interaction that researchers try to understand. And so I looked at a couple of posts from Wilson and his colleague Sabrina Golonka on a site called Notes from Two Scientific Psychologists. Mathematics figured prominently in both of the posts that I looked at, which raised new questions for me.
The first one was recommended by Wilson and written by Golonka. It had the title, What else could it be? The case of the centrifugal governor. The centrifugal governor is used to frame a discussion of a dynamical systems approach to cognition, contrasted with the approach that relies on the brain’s creation of representations of the world on which it acts. An 18th-century engineering problem illustrates the point. The problem involved maintaining the constant rotation of a wheel driven by the pumping action of pistons. The operation of a valve allows one to adjust the pressure of the steam generated by the engine, so that the speed of the wheel can be managed. Golonka begins by describing the algorithmic solution to keeping the rotation constant: the state of the system is repeatedly measured, and a rule (an algorithm) adjusts the valve in response to the measurement. There are two stages to this solution – the measurement and the adjustment – even when the time lag between the measurement and the correction is minimized. The dynamical solution, discovered and implemented in the 18th century, is to have the valve opening changed by the action of an object within the system that varies in response to some aspect of the system itself. The problem then reduces to connecting that object to the valve with the proper relation, i.e., the one that produces the desired effect.
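The two-stage algorithmic solution is easy to sketch in code. Everything below is invented for illustration – the toy engine dynamics, the target speed, the base valve opening and the gain – but the loop shows the two stages plainly: measure the state, then let a rule adjust the valve.

```python
def run_engine(steps=200, target=100.0, gain=0.002):
    """Toy measure-then-adjust governor (illustrative, not a physical model)."""
    speed = 0.0
    for _ in range(steps):
        # Stage 1: measure the state of the system.
        error = target - speed
        # Stage 2: a rule (the algorithm) sets the valve from the measurement.
        valve = 0.5 + gain * error       # 0.5 is the assumed equilibrium opening
        # Toy dynamics: the wheel's speed relaxes toward a value set by the valve.
        speed += 0.1 * (200.0 * valve - speed)
    return speed

print(round(run_engine(), 3))   # → 100.0
```

The dynamical solution has no counterpart to these two stages: in Watt’s governor the spinning flyballs are the measurement and the adjustment at once.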
If we imagine ourselves trying to come up with a computational model for how this system works, without being able to see how it works, this illustration does highlight the way a computational or algorithmic model might actually obscure the underlying action. But the algorithmic model would still capture something about the action, namely the need for and the direction of the adjustment. It just wouldn’t account for how the adjustment is actually made.
Another post on the same site is about a fairly interesting area-measuring device built in 1854, called a planimeter. The subject of this post is taken from a 1977 paper by Sverker Runeson with the title On the possibility of “smart” perceptual mechanisms.
The planimeter can measure the area of any flat 2-dimensional shape without using lengths and widths or integrals.
This is a device that measures area directly, rather than measuring the ‘simpler’ physical unit length and then performing the necessary computation. Runeson uses this device as an example of a ‘smart’ mechanism, and proposes that perception might entail such mechanisms.
The device traces the perimeter of the shape. The area of the shape is proportional to the number of turns through which the measuring wheel rotates as it traces the path. It is the movement or lack of movement in the wheel that is recorded. The result can be justified mathematically, but the measurement is coming directly from the wheel. It is, actually, an opportunity to see the relationship between an action and the formal analytic structures of mathematics.
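The mathematical justification is Green’s theorem, which converts the area integral into a line integral around the boundary: A = ½ ∮ (x dy − y dx). Below is a discrete Python sketch of that idea – my own stand-in for the wheel’s mechanical accumulation, not a model of the actual device – tracing a polygonal path and accumulating the line integral one step at a time.

```python
def traced_area(path):
    """Area enclosed by a closed polygonal trace, from the boundary alone.
    Discrete version of A = 1/2 * ∮ (x dy - y dx)."""
    twice_area = 0.0
    # Walk the boundary, closing the loop back to the starting point.
    for (x0, y0), (x1, y1) in zip(path, path[1:] + path[:1]):
        twice_area += x0 * y1 - x1 * y0   # one step of the trace
    return abs(twice_area) / 2.0

# Trace a 3 x 2 rectangle; no length-times-width computation is ever done,
# only an accumulation along the boundary path.
rect = [(0, 0), (3, 0), (3, 2), (0, 2)]
print(traced_area(rect))   # → 6.0
```

As with the planimeter, the area comes directly out of the trace; the justification is mathematical, but the measurement is not a two-step measure-then-compute procedure.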
There are a few words that stand out in this framing of the debate – representation, action, and information. What one means by representation and what one means by information is fairly context driven. We generally understand representation as a particularly human phenomenon. We find it in art, language and mathematics and not so much in the behavior of other animals (although bowerbirds come to mind as a possible counterexample). We think in terms of representations – words, maps, models, diagrams, etc. Within cognitive science, however, the meaning of a mental representation is not precisely defined. I see no reason why representation can’t be understood as patterned action on the cellular level, as can information. The algorithmic solution for the centrifugal governor is a very specific programming idea, not sufficient to discount computer-like action. Brains are not computers, but computational methods, software, programming, etc., inevitably reflect something about nature and the brain. Further, the ‘smart perceptual mechanism’ gets its meaning from its mathematical character. I would argue that mathematics could help define what one means by representation if we reassociate mathematics with action (perhaps as Humberto Maturana did with language). The power of mathematics comes from what we can see in the weave of relationships among its precise representations. The history of my blogs makes clear that I would argue that underlying these representations is perception and action. Modeling strategies, at their best, are a way to get at this action within the limitations of our language.
An article on physicsworld.com reported the discovery of variable stars whose periodic dimming and brightening frequencies have a ratio at or very near the golden ratio.
The objects were found in data from the Kepler space telescope by looking for stars with two characteristic pulsation frequencies that have a “golden ratio” of approximately 1.62. The discovery could shed light on the physics that drives variable stars and also help astronomers come up with better classification systems for these objects.
The golden ratio can be found by dividing a line into two parts in such a way that the longer part divided by the shorter part is equal to the whole length divided by the longer part. More precisely, if the longer segment has length a and the shorter segment has length b, then:
a/b = (a + b)/a = φ = 1.6180339887…
The well-known Fibonacci sequence is nicely tied to this ratio. In the Fibonacci sequence, each term is the sum of the two previous terms. The sequence begins like this:
1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, …
The ratio of 3 to 5 is about 1.667, and the ratio of 8 to 13 is 1.625. Further into the sequence, the ratio of 144 to 233 is about 1.618. These ratios get closer and closer to the golden ratio itself, an irrational number whose decimal expansion is infinite and non-repeating (like the decimal expansion of π).
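The convergence is easy to watch numerically. A few lines of Python (my own illustration):

```python
# Successive ratios of consecutive Fibonacci terms approach the golden ratio.
phi = (1 + 5 ** 0.5) / 2      # 1.6180339887...

a, b = 1, 1
ratios = []
for _ in range(12):
    a, b = b, a + b           # each term is the sum of the previous two
    ratios.append(b / a)      # ratio of a term to the one before it

print(ratios[:3])                      # → [2.0, 1.5, 1.6666666666666667]
print(abs(ratios[-1] - phi) < 1e-4)    # → True
```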
A number of natural occurrences of this ratio are often cited. The petal counts of certain flowers follow the Fibonacci sequence, as does the progression of tree branches. A golden spiral gets wider (or further from its origin) by a factor of φ for every quarter turn it makes. Snail shells and nautilus shells are often said to follow this pattern, as does the cochlea of the inner ear. It can also be seen in the shape of certain spiders’ webs.
Scientificamerican.com also reported on this discovery. And they balanced their story with this mathematician’s judgment:
“Many claims about natural phenomena and the golden ratio are exaggerated,” says mathematician and computer scientist George Markowsky of the University of Maine, Orono. “I refuse to accept anything off by 2 percent or more as evidence of the golden ratio. After all, around any real number there are infinitely many other real numbers. People don’t seem to write papers about the mystic properties of .6 (which is very close to .618….).”
But whether the ratio is or merely approximates the golden ratio, its presence does seem to signify that this variable star has some distinguishing characteristics. It is a type of periodic variable star called an ‘RR Lyrae’ variable. The Physics World piece tells us that the presence of the golden ratio in this dynamical system could indicate that the star behaves as a ‘strange non-chaotic attractor’ – that is, that the system is fractal but not chaotic. The golden ratio is an irrational number that is often understood in geometric or growth terms. I find it interesting that while this special ratio is being perceived here in terms of timed brightness, or some measure of duration, the significance of its presence may still be the way it provides information about the star’s structure.
To study the dynamics of the star, Learned and Hippke joined forces with physicists at the University of Hawaii and the College of Wooster, including John Lindner. To verify that the star is indeed a strange non-chaotic attractor, the team did a “spectral scaling” analysis. First, the researchers did a Fourier transform of a time sequence of the brightness… creating a power spectrum with peaks at a large number of frequencies. Then, they counted the number of peaks above a threshold value, repeating the process over a wide range of threshold values. Finally, they plotted the number of peaks above the threshold as a function of the threshold.
They found that the number of peaks was pretty well constant until the threshold reached an inflection point (a point on a curve at which the sign of the curvature changes). When this occurred, the number dropped rapidly and obeyed a power law. According to the team, this behaviour is indicative of strange non-chaotic dynamics. Interestingly, Lindner points out that a similar analysis of the variability of the coastline of Norway yields the same power-law exponent of –1.5.
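The peak-counting procedure is simple enough to sketch. The code below runs it on a synthetic two-frequency signal whose frequencies are in the golden ratio – a schematic stand-in for the Kepler brightness data, not the team’s actual analysis:

```python
import numpy as np

# Synthetic stand-in for the star's brightness: two frequencies
# whose ratio is the golden ratio.
phi = (1 + 5 ** 0.5) / 2
t = np.linspace(0, 200, 8192)
brightness = np.sin(2 * np.pi * t) + 0.7 * np.sin(2 * np.pi * phi * t)

# Step 1: Fourier transform -> power spectrum with many peaks.
power = np.abs(np.fft.rfft(brightness)) ** 2

# Step 2: count local maxima of the spectrum above a threshold.
def peaks_above(power, threshold):
    p = power
    is_peak = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]) & (p[1:-1] > threshold)
    return int(is_peak.sum())

# Step 3: repeat over a wide range of thresholds. For a strange
# non-chaotic attractor, the count follows a power law past the
# inflection point; here we just record the curve.
thresholds = np.logspace(-6, 0, 25) * power.max()
counts = [peaks_above(power, th) for th in thresholds]
print(counts[0], counts[-1])  # many peaks at low threshold, none at the top
```

Fitting a power law to the falling part of that curve (and reading off its exponent) is the step that let the team compare the star’s spectrum with, say, the coastline of Norway.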
A post from John Horgan with the title Did Edgar Allan Poe Foresee Modern Physics and Cosmology? quickly got my attention. Horgan writes in response to an essay by Marilynne Robinson in the February 5 New York Review of Books where Poe’s book-length prose poem Eureka was brought to his attention. Eureka was written by Poe shortly before his death in 1849. Horgan tells us:
According to Robinson, Eureka has always been “an object of ridicule,” too odd even for devotees of Poe, the emperor of odd. But Robinson contends that Eureka is actually “full of intuitive insight”–and anticipates ideas remarkably similar to those of modern physics and cosmology.
Eureka, she elaborates, “describes the origins of the universe in a single particle, from which ‘radiated’ the atoms of which all matter is made. Minute dissimilarities of size and distribution among these atoms meant that the effects of gravity caused them to accumulate as matter, forming the physical universe. This by itself would be a startling anticipation of modern cosmology, if Poe had not also drawn striking conclusions from it, for example that space and ‘duration’ are one thing, that there might be stars that emit no light, that there is a repulsive force that in some degree counteracts the force of gravity, that there could be any number of universes with different laws simultaneous with ours, that our universe might collapse to its original state and another universe erupt from the particle it would have become, that our present universe may be one in a series.
Horgan acknowledges the resemblance, but challenges the soundness of Poe’s thoughts with an excerpt from Poe’s theory of creation.
“Let us now endeavor to conceive what Matter must be, when, or if, in its absolute extreme of Simplicity. Here the Reason flies at once to Imparticularity—to a particle—to one particle—a particle of one kind—of one character—of one nature—of one size—of one form—a particle, therefore, ‘without form and void’—a particle positively a particle at all points—a particle absolutely unique, individual, undivided, and not indivisible only because He who created it, by dint of his Will, can by an infinitely less energetic exercise of the same Will, as a matter of course, divide it. Oneness, then, is all that I predicate of the originally created Matter; but I propose to show that this Oneness is a principle abundantly sufficient to account for the constitution, the existing phenomena and the plainly inevitable annihilation of at least the material Universe.”
But this just made me more interested because that particle “of one kind,” “of one character,” “of one nature,” “positively a particle at all points…individual, undivided,” reminded me of Leibniz’s monad (1714). Britannica’s philosophy pages summarize Leibniz’s idea nicely:
Since we experience the actual world as full of physical objects, Leibniz provided a detailed account of the nature of bodies. As Descartes had correctly noted, the essence of matter is that it is spatially extended. But since every extended thing, no matter how small, is in principle divisible into even smaller parts, it is apparent that all material objects are compound beings made up of simple elements. But from this Leibniz concluded that the ultimate constituents of the world must be simple, indivisible, and therefore unextended particles—dimensionless mathematical points. So the entire world of extended matter is in reality constructed from simple immaterial substances, monads, or entelechies.
It is true, as Horgan points out, that Eureka “does indeed evoke some modern scientific ideas, but in the same blurry way that Christian or Eastern theologies do.” But no attention is being given to the fact that within that blurry resemblance is the surprising presence of a quasi-mathematical conceptualization of things:
“The assumption of absolute Unity in the primordial Particle includes that of infinite divisibility. Let us conceive the Particle, then, to be only not totally exhausted by diffusion into Space. From the one Particle, as a center, let us suppose to be irradiated spherically—in all directions—to immeasurable but still to definite distances in the previously vacant space—a certain inexpressibly great yet limited number of unimaginably yet not infinitely minute atoms.”
This is a kind of mathematical thinking happening outside the disciplines of mathematics or science. It’s not precise. It’s not designed to do what mathematics does. But the words signify mathematical things. Why? It’s not clear where the inspiration for this impassioned/poetic/intuitional expression lies, and that’s exactly why it’s interesting. This is not the only example of a kind of literary mathematics. Another example that comes to mind was discussed in a piece from Davide Castelvecchi in 2012 – Dante’s Universe and Ours.
Dante’s universe, then, can be interpreted as an extreme case of non-Euclidean geometry, one in which concentric spheres don’t just grow at a different pace than their diameters, but at some point they actually stop growing altogether and start shrinking instead. That’s crazy, you say. And yet, modern cosmology tells us that that’s the structure of the cosmos we actually see in our telescopes…
Of course, Dante lived five centuries before any mathematicians ever dreamed of notions of curved geometries. We may never know if his strange spheres were a mathematical premonition or esoteric symbolism or simply a colorful literary device.
I suspect we won’t fully appreciate what’s happening within these literary mathematical ideas without a fuller appreciation of what mathematics is.
I’ve spent a good deal of time exploring how mathematics can be seen in how the body lives – the mental magnitudes that are our experience of time and space, the presence of arithmetic reasoning in pre-verbal humans and nonverbal animals, cells in the brain that abstract visual attributes (like verticality), the algebraic forms in language, and probabilistic learning, to name just a few.
But I believe that the cognitive structures on which mathematics is built (and which mathematics reflects) are deep, and interwoven across the whole range of human experience. Perhaps our now highly specialized domains of research are inhibiting our ability to see the depth of these structures. I thought this, again, when a particular study on the neural architecture underlying particular language abilities was brought to my attention. The study, published in the Journal of Cognitive Neuroscience, investigated the presence of this architecture in the newborn brain.
Breaking the linguistic code requires the extraction of at least two types of information from the speech signal: the relations between linguistic units and their sequential position. Further, these different types of information need to be integrated into a coherent representation of language structure. The brain networks responsible for these abilities are well-known in adults, but not in young infants. Our results show that the neural architecture underlying these abilities is operational at birth.
The focus of the study was on the infants’ ability to discriminate patterns in spoken syllables, specifically ABB patterns like “mubaba” from ABC patterns like “mubage.” The experiments were also designed to determine whether the infants could distinguish ABB patterns from AAB patterns. The former is about identifying the repetition, while the latter is about identifying the position of the repetition. Changes in the concentration of oxygenated and deoxygenated hemoglobin were used as indicators of neural activity. Results suggest that the newborn brain can distinguish both ABB sequences and AAB sequences from a sequence without repetition (an ABC sequence), and neural activity was most pronounced in the temporal areas of the left hemisphere. Findings also suggested that newborns are able to distinguish the initial vs. final position of the repetition, with this response being observed more in frontal regions.
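The distinction the infants were tested on is simple enough to state as code. A sketch (only “mubaba” and “mubage” come from the study’s description; the other syllables are my illustration):

```python
def pattern(syllables):
    """Classify a three-syllable sequence by its repetition structure."""
    a, b, c = syllables
    if a == b == c:
        return "AAA"
    if b == c:
        return "ABB"   # repetition in final position
    if a == b:
        return "AAB"   # repetition in initial position
    if a == c:
        return "ABA"
    return "ABC"       # no repetition

print(pattern(["mu", "ba", "ba"]))  # ABB, as in "mubaba"
print(pattern(["mu", "mu", "ba"]))  # AAB: same repetition, different position
print(pattern(["mu", "ba", "ge"]))  # ABC, as in "mubage"
```

Notice that distinguishing ABB from ABC requires only detecting that a repetition occurred, while distinguishing ABB from AAB additionally requires knowing where the repetition falls – precisely the two abilities the study separates.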
All of this seems to say that newborns are sensitive to sequential position in speech and can integrate this information with other patterns. This identification of pattern to meaning, or the meaningfulness of position, certainly resembles something about mathematics, where the meaningfulness of pattern and position is everywhere.
The connection between pattern, language and algebra is more directly addressed in a more recent paper: Phonological reduplication in sign language (Frontiers in Psychology 6/2014). Here the role of algebraic rules in American Sign Language is considered, where words are formed by shape and movement.
This is how the paper frames what we are to understand by a rule:
The plural rule generates plural forms by copying the singular noun stem (Nstem) and appending the suffix s to its end (Nstem + s). This simple description entails several critical assumptions concerning mental architecture…First, it assumes that the mind encodes abstract categories (e.g., noun stem, Nstem), and such categories are distinct from their instances (e.g., dog, letter). Second, mental categories are potentially open-ended—they include not only familiar instances (e.g., the familiar nouns dog, cat) but also novel ones. Third, within such category, all instances—familiar or novel—are equal members of this class. Thus, mental categories form equivalence classes. Fourth, mental processes manipulate such abstract categories—in the present case, it is assumed that the plural rule copies the Nstem category. Doing so requires that rules operate on algebraic variables, akin to variables from algebraic numeric operations (e.g., X→X+1). Finally, because rule description appeals only to this abstract category, the rule will apply equally to any of its members, irrespective of whether any given member is familiar or novel, and regardless of its similarity to existing familiar items.
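The algebraic point is easy to make concrete. In the sketch below the rule mentions only the variable, never any stored instance, so it applies to a novel stem as readily as to a familiar one (the nonce words are my illustration, not the paper’s):

```python
# The plural rule operates on a variable (Nstem), not on stored instances:
# Nstem -> Nstem + s. Because only the variable appears in the rule,
# it applies to any member of the category, familiar or novel.
def pluralize(nstem: str) -> str:
    return nstem + "s"

familiar = ["dog", "cat", "letter"]
novel = ["wug", "blick"]   # nonce words the rule has never encountered
for stem in familiar + novel:
    print(stem, "->", pluralize(stem))
```

An associationist account, by contrast, would have to generalize from stored pairs (dog–dogs, cat–cats) to similar-sounding new words; the rule-based account needs no similarity at all.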
The hypothesis that the language system encodes algebraic rules is supported by a lot of data, but the paper does include a discussion of the alternative associationist architectures, or connectionist networks, where generalizations don’t depend on abstract classes but rather on specific instances that become associated (like an association between rog-rogs and dog-dogs). The authors argue, however, that algebraic rules provide the best computational explanation for experimental observations of both speakers and signers.
We also note that our evidence for rules does not negate the possibility that some aspects of linguistic knowledge are associative, or even iconic (Ormel et al., 2009; Thompson et al., 2009, 2010, 2012). While these alternative representations and computational mechanisms might be ultimately necessary to offer a full account of the language system, our present results suggest that they are not sufficient. At its core, signers’ phonological knowledge includes productive algebraic rules, akin to the ones previously documented in spoken language phonology.
All of this suggests the presence of deeply rooted algebraic tendencies that we wouldn’t find by looking for hardwired or primitive mathematical abilities. Yet it seems that abstraction and equivalence, in some algebraic sense, just happens as the body lives. The infant is ready to recognize and integrate patterns that will enable linguistic abilities and the signer seems to be operating on equivalence classes with gestures. This should encourage us to look at the formalization of algebraic ideas, and our subsequent investigation of them in mathematics, in a new way. It’s as if we’re turning ourselves inside-out and successfully harnessing the productivity of abstraction and equivalence. While these are not the only mathematical things the body does, the fairly specific focus of these studies suggests that abstraction and generalization as actions run deep and broad in our make-up.
Each year, Edge.org asks contributors to respond to their annual question. In 2014, the question was: What scientific idea is ready for retirement? There were 174 interesting responses, but one that got my attention was written by Scott Sampson (author, Dinosaur Odyssey: Fossil Threads in the Web of Life). The idea that Sampson would like to see abandoned is our tendency to think of nature as a collection of objects. It is these objects that we believe we measure, test and study. Sampson identifies this perspective with the “centuries-old trend toward reductionism.”
Reductionist tendencies have been challenged on many fronts, often with an appeal to the notion of emergence – emergent structures, phenomena, or behavior. But our reliance on objectivity is fundamental to our appreciation of science and the task of refining it to reflect the value of many new insights is a formidable one. Yet, I would argue, a re-evaluation of scientific habits of mind is both necessary and inevitable. Sampson makes the point:
An alternative worldview is called for, one that reanimates the living world. This mindshift, in turn, will require no less than the subjectification of nature. Of course, the notion of nature-as-subjects is not new. Indigenous peoples around the globe tend to view themselves as embedded in animate landscapes replete with relatives; we have much to learn from this ancient wisdom.
Ancient wisdoms are difficult to translate into scientific perspectives. But a number of modern ideas share something with ancient world views nonetheless. These perspectives often demonstrate an emphasis on relationship over substance. And in no small way, they have been aided by the growth of mathematical ideas. The many possibilities for structure that mathematical relations provide have now been effectively employed in biology and cognitive science, as well as physics. Sampson ties an investigation of pattern and form to Leonardo da Vinci whose name always calls to mind the passionate commingling of art and science. And Sampson argues:
The science of patterns has seen a recent resurgence, with abundant attention directed toward such fields as ecology and complex adaptive systems. Yet we’ve only scratched the surface, and much more integrative work remains to be done that could help us understand relationships.
Perhaps even more directly connected to the reanimation or, as Sampson puts it, the subjectification of nature, is work recently reported on the lives of plants. An article in New Scientist (December 3, 2014) provides some of the history of this work as well as current findings.
… in 1900, Indian biophysicist Jagdish Chandra Bose began a series of experiments that laid the groundwork for what some today call “plant neurobiology”. He argued that plants actively explore their environments, and are capable of learning and modifying their behaviour to suit their purposes. Key to all this, he said, was a plant nervous system. Located primarily in the phloem, the vascular tissue used to transport nutrients, Bose believed this allowed information to travel around the organism via electrical signals.
Bose was also well ahead of his time. It wasn’t until 1992 that his idea of widespread electrical signaling in plants received strong support when researchers discovered that wounding a tomato plant results in a plant-wide production of certain proteins – and the speed of the response could only be due to electrical signals and not chemical signals traveling via the phloem as had been assumed. The door to the study of plant behaviour was opened.
The article quotes Daniel Chamovitz (What A Plant Knows):
“Plants are acutely aware of their environment,” says Chamovitz. “They are aware of the direction of the light and quality of the light. They communicate with each other with chemicals, whether we want to call this taste, or smell, or pheromones. Plants ‘know’ when they are being touched, or when they are being shook by the wind. They integrate all of this information precisely. And they do all of this integration in the absence of a neural system.”
In June 2013, I wrote about researchers who claimed that plants do arithmetic. All of this work not only tells us something about plants, but it broadens our sense for what it means ‘to know,’ what knowing is, and how it happens.
Returning to Sampson, he made this point early in his essay:
To subjectify is to interiorize, such that the exterior world interpenetrates our interior world. Whereas the relationships we share with subjects often tap into our hearts, objects are dead to our emotions. Finding ourselves in relationship, the boundaries of self can become permeable and blurred. Many of us have experienced such transcendent feelings during interactions with nonhuman nature, from pets to forests.
“Interiorizing” is an interesting idea. And I think mathematics may have a role to play in understanding what this could mean on a large scale. Mathematics grows with pure introspection yet seems to be found everywhere around us. It may very well reflect an aspect of nature that is both internal and external in our experience, blurring the boundaries of self. Probability models are used in physics as well as cognitive science, complex systems theories have been applied in biology, economics and technology. In finding sameness among things that appear to be distinct, mathematics discourages separation and, as I see it, objectification.
Flipping through some New Scientist issues from this past year, I was reminded of an article in their July 19 issue that brought together a discussion of the brain and mathematics with particular emphasis on the effectiveness of employing the sometimes counter-intuitive notion of the infinity of the real numbers. The content of the article, Know it all, by Michael Brooks, explores the viability of Alan Turing’s idea of the “oracle” – a computer that could decide undecidable problems. It highlights the work of Emmett Redd and Steven Younger of Missouri State University who think that they see a path to the development of this “super-Turing” computer that would also bring new insight into how the brain works.
The limitations on even the most sophisticated computing tools are essentially a consequence of the limited power of logic. Mathematician Kurt Gödel’s incompleteness theorems show that any consistent system of logical axioms rich enough to express arithmetic will always contain unprovable statements. Turing made an analogous observation about a universal computer built on logic alone: such a computer will inevitably come up against ‘undecidable’ problems, regardless of the amount of processor power available. But Turing did imagine something else.
…An oracle as Turing envisaged it was essentially a black box whose unspecified contents would be able to solve undecidable problems. An “O-machine,” he proposed, would exploit whatever was in this black box to go beyond the bounds of conventional human logic – and so surpass the abilities of every computer ever built.
Brooks then tells us about a computer scientist working on neural networks – circuits designed to mimic the human brain. Hava Siegelmann wanted to prove the limits of neural networks, despite their great flexibility.
In a neural net, many simple processors are wired together so that the output of one can act as the input of others. These inputs are weighted to have more or less influence, and the idea is that the network “talks” to itself, using its outputs to alter its input weightings until it is performing tasks optimally – in effect, learning as it goes along just as the brain does.
Siegelmann eventually observed an unexpected possibility. She showed that, in theory, a network weighted with the infinite, non-repeating digits of an irrational number such as pi could transgress the limitations of a universal computer built on logic alone. And this relies, it seems, on the randomness supplied by the irrational number.
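A toy illustration of the idea – not Siegelmann’s actual construction – is a neuron whose weight is a high-precision truncation of an irrational number. Her proof needs the full infinite expansion; any finite machine, and any finite-precision float, can only approximate it:

```python
from decimal import Decimal, getcontext

# Weight drawn from an irrational number, truncated to 50 digits.
# (Siegelmann's result requires the exact, infinite expansion.)
getcontext().prec = 50
w = Decimal(2).sqrt()   # 1.4142135623730950488...

def neuron(inputs, weight=w):
    """Weighted sum followed by a sigmoid, computed on Decimals."""
    s = sum(Decimal(str(x)) * weight for x in inputs)
    return 1 / (1 + (-s).exp())

print(w)
print(neuron([0.5, -0.25, 1.0]))
```

Raising the precision pushes the approximation further down the infinite expansion, but no finite setting ever reaches it – which is exactly the gap Aaronson’s objection, quoted below, turns on.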
While Siegelmann published her proof in 1995, it was not enthusiastically welcomed by fellow computer scientists.
…she soon lost interest too. “I believed it was mathematics only, and I wanted to do something practical,” she says. “I turned down giving any more talks on super-Turing computation.”
Ah, “mathematics only…,” she says.
Redd and Younger, aware of Siegelmann’s work, saw their own work headed in the same direction.
… In 2010, they were building neural networks using analogue inputs that, unlike the conventional digital code of 0 (current off) and 1 (current on), can take a whole range of values between fully off and fully on. There was more than a whiff of Siegelmann’s endless irrational numbers in there. “There is an infinite number of numbers between 0 and 1,” says Redd.
This infinity of numbers between 0 and 1 was one of the first things to intrigue me about mathematics. What are we looking at when we look at this infinity of numbers, whose size is the same as the infinity of the whole line?
In 2011 they approached Siegelmann, by then director of the Biologically Inspired Neural & Dynamical Systems lab at the University of Massachusetts in Amherst, to see if she might be interested in a collaboration. She said yes. As it happened, she had recently started thinking about the problem again, and was beginning to see how irrational-number weightings weren’t the only game in town. Anything that introduced a similar element of randomness or unpredictability might do the trick, too. “Having irrational numbers is only one way to get super-Turing power,” she says.
The route the trio chose was chaos. A chaotic system is one whose response is very sensitive to small changes in its initial conditions. Wire up an analogue neural net in the right way, and tiny gradations in its outputs can be used to create bigger changes at the inputs, which in turn feed back to cause bigger or smaller changes, and so on. In effect, the system becomes driven by an unpredictable, infinitely variable noise.
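Sensitivity to initial conditions is the defining feature here, and it is easy to demonstrate with the logistic map – a standard chaotic system chosen purely for illustration; the article does not specify what Redd and Younger’s circuit computes:

```python
# Two trajectories of the logistic map at r = 4 (a chaotic regime),
# started a mere 1e-10 apart, separate to a macroscopic distance
# within a few dozen steps.
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-10
max_sep = 0.0
for step in range(60):
    a, b = logistic(a), logistic(b)
    max_sep = max(max_sep, abs(a - b))
print(max_sep)  # far larger than the 1e-10 initial difference
```

The separation roughly doubles each step, so the 1e-10 difference becomes order one in about 35 iterations – the “infinitely variable noise” that a feedback-wired analogue net could, in principle, exploit.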
The idea has met with some skepticism. Scott Aaronson, professor of electrical engineering and computer science at MIT, argues that models involving infinities inevitably run into trouble:
People ignore the fact that the physical system cannot implement the idea with perfect precision.
Jérémie Cabessa of the University of Lausanne, Switzerland, co-authored a paper with Siegelmann, published in the International Journal of Neural Systems in September 2014, which supports the idea that “the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.” In Brooks’s article, however, there is skepticism about whether such a machine is buildable.
Again, it’s not that the maths doesn’t work – it is just a moot point whether true randomness is something we can harness, or whether it even exists.
Brooks tells us that Turing often speculated about the connection between intrinsic randomness and creative intelligence.
This is not the first pairing of randomness and creativity that I’ve seen. Gregory Chaitin’s work relies heavily on randomness. Metabiology, the field he has introduced, investigates randomly evolving computer software as it relates to “randomly evolving natural software” or DNA. And here, mathematical creativity is equated with biological creativity. And Chaitin has remarked (probably more than once) that he doesn’t believe that continuity really works for physics theories, a perspective echoed by Aaronson. Chaitin leans instead toward a discrete, digital, worldview.
But I find it important to take note here of the fact that the infinities of mathematics, so often problematic within physical theories, have nonetheless very effectively aided our imagination. The continuity of the real numbers is largely characterized by the irrational numbers, and it took years of devoted effort for it to be firmly established in mathematics. In this discussion, the irrational number also opened the door to the effect of randomness in neural networks. Mathematical notions of continuity have been the mind’s way of bridging arithmetic and geometric ideas. These bridges allow conceptual structures to develop. The roots of these ideas are in our experiences of things like space, time and object, but they somehow give the intuition more room to grow. Just a few of the fruits of their development have brought the inaccessible subatomic and intergalactic worlds within reach. Even if the world turns out not to mirror this continuity, the work of Siegelmann, Redd and Younger suggests that the mind might.