Categories

Journals

Poe’s cosmology

A post by John Horgan titled Did Edgar Allan Poe Foresee Modern Physics and Cosmology? quickly got my attention. Horgan writes in response to an essay by Marilynne Robinson in the February 5 New York Review of Books, which brought Poe’s book-length prose poem Eureka to his attention. Poe wrote Eureka shortly before his death in 1849. Horgan tells us:

According to Robinson, Eureka has always been “an object of ridicule,” too odd even for devotees of Poe, the emperor of odd. But Robinson contends that Eureka is actually “full of intuitive insight”–and anticipates ideas remarkably similar to those of modern physics and cosmology.

Eureka, she elaborates, “describes the origins of the universe in a single particle, from which ‘radiated’ the atoms of which all matter is made. Minute dissimilarities of size and distribution among these atoms meant that the effects of gravity caused them to accumulate as matter, forming the physical universe. This by itself would be a startling anticipation of modern cosmology, if Poe had not also drawn striking conclusions from it, for example that space and ‘duration’ are one thing, that there might be stars that emit no light, that there is a repulsive force that in some degree counteracts the force of gravity, that there could be any number of universes with different laws simultaneous with ours, that our universe might collapse to its original state and another universe erupt from the particle it would have become, that our present universe may be one in a series.”

Horgan acknowledges the resemblance, but challenges the soundness of Poe’s thoughts with an excerpt from Poe’s theory of creation.

“Let us now endeavor to conceive what Matter must be, when, or if, in its absolute extreme of Simplicity. Here the Reason flies at once to Imparticularity—to a particle—to one particle—a particle of one kind—of one character—of one nature—of one size—of one form—a particle, therefore, ‘without form and void’—a particle positively a particle at all points—a particle absolutely unique, individual, undivided, and not indivisible only because He who created it, by dint of his Will, can by an infinitely less energetic exercise of the same Will, as a matter of course, divide it. Oneness, then, is all that I predicate of the originally created Matter; but I propose to show that this Oneness is a principle abundantly sufficient to account for the constitution, the existing phenomena and the plainly inevitable annihilation of at least the material Universe.”

But this just made me more interested, because that particle “of one kind,” “of one character,” “of one nature,” “positively a particle at all points…individual, undivided, and not indivisible,” reminded me of Leibniz’s monad (1714). Britannica’s philosophy pages summarize Leibniz’s idea nicely:

Since we experience the actual world as full of physical objects, Leibniz provided a detailed account of the nature of bodies. As Descartes had correctly noted, the essence of matter is that it is spatially extended. But since every extended thing, no matter how small, is in principle divisible into even smaller parts, it is apparent that all material objects are compound beings made up of simple elements. But from this Leibniz concluded that the ultimate constituents of the world must be simple, indivisible, and therefore unextended, particles—dimensionless mathematical points. So the entire world of extended matter is in reality constructed from simple immaterial substances, monads, or entelechies.

It is true, as Horgan points out, that Eureka “does indeed evoke some modern scientific ideas, but in the same blurry way that Christian or Eastern theologies do.” But no attention is being given to the fact that within that blurry resemblance lies the surprising presence of a quasi-mathematical conceptualization of things:

“The assumption of absolute Unity in the primordial Particle includes that of infinite divisibility. Let us conceive the Particle, then, to be only not totally exhausted by diffusion into Space. From the one Particle, as a center, let us suppose to be irradiated spherically—in all directions—to immeasurable but still to definite distances in the previously vacant space—a certain inexpressibly great yet limited number of unimaginably yet not infinitely minute atoms.”

This is a kind of mathematical thinking happening outside the disciplines of mathematics or science. It’s not precise. It’s not designed to do what mathematics does. But the words signify mathematical things. Why? It’s not clear where the inspiration for this impassioned/poetic/intuitive expression lies, and that’s exactly why it’s interesting. This is not the only example of a kind of literary mathematics. Another example that comes to mind was discussed in a 2012 piece by Davide Castelvecchi – Dante’s Universe and Ours.

Dante’s universe, then, can be interpreted as an extreme case of non-Euclidean geometry, one in which concentric spheres don’t just grow at a different pace than their diameters, but at some point they actually stop growing altogether and start shrinking instead. That’s crazy, you say. And yet, modern cosmology tells us that that’s the structure of the cosmos we actually see in our telescopes…

Of course, Dante lived five centuries before any mathematicians ever dreamed of notions of curved geometries. We may never know if his strange spheres were a mathematical premonition or esoteric symbolism or simply a colorful literary device.

I suspect we won’t fully appreciate what’s happening within these literary mathematical ideas without a fuller appreciation of what mathematics is.

Pattern, language and algebra

I’ve spent a good deal of time exploring how mathematics can be seen in how the body lives – the mental magnitudes that are our experience of time and space, the presence of arithmetic reasoning in pre-verbal humans and nonverbal animals, cells in the brain that abstract visual attributes (like verticality), the algebraic forms in language, and probabilistic learning, to name just a few.

But I believe that the cognitive structures on which mathematics is built (and which mathematics reflects) are deep, and interwoven across the whole range of human experience. Perhaps our now highly specialized domains of research are inhibiting our ability to see the depth of these structures. I thought this, again, when a particular study on the neural architecture underlying particular language abilities was brought to my attention. The study, published in the Journal of Cognitive Neuroscience, investigated the presence of this architecture in the newborn brain.

Breaking the linguistic code requires the extraction of at least two types of information from the speech signal: the relations between linguistic units and their sequential position. Further, these different types of information need to be integrated into a coherent representation of language structure. The brain networks responsible for these abilities are well-known in adults, but not in young infants. Our results show that the neural architecture underlying these abilities is operational at birth.

The focus of the study was on the infants’ ability to discriminate patterns in spoken syllables, specifically ABB patterns like “mubaba” from ABC patterns like “mubage.” The experiments were also designed to determine whether the infants could distinguish ABB patterns from AAB patterns; the former is a matter of identifying the repetition, while the latter requires identifying the position of the repetition. Changes in the concentration of oxygenated and deoxygenated hemoglobin were used as indicators of neural activity. Results suggest that the newborn brain can distinguish both ABB and AAB sequences from a sequence without repetition (an ABC sequence), and neural activity was most pronounced in the temporal areas of the left hemisphere. Findings also suggested that newborns are able to distinguish the initial vs. final position of the repetition, with this response being observed more in frontal regions.
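The three pattern types are easy to make concrete with a small sketch. This is purely illustrative (my own code, not anything from the study): it classifies a three-syllable sequence by whether it contains a repetition and, if so, where the repetition falls.

```python
def repetition_pattern(syllables):
    """Classify a three-syllable sequence by where its repetition falls."""
    a, b, c = syllables
    if a != b and b == c:
        return "ABB"  # repetition in final position, e.g. mu-ba-ba
    if a == b and b != c:
        return "AAB"  # repetition in initial position, e.g. ba-ba-mu
    return "ABC"      # no repetition, e.g. mu-ba-ge
```

So `repetition_pattern(["mu", "ba", "ba"])` yields `"ABB"`, while `repetition_pattern(["mu", "ba", "ge"])` yields `"ABC"` – distinguishing ABB from ABC requires only detecting a repetition, while distinguishing ABB from AAB requires tracking its position.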

All of this seems to say that newborns are sensitive to sequential position in speech and can integrate this information with other patterns. This identification of pattern to meaning, or the meaningfulness of position, certainly resembles something about mathematics, where the meaningfulness of pattern and position is everywhere.

The connection between pattern, language and algebra is addressed more directly in a recent paper: Phonological reduplication in sign language (Frontiers in Psychology, 6/2014). Here the role of algebraic rules in American Sign Language, where words are formed by shape and movement, is considered.

This is the statement of how we are to understand rule:

The plural rule generates plural forms by copying the singular noun stem (Nstem) and appending the suffix s to its end (Nstem + s). This simple description entails several critical assumptions concerning mental architecture…First, it assumes that the mind encodes abstract categories (e.g., noun stem, Nstem), and such categories are distinct from their instances (e.g., dog, letter). Second, mental categories are potentially open-ended—they include not only familiar instances (e.g., the familiar nouns dog, cat) but also novel ones. Third, within such category, all instances—familiar or novel—are equal members of this class. Thus, mental categories form equivalence classes. Fourth, mental processes manipulate such abstract categories—in the present case, it is assumed that the plural rule copies the Nstem category. Doing so requires that rules operate on algebraic variables, akin to variables from algebraic numeric operations (e.g., X→X+1). Finally, because rule description appeals only to this abstract category, the rule will apply equally to any of its members, irrespective of whether any given member is familiar or novel, and regardless of its similarity to existing familiar items.
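The quoted rule can be rendered in a few lines of code, and the rendering makes the paper’s point visible: the function mentions only the variable, never any particular noun, so it applies to a novel stem exactly as it does to a familiar one. (The function name is mine, for illustration only.)

```python
def plural(nstem):
    """The plural rule: copy the noun stem and append the suffix -s.

    Because the rule refers only to the abstract category Nstem,
    it applies to a made-up stem like 'wug' exactly as it applies
    to a familiar one like 'dog'."""
    return nstem + "s"
```

Here `plural("dog")` gives `"dogs"` and `plural("wug")` gives `"wugs"` – the rule is blind to whether its argument is familiar, which is what makes Nstem an equivalence class rather than a list of memorized items.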

The hypothesis that the language system encodes algebraic rules is supported by a lot of data, but the paper does include a discussion of the alternative associationist architectures, or connectionist networks, where generalizations don’t depend on abstract classes but rather on specific instances that become associated (like an association between rog-rogs and dog-dogs). The authors argue, however, that algebraic rules provide the best computational explanation for experimental observations of both speakers and signers.

We also note that our evidence for rules does not negate the possibility that some aspects of linguistic knowledge are associative, or even iconic (Ormel et al., 2009; Thompson et al., 2009, 2010, 2012). While these alternative representations and computational mechanisms might be ultimately necessary to offer a full account of the language system, our present results suggest that they are not sufficient. At its core, signers’ phonological knowledge includes productive algebraic rules, akin to the ones previously documented in spoken language phonology.

All of this suggests the presence of deeply rooted algebraic tendencies that we wouldn’t find by looking for hardwired or primitive mathematical abilities. Yet it seems that abstraction and equivalence, in some algebraic sense, just happens as the body lives. The infant is ready to recognize and integrate patterns that will enable linguistic abilities and the signer seems to be operating on equivalence classes with gestures. This should encourage us to look at the formalization of algebraic ideas, and our subsequent investigation of them in mathematics, in a new way.  It’s as if we’re turning ourselves inside-out and successfully harnessing the productivity of abstraction and equivalence.  While these are not the only mathematical things the body does, the fairly specific focus of these studies suggests that abstraction and generalization as actions run deep and broad in our make-up.

Reanimating the living world

Each year, Edge.org asks contributors to respond to their annual question. In 2014, the question was: What scientific idea is ready for retirement? There were 174 interesting responses, but one that got my attention was written by Scott Sampson (author, Dinosaur Odyssey: Fossil Threads in the Web of Life). The idea that Sampson would like to see abandoned is our tendency to think of nature as a collection of objects. It is these objects that we believe we measure, test and study. Sampson identifies this perspective with the “centuries-old trend toward reductionism.”

Reductionist tendencies have been challenged on many fronts, often with an appeal to the notion of emergence – emergent structures, phenomena, or behavior. But our reliance on objectivity is fundamental to our appreciation of science, and the task of refining it to reflect the value of many new insights is a formidable one. Yet, I would argue, a re-evaluation of scientific habits of mind is both necessary and inevitable. Sampson makes the point:

An alternative worldview is called for, one that reanimates the living world. This mindshift, in turn, will require no less than the subjectification of nature. Of course, the notion of nature-as-subjects is not new. Indigenous peoples around the globe tend to view themselves as embedded in animate landscapes replete with relatives; we have much to learn from this ancient wisdom.

Ancient wisdoms are difficult to translate into scientific perspectives. But a number of modern ideas share something with ancient world views nonetheless. These perspectives often demonstrate an emphasis on relationship over substance. And in no small way, they have been aided by the growth of mathematical ideas. The many possibilities for structure that mathematical relations provide have now been effectively employed in biology and cognitive science, as well as physics. Sampson ties an investigation of pattern and form to Leonardo da Vinci whose name always calls to mind the passionate commingling of art and science. And Sampson argues:

The science of patterns has seen a recent resurgence, with abundant attention directed toward such fields as ecology and complex adaptive systems. Yet we’ve only scratched the surface, and much more integrative work remains to be done that could help us understand relationships.

Perhaps even more directly connected to the reanimation or, as Sampson puts it, the subjectification of nature, is work recently reported on the lives of plants. An article in New Scientist (December 3, 2014) provides some of the history of this work as well as current findings.

… in 1900, Indian biophysicist Jagdish Chandra Bose began a series of experiments that laid the groundwork for what some today call “plant neurobiology”. He argued that plants actively explore their environments, and are capable of learning and modifying their behaviour to suit their purposes. Key to all this, he said, was a plant nervous system. Located primarily in the phloem, the vascular tissue used to transport nutrients, Bose believed this allowed information to travel around the organism via electrical signals.

Bose was also well ahead of his time. It wasn’t until 1992 that his idea of widespread electrical signaling in plants received strong support when researchers discovered that wounding a tomato plant results in a plant-wide production of certain proteins – and the speed of the response could only be due to electrical signals and not chemical signals traveling via the phloem as had been assumed. The door to the study of plant behaviour was opened.

The article quotes Daniel Chamovitz, (What A Plant Knows):

“Plants are acutely aware of their environment,” says Chamovitz. “They are aware of the direction of the light and quality of the light. They communicate with each other with chemicals, whether we want to call this taste, or smell, or pheromones. Plants ‘know’ when they are being touched, or when they are being shook by the wind. They integrate all of this information precisely. And they do all of this integration in the absence of a neural system.”

In June 2013, I wrote about researchers who claimed that plants do arithmetic. All of this work not only tells us something about plants, but it broadens our sense for what it means ‘to know,’ what knowing is, and how it happens.

Returning to Sampson, he made this point early in his essay:

To subjectify is to interiorize, such that the exterior world interpenetrates our interior world. Whereas the relationships we share with subjects often tap into our hearts, objects are dead to our emotions. Finding ourselves in relationship, the boundaries of self can become permeable and blurred. Many of us have experienced such transcendent feelings during interactions with nonhuman nature, from pets to forests.

“Interiorizing” is an interesting idea. And I think mathematics may have a role to play in understanding what this could mean on a large scale. Mathematics grows with pure introspection yet seems to be found everywhere around us. It may very well reflect an aspect of nature that is both internal and external in our experience, blurring the boundaries of self. Probability models are used in physics as well as cognitive science; complex systems theories have been applied in biology, economics and technology. In finding sameness among things that appear to be distinct, mathematics discourages separation and, as I see it, objectification.

Continuity, randomness and the Oracle

Flipping through some New Scientist issues from this past year, I was reminded of an article in their July 19 issue that brought together a discussion of the brain and mathematics with particular emphasis on the effectiveness of employing the sometimes counter-intuitive notion of the infinity of the real numbers. The content of the article, Know it all, by Michael Brooks, explores the viability of Alan Turing’s idea of the “oracle” – a computer that could decide undecidable problems. It highlights the work of Emmett Redd and Steven Younger of Missouri State University who think that they see a path to the development of this “super-Turing” computer that would also bring new insight into how the brain works.

The limitations on even the most sophisticated computing tools are essentially a consequence of the limited power of logic. Mathematician Kurt Gödel’s incompleteness theorems show that any consistent system of logical axioms rich enough to express arithmetic will contain unprovable statements. Turing made the same observation about a universal computer built on logic alone: such a computer will inevitably come up against ‘undecidable’ problems, regardless of the amount of processor power available. But Turing did imagine something else.

…An oracle as Turing envisaged it was essentially a black box whose unspecified contents would be able to solve undecidable problems. An “O-machine,” he proposed, would exploit whatever was in this black box to go beyond the bounds of conventional human logic – and so surpass the abilities of every computer ever built.

Brooks then tells us about Hava Siegelmann, a computer scientist working on neural networks – circuits designed to mimic the human brain. Siegelmann wanted to prove the limits of neural networks, despite their great flexibility.

In a neural net, many simple processors are wired together so that the output of one can act as the input of others. These inputs are weighted to have more or less influence, and the idea is that the network “talks” to itself, using its outputs to alter its input weightings until it is performing tasks optimally – in effect, learning as it goes along just as the brain does.
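A minimal sketch can make the wiring Brooks describes concrete. This is my own toy example, not Siegelmann’s construction: two analogue units in series, where the first unit’s output acts as the second’s input, and each output is a continuous value strictly between 0 and 1.

```python
import math

def unit(x, weight):
    """One analogue unit: a weighted input squashed into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-weight * x))

def tiny_net(x, w_hidden, w_out):
    """Two units wired in series: the first unit's output is the second's input."""
    return unit(unit(x, w_hidden), w_out)
```

Whatever the weights, the outputs fall strictly between fully off (0) and fully on (1) – the continuous, analogue range that distinguishes these networks from digital circuits.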

Siegelmann eventually observed an unexpected possibility. She showed that, in theory, if a network were weighted with the infinite, non-repeating digits of an irrational number such as pi, it could transcend the limitations of a universal computer built on logic alone. And this relies, it seems, on the randomness generated by the irrational number’s digits.

While Siegelmann published her proof in 1995, it was not enthusiastically welcomed by fellow computer scientists.

…she soon lost interest too. “I believed it was mathematics only, and I wanted to do something practical,” she says. “I turned down giving any more talks on super-Turing computation.”

Ah, “mathematics only…,” she says.

Redd and Younger, aware of Siegelmann’s work, saw their own work headed in the same direction.

… In 2010, they were building neural networks using analogue inputs that, unlike the conventional digital code of 0 (current off) and 1 (current on), can take a whole range of values between fully off and fully on. There was more than a whiff of Siegelmann’s endless irrational numbers in there. “There is an infinite number of numbers between 0 and 1,” says Redd.

This infinity of numbers between 0 and 1, was one of the first things to intrigue me about mathematics. What are we looking at when we look at this infinity of numbers, whose size is the same as the infinity of the whole line?
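That size claim can be made precise with an explicit bijection: a continuous, strictly increasing function that carries the open interval onto the whole real line, so the two sets have exactly the same cardinality.

```latex
f : (0,1) \to \mathbb{R}, \qquad
f(x) = \tan\!\left(\pi\left(x - \tfrac{1}{2}\right)\right)
```

Its inverse, \(f^{-1}(y) = \tfrac{1}{2} + \tfrac{\arctan y}{\pi}\), sends every real number back into \((0,1)\), pairing off the interval with the entire line point for point.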

In 2011 they approached Siegelmann, by then director of the Biologically Inspired Neural & Dynamical Systems lab at the University of Massachusetts in Amherst, to see if she might be interested in a collaboration. She said yes. As it happened, she had recently started thinking about the problem again, and was beginning to see how irrational-number weightings weren’t the only game in town. Anything that introduced a similar element of randomness or unpredictability might do the trick, too. “Having irrational numbers is only one way to get super-Turing power,” she says.

The route the trio chose was chaos. A chaotic system is one whose response is very sensitive to small changes in its initial conditions. Wire up an analogue neural net in the right way, and tiny gradations in its outputs can be used to create bigger changes at the inputs, which in turn feed back to cause bigger or smaller changes, and so on. In effect, the system becomes driven by an unpredictable, infinitely variable noise.
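The sensitivity Brooks describes is easy to demonstrate with the logistic map, a standard toy chaotic system (my example, not the trio’s network): two trajectories that begin one part in a trillion apart stay indistinguishable for a while and then diverge completely.

```python
def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map x -> r * x * (1 - x), which is chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points differing by one part in a trillion...
a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.2 + 1e-12, 50)
# ...track each other closely at first, then diverge completely.
```

The tiny initial difference roughly doubles at each step, so within a few dozen iterations the two trajectories bear no resemblance to each other, even though each is fully deterministic.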

The idea is met with some skepticism. Scott Aaronson, Professor of Electrical Engineering and Computer Science at MIT, argues that models involving infinities inevitably run into trouble.

People ignore the fact that the physical system cannot implement the idea with perfect precision.

Jérémie Cabessa of the University of Lausanne, Switzerland, co-authored a paper with Siegelmann, published in the International Journal of Neural Systems in September 2014, which supports the idea that “the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.” In his article, however, Brooks is skeptical that such a machine is buildable.

Again, it’s not that the maths doesn’t work – it is just a moot point whether true randomness is something we can harness, or whether it even exists.

Brooks tells us that Turing often speculated about the connection between intrinsic randomness and creative intelligence.

This is not the first pairing of randomness and creativity that I’ve seen. Gregory Chaitin’s work relies heavily on randomness. Metabiology, the field he has introduced, investigates randomly evolving computer software as it relates to “randomly evolving natural software” or DNA.  And here, mathematical creativity is equated with biological creativity.  And Chaitin has remarked (probably more than once) that he doesn’t believe that continuity really works for physics theories, a perspective echoed by Aaronson.   Chaitin leans instead toward a discrete, digital, worldview.

But I find it important to take note here of the fact that the infinities of mathematics, so often problematic within physical theories, have nonetheless very effectively aided our imagination. The continuity of the real numbers is largely characterized by the irrational number and took years of devoted effort to be firmly established in mathematics. In this discussion, the irrational number also opened the door to the effect of randomness in neural networks. Mathematical notions of continuity have been the mind’s way of bridging arithmetic and geometric ideas. These bridges allow conceptual structures to develop. The roots of these ideas are in our experiences of things like space, time and object, but they somehow give the intuition more room to grow. Just a few of the fruits of their development have brought the inaccessible subatomic and intergalactic worlds within reach. Even if the world turns out not to mirror this continuity, the work of Siegelmann, Redd and Younger suggests that the mind might.

Orientation through words and notation

I thought recently, again, about the relationship between the written word and mathematical notation, both being systems of marks that carry meaning. Both systems grow with usage, and both provide some steady refinement of what we are able to see. I’m not so much interested, here, in the relationship between mathematical proficiency and language proficiency, but more in what cognitive processes they share in constructing meaning and, more importantly, in what their shared purposes might be. What can we say, for example, about the value of a good novel?

In a review of Michael Schmidt’s The Novel: A Biography, titled How the Novel Made the Modern World, William Deresiewicz makes the following remark:

The novel reaches in and out at once. Like no other art, not poetry or music on the one hand, not photography or movies on the other, it joins the self to the world, puts the self in the world, does the deep dive of interiority and surveils the social scope. That polarity, that tension…has proved endlessly generative.

The keys here are ‘reaching in and out at once,’ ‘joining the self to the world’ within a process endlessly ‘generative’ – generative of continually changing views of ourselves, of the ways we organize ourselves, and of the possibilities our imagination can offer. The novel makes something new of familiar experience through the skillful arrangement of abstract, discrete and symbolic units of sound. The novel is not identified with these arrangements, but it completely relies on them.

Steven Pinker investigates the relationship between language and thought, and has described the conceptual schemes that are generalized with language. In a TED talk he gave in 2005, before the publication of The Stuff of Thought in 2007, Pinker argues that our use of language

seems to be based on a fixed set of concepts, which govern dozens of constructions and thousands of verbs — not only in English, but in all other languages — fundamental concepts such as space, time, causation and human intention, such as, what is the means and what is the ends? These are reminiscent of the kinds of categories that Immanuel Kant argued are the basic framework for human thought, and it’s interesting that our unconscious use of language seems to reflect these Kantian categories.

Shared constructions don’t care about perceptual qualities, such as color, texture, weight and speed, which virtually never differentiate the use of verbs in different constructions. This kind of generalization, Pinker says, is:

a process of metaphorical abstraction that allows us to bleach these concepts of their original conceptual content — space, time and force — and apply them to new abstract domains, therefore allowing a species that evolved to deal with rocks and tools and animals, to conceptualize mathematics, physics, law and other abstract domains.

This characterization of what language does is essentially the argument in the Lakoff/Núñez book, Where Mathematics Comes From. But mathematics, like language, finds relationships within frameworks of thought that open the door to new thought. In mathematics, this happens by digging deeper into how the mind creates any kind of structure – from the patterns in visual data, to the less understood integration of multi-modal experience.

In a 2009 publication (Geometrein: Measuring the Environment as a Means of Orientation, in Ruedi Bauer, Andrea Gleiniger (eds.): Orientierung – Disorientierung, Lars Müller Publishing, pp. 282–290), Toni Kotnik makes the following observations about mathematics within a discussion of architecture and its relationship to the body orienting itself:

The limits of Euclidean geometry illustrate that the act of orientation, as an active measurement of surrounding space, determines a structure that is both geometric and metric in nature. Both structures can be understood as cognitive structures of knowledge generated by perception and experience – structures that, as individual patterns of action, shape the individual perception of the world…and, conversely, impinge on perception.

Kotnik then discusses Riemann’s famous lecture, “On the Hypotheses Which Lie at the Basis of Geometry,” making the point that these ideas were strongly influenced by the work of philosopher Johann Friedrich Herbart.

According to Herbart, as people move through space, they have a variety of perceptions that do not have an immediate effect on consciousness. Herbart argues that these perceptions undergo a “graded fusion” in the mind… Riemann’s idea of manifolds renders Herbart’s psychological concepts more concrete and precise for mathematical application.

In his conclusion:

Euclid’s geometry and its universalized form in Riemann’s manifolds are examples of how mathematical concepts can be created by formalizing the act of orientation.

Here, again, mathematics is understood as the formalization of the body’s action. Let’s look at words again. In a recent post I discussed Peter Mendelsund’s book What We See When We Read, which takes us back to the novel. In that post, I reproduced this passage of Mendelsund’s:

The world, as we read it, is made of fragments. Discontinuous points – discrete and dispersed.

(So are we. So too our coworkers; spouses; parents; children; friends…)

We know ourselves and those around us by our reading of them, by the epithets we have given them, by their metaphors, synecdoches, metonymies. Even those we love most in the world. We read them in their fragments and substitutions.

The world for us is a work in progress. And what we understand of it we understand by cobbling these pieces together – synthesizing them over time.

It is the synthesis that we know. (It is all we know.)

And all the while we are committed to believing in the totality – the fiction of seeing.

…Authors are curators of experience.

…reading mirrors the procedure by which we acquaint ourselves with the world. It is not that our narratives necessarily tell us something true about the world (though they might), but rather that the practice of reading feels like, and is like, consciousness itself; imperfect; partial; hazy; co-creative.

Writers reduce when they write, and readers reduce when they read. The brain itself is built to reduce, replace, emblemize…Verisimilitude is not only a false idol, but also an unattainable goal. So we reduce. And it is not without reverence that we reduce. This is how we apprehend the world. This is what humans do.

Picturing stories is making reductions. Through reductions, we create meaning.

Here the world itself is a work in progress, of synthesized discrete fragments, and reading mirrors the way we acquaint ourselves with the world.

I’ve been part of discussions among computer scientists and cognitive scientists where the question arises: to what extent is mathematics its formal representation? Modeling mathematical relationships in both digital and analog forms suggests that the formal representation of mathematics is not, in itself, actually the mathematics. I think these are useful questions. But I will continue to argue that the meaning brought about by the discovery of mathematical structure relies on the infinite potential of notation in the same way that the meaning in a story relies on the infinite potential of word constructions. Mathematics is only more abstract to the extent that in doing mathematics, one is driven to explore the depths of cognitive structures themselves, without reference to our experience. But it is the application of this exploration that has fine-tuned sensation, and brought greater depth to what we can know about our being in the world.

A mathematical philosophy – a digital view

I’ve become fascinated with Gregory Chaitin’s exploration of randomness in computing and his impulse to bring these observations to bear on physical, mathematical, and biological theories. His work inevitably addresses epistemological questions – what it means to know, to comprehend – and leads him to move (as he says in a recent paper) in the direction of “a mathematical approach to philosophical questions.” I do not have any expertise in computing (and do not assume the same about my readers) and so I am not in a position to clarify the formal content of his papers. However, the path Chaitin follows is from Leibniz to Hilbert to Gödel and Turing. With his development of algorithmic information theory, he has studied the expression of information in a program, and formalized an expression of randomness.

The paper to which I referred above, Conceptual Complexity and Algorithmic Information, is from this past June. It can be found on academia.edu. As is often the case, Chaitin begins with Leibniz:

In our modern reading of Leibniz, Sections V and VI both assert that the essence of explanation is compression.  An explanation has to be much simpler, more compact, than what it explains.

The idea of ‘compression’ has been used to describe how the brain interprets the myriad repeated sensory signals it receives, like the visual attributes of faces. Language, itself, has been described as cognitive compression. Chaitin reminds us of the Middle Ages’ search for a perfect language that would give us a way to analyze the components of truth, and suggests that Hilbert’s program was a later version of that dream.  And while Hilbert’s program to find a complete formal system for all of mathematics failed, Turing had an idea that has provided a different grasp of the problem.  For Turing,

there are universal languages for formalizing all possible mathematical algorithms, and algorithmic information theory tells us which are the most concise, the most expressive such languages.

Compression is happening in the search for ‘the most concise.’  Chaitin then defines conceptual complexity, which is at the center of his argument.  The conceptual complexity of an object X is

…the size in bits of the most compact program for calculating X, presupposing that we have picked as our complexity standard a particular fixed, maximally compact, concise universal programming language U. This is technically known as the algorithmic information content of the object X denoted H(X)…In medieval terms, H(X) is the minimum number of yes/no decisions that God would have to make to create X.
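Chaitin’s H(X) is uncomputable in general, but any general-purpose compressor gives a computable upper bound on it, and that bound is enough to see Leibniz’s point that an explanation must be more compact than what it explains. Here is a minimal sketch in Python – my own illustration using zlib as a stand-in compressor, not Chaitin’s formalism:

```python
import random
import zlib

def complexity_bound(data: bytes) -> int:
    """Computable upper bound on algorithmic information content:
    the size, in bytes, of a zlib-compressed encoding of the data."""
    return len(zlib.compress(data, 9))

# A patterned object admits a description far shorter than itself...
regular = b"ab" * 5000

# ...while a (pseudo)random one does not.
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10000))

print(complexity_bound(regular))  # a few dozen bytes
print(complexity_bound(noisy))    # close to the full 10000 bytes
```

In Chaitin’s terms, the patterned string has low conceptual complexity, while the random one is, by definition, incompressible – randomness just is the absence of any shorter explanation.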

He employs this idea, this “new intellectual toolkit,” in a brief discussion of mathematics, physics, and evolution, modeling evolution with algorithmic mutations. He also suggests an application of one of the features of algorithmic information theory, to Giulio Tononi’s integrated information theory of consciousness. As I see it, a mathematical way of thinking brings algorithmic information theory to life, which then appears to hold the keys to a clearer view of physical, biological and digital processes.

In his discussion of consciousness Chaitin suggests an important idea – that thought reaches down to molecular activity.

If the brain worked only at the neuronal level, for example by storing one bit per neuron, it would have roughly the capacity of a pen drive, far too low to account for human intelligence. But at the RNA/DNA molecular biology level, the total information capacity is quite immense.

In the life of a research mathematician it is frequently the case that one works fruitlessly on a problem for hours then wakes up the next morning with many new ideas. The intuitive mind has much, much greater information processing capacity than the rational mind. Indeed, it seems capable of exponential search.

We can connect the two levels postulated here by having a unique molecular “name” correspond to each neuron, for example to the proverbial “grandmother cell.” In other words, we postulate that the unconscious “mirrors” the associations represented in the connections between neurons. Connections at the upper conscious level correspond at the lower unconscious level to enzymes that transform the molecular name of one neuron into the molecular name of another. In this way, a chemical soup can perform massive parallel searches through chains of associations, something that cannot be done at the conscious level.

When enough of the chemical name for a particular neuron forms and accumulates in the unconscious, that neuron is stimulated and fires, bringing the idea into the conscious mind.

And long-chain molecules can represent memories or sequences of words or ideas, i.e., thoughts.
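Chaitin’s chemical picture – enzymes rewriting the molecular name of one neuron into the name of another – amounts to a search through chains of associations. A toy sketch of one such chain search, in Python; the association graph here is invented purely for illustration:

```python
from collections import deque

# Hypothetical association graph: each "neuron name" lists the names
# its enzymes can produce. The entries are made up for illustration.
associations = {
    "rain": ["umbrella", "cloud"],
    "cloud": ["sky", "storm"],
    "storm": ["thunder"],
    "umbrella": ["handle"],
}

def chain(start, target):
    """Breadth-first search through chains of associations; returns the
    chain by which `start` leads to `target`, or None if there is none."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in associations.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(chain("rain", "thunder"))  # → ['rain', 'cloud', 'storm', 'thunder']
```

On Chaitin’s account the unconscious runs many such searches in parallel, and only a chain that completes – enough of a neuron’s chemical name accumulating – surfaces as a conscious idea.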

This possibility is suggested in the light of a digital view of things. The paper concludes in this way:

We now have a new fundamental substance, information, that comes together with a digital world-view.

And – most ontological of all – perhaps with the aid of these concepts we can begin again to view the world as consisting of both mind and matter. The notion of mind that perhaps begins to emerge from these musings is mathematically quantified, which is why we declared at the start that this essay pretends to take additional steps in the direction of a mathematical form of philosophy.

The eventual goal is a more precise, quantitative analysis of the concept of “mind.” Can one measure the power of a mind like one measures the power of a computer?

Quantification as a goal can be misunderstood. To many it signifies a deterministic, controllable world. Chaitin’s idea of quantification is motivated by the exact opposite: his systems are necessarily open-ended and creative. Quantification is, rather, evidence of comprehension.

There is one more thing in this paper that I enjoyed reading.  It comes up when he introduces the brain to his discussion of complexity.  I’ll just reproduce it here without comment.

Later in this essay, we shall attempt to analyze human intelligence and the brain. That’s also connected with complexity, because the human brain is the most complicated thing there is in biology. Indeed, our brain is presumably the goal of biological evolution, at least for those who believe that evolution has a goal. Not according to Darwin! For others, however, evolution is matter’s way of creating mind.   (emphasis added)

 

Architecture, orientation and mathematics

Recently, I became intrigued with the discussions of topology that I found among architects and historians of architecture. I saw a few familiar threads running through these discussions – like emergence and the self-organizing principles of biology, together with the view that mathematics was not, primarily, a tool but more a point of view.

I was introduced to the term bioconstructivism in a 2012 paper by John Shannon Hendrix: “Topological Theory in Bioconstructivism.”

Bioconstructivism involves the engagement in architecture of generative models from nature. This is in the tradition of natura naturans in architecture, which is the imitation of the forming principles of nature, as opposed to natura naturata, the direct imitation or mimesis of the forms. According to Plotinus in the Enneads, it is the purpose of all the arts to not just present a “bare reproduction of the thing seen,” the natura naturata, but to “go back to the Ideas from which Nature itself derives”

…An important element of Bioconstructivism is autopoiesis or self-generation, taking advantage of digital modeling and computer programs to imitate the capacity for organisms in nature to organize themselves, or for unorganized or fluid material to consolidate itself, based on the inner active principle of the organism, an “essential force” or “formative drive” which contradicts the mechanistic theories of Galileo, Descartes and Newton. The monad of Leibniz, for example, can self-generate in “integrals” from pre-existing sets of variables, resulting in “continuous multiplicity.”

But I must admit that, from this paper, I didn’t get a clear sense for how topology was used or understood in either architectural design or critique. However, in a 2007 essay on this very topic (on the website RZ-A), I found the following:

Toni Kotnik (the author of ‘The Topology of Type’) believes that the only goal for introducing topology to architecture has been ‘to overcome dialectical strategies of homogeneity or heterogeneity, which dominated the architectural discussion throughout the 20th century’ but rather than giving a real answer to these debates ‘a superficial practice has been established in which every non-linear deformation of the usually used canon of forms gets classified as topological design both by the architect and by the critics.’ (17) He also thinks that the current architecture is not really based on topology but ‘on differentiable dynamical systems and popular spin-offs like chaos theory and fractal geometry’ and suggests that ‘a topological approach to architecture should not be seen as a form-generating tool but as an abstract form of thinking to structure sensorial and rational perceptions in a spatial way’.

I searched for “The Topology of Type,” but found instead Kotnik’s piece “…there is geometry in architecture.” And here, I was excited to find a provocative discussion of mathematics and architecture, broader and more interesting than the more specific discussions of various applications of topological ideas.

Kotnik begins with this:

Within contemporary architectural design the form – rule relationship is often understood as the application of geometric rules in a generative process of form-finding, that is rules are a logico-algebraic text out of which architectural form emerges through the manipulation of data. By looking at the etymological roots of mathematics another reading of geometry can be uncovered that relates geometry back to bodily experience and the question of spatial orientation. This enables the re-introduction of the body into contemporary discourse of digital architecture.

Kotnik makes the point that, historically, geometry has been viewed as something that architects use (or consume), not something that they produce. The introduction of digital computing in architecture, however, opens the door to “the emergence of architectural form out of the manipulation of data” which, Kotnik observes, could introduce scientific thinking and methodology into the design process. He moves, then, into the discussion of mathematics that I so much enjoyed, beginning with the etymology of the word mathematics.

“…mathematics has its roots in the Greek ta mathemata, which means what can be learned, where learning, mathesis, is about the recognition of the unchanged, the stable, of the Being in a world of constant Becoming.” Kotnik argues that mathematics is “the human search for patterns as a means of orientation,” and so “in its original meaning, mathematics is about relating the body with the world around. As such, mathemata is about orientation.”   (emphasis added)

Kotnik briefly surveys the developments in mathematics within cultural histories. Euclidean geometry is seen as “an act of physical orientation based on the creation of an intellectual structure,” shaping the individual perception of the world. He calls attention to Riemann’s reformulation of the foundations of geometry and his concept of the manifold, and notes the influence of the philosopher Johann Friedrich Herbart. Herbart understood that the variety of perceptions people have, as they move through space, undergoes a “graded fusion” that “glues” individual perceptions together to form a geometric image. Related to the argument I made in Cognition, brains and Riemann, Kotnik suggests

Riemann’s idea of manifolds renders Herbart’s psychological concepts more concrete and precise for mathematical application.

…Euclid’s geometry and its universalized form in Riemann’s manifolds are examples of how mathematical concepts can be created by formalizing the act of orientation.

This discussion of mathematics and architecture rests on the idea that mathematics is not primarily a tool, but almost an articulation of sensation and its consequences. Mathematics, perhaps, can be seen as providing alternative ways to organize sensation or, more to the point, as alternative ways to understand what we see.

I’ll conclude with this last bit from Kotnik:

Measuring space is a central activity for structuring our surroundings and an important orientation element in the human environment. However, this environment must not be understood in a limited way as the purely physical environment. It must be seen more broadly and comprehensively as a multilayered perceptual and experiential space. This expanded understanding of space and the incorporation of the subject into space transforms measurement into an act of individual orientation. The importance of such bodily measuring of space, of geometrein, is not only justified by philosophers like Heidegger, Merleau-Ponty or Deleuze but also by developments in contemporary neuroscience. What emerges is an understanding of architecture as a primarily emotional and perceptual experience grounded in biological values that have to be put forward in the design process.

The mathematical nature of self-locating

A 2011 TED talk in London was brought to my attention recently. The speaker, Neil Burgess of University College London, spoke on the topic, “How your brain tells you where you are.” Burgess investigates the role of the hippocampus in spatial navigation and episodic memory. In the talk he describes the function of what are called place cells, boundary-detection cells, and grid cells. I wrote (also in 2011) about a Science Daily report on studies dedicated to understanding how hippocampal neurons represent the temporal organization of experience, in particular, how they bridge the gaps between events that are not continuous. But here I want to focus more on how the brain constructs spatial experience.

Electrophysiological investigations have identified neurons that encode the spatial location and orientation of an animal. Among these are those that have come to be called place cells, head direction cells, and grid cells. Burgess also talked about boundary detection cells in his 2011 TED talk, which seem to be coordinated with head direction. I was particularly struck by the clarity of some of the data images he presented. He showed his audience images produced by the firing of boundary detection cells in response to boundaries in the environment of a rat. In one of them we could see cell firings in response to one of the walls in the rat’s environment. In another, created after a second wall had been added to the environment, the firing was duplicated with respect to the added wall. Burgess also presented the image of cells that fired when the rat was about midway between the walls, and one could see directly that when the rat’s box was expanded, the firing locations expanded. These boundaries needn’t be rectangular walls. They can be the drop at the edge of a table or the wall of a circular box.

Grid cells create representations a little differently, in a quasi-mathematical way, as their name suggests. Burgess describes the rats again:

Now grid cells are found, again, on the inputs to the hippocampus, and they’re a bit like place cells. But now as the rat explores around, each individual cell fires in a whole array of different locations which are laid out across the environment in an amazingly regular triangular grid…So together, it’s as if the rat can put a virtual grid of firing locations across its environment — a bit like the latitude and longitude lines that you’d find on a map, but using triangles. And as it moves around, the electrical activity can pass from one of these cells to the next cell to keep track of where it is, so that it can use its own movements to know where it is in its environment.
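The regular triangular array Burgess describes has a standard mathematical idealization: model a grid cell’s firing rate as the sum of three plane waves whose directions differ by 60 degrees, which peaks on a triangular lattice. A sketch of that textbook model in Python (an idealization from the modeling literature, not Burgess’s own analysis):

```python
import math

def grid_firing(x, y, scale=1.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y): the sum of three
    cosine gratings at 0, 60 and 120 degrees, whose peaks form a
    triangular lattice with vertex spacing `scale`."""
    k = 4 * math.pi / (math.sqrt(3) * scale)  # wave number for that spacing
    rate = 0.0
    for theta in (0.0, math.pi / 3, 2 * math.pi / 3):
        kx, ky = k * math.cos(theta), k * math.sin(theta)
        rate += math.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return rate  # maximal (3.0) exactly at the lattice vertices
```

Shifting `phase` or changing `scale` slides and rescales the whole grid, which is how a population of such cells, taken together, can tile an environment with a virtual coordinate system.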

Both boundary detection cells and grid cells reflect a sensory perception of the environment. But neurons also encode movement from proprioceptive information that can be used to measure the body’s displacement as we move (path integration). This is not movement defined by the environmental changes that occur, but from the body’s sensations of itself.
In a more recent paper, Burgess and co-author C. Burgess describe an interesting test of the errors that can be produced by the iterative neural processing of self-motion.

This process, known as path integration or dead reckoning, requires the animal to update its representation of self-location based on the cumulative estimate of the distance and direction it has traveled. It can be shown that an animal is utilising path integration by introducing a known error into its representation of direction or distance: in the case of the gerbils, if they are rotated prior to the return leg of the journey, and this is done slowly so that the vestibular system does not detect the motion, then the animals head towards the nest with an angular error equal to the amount they were rotated by.
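The gerbil experiment is easy to mimic in a few lines: integrate (distance, heading) steps into a position estimate, then inject an undetected rotation and watch the homing direction shift by exactly that angle. A minimal sketch in Python – the legs of the journey are made up for illustration:

```python
import math

def path_integrate(steps, rotation_error=0.0):
    """Accumulate (distance, absolute heading) steps into a position estimate.
    A constant heading offset models a rotation too slow for the vestibular
    system to detect."""
    x = y = 0.0
    for dist, heading in steps:
        x += dist * math.cos(heading + rotation_error)
        y += dist * math.sin(heading + rotation_error)
    return x, y

# Outbound journey: two legs away from the nest.
outbound = [(3.0, 0.0), (4.0, math.pi / 2)]
x, y = path_integrate(outbound)
home_true = math.atan2(-y, -x)          # direction back to the nest

# The same journey after an undetected 30-degree rotation:
xe, ye = path_integrate(outbound, rotation_error=math.radians(30))
home_biased = math.atan2(-ye, -xe)

# The homing direction is off by exactly the rotation the animal missed.
print(round(math.degrees(home_biased - home_true)))  # → 30
```

Such errors accumulate, of course, which is why pure dead reckoning drifts, and why the boundary and landmark information described above is needed to correct it.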

So when we try to find our way back to something, like where we parked the car, we likely use boundary-detecting cells to remember distances and directions to buildings and boundaries, but we also remember the path we took, represented by the firing of grid cells and path integration. The interaction of these things seems to contribute to the pattern of neural firings that becomes associated with a particular place, a cognitive map of that place, formed by what are called place cells. There is a nice discussion of the history of the study of place cells, which includes a number of images, at BrainFacts.org. There the point is made that the ‘cognitive map’ defined by place cells is a ‘relation among neurons,’ not among points in space.

In brief, we can think of the “map” of a session in terms of space (the spatial relations of firing fields) and time (the tendency for pairs of cells to fire together or not). Since the speed of rats is restricted, these are essentially equivalent. An important concept is that the map is entirely in the brain. In this description, a map is defined by the relation among hippocampal neurons, not by the relationships between neurons and the environment. The linkage to the environment is critical, but does not define the map.
The temporal relations are important for two reasons. First, neurons in the brain do not know about space directly, but they know about time. Neurons can code the timing relations of the neurons that project to it, but not the spatial relations. In other words, within the brain, the map is a timing map that encodes the temporal overlap between cell pairs.

 

There are a few interesting things going on here. No doubt the grid cell idea, and the vector-like measures of displacement that are encoded when we move around, trigger memories of mathematics. They are like our mathematical analyses of the 3-dimensional space of our experience. Place cells, on the other hand, are like another level of abstraction. They seem to have more in common with coding and non-spatial analyses, even though we don’t seem to know how they do what they do. The neurons that fire to represent a particular location have no spatial relationships among themselves. Neighboring place cells do not indicate neighboring environmental areas. And while correlated to sensory input, they are part of a non-sensory system. It is the integration of this system with the neural representations of boundaries, direction, and distance (among other things) that creates our spatial awareness. Certainly sensory information is being subjected to some kind of transformation. Spatial relations are translated into what look like purely temporal ones (the timing of neuron firing). The non-sensory system then stores a coded representation of a sensory one. Here again we see, not the mathematical modeling of brain processes, but more their mathematical nature.

What we see when….

I recently listened to Krys Boyd’s interview with Peter Mendelsund, author of the new book What We See When We Read,  on North Texas’ public radio. Mendelsund is an award-winning book jacket designer. The interview had the effect of connecting his thoughts about reading to thoughts that I have had about mathematics. It wasn’t immediately obvious, even to me, why. But I think I’m beginning to understand.

An excerpt from the book was published in the Paris Review. This excerpt focuses on the incompleteness of the visual images that our minds create when we are reading, despite the fact that we experience them as clear or vivid. Mendelsund quotes William Gass who wrote on the character of Mr. Cashmore from Henry James’s The Awkward Age:

We can imagine any number of other sentences about Mr. Cashmore added … now the question is: what is Mr. Cashmore? Here is the answer I shall give: Mr. Cashmore is (1) a noise, (2) a proper name, (3) a complex system of ideas, (4) a controlling perception, (5) an instrument of verbal organization, (6) a pretended mode of referring, and (7) a source of verbal energy.

The quote is from the book Fiction and the Figures of Life, a collection of essays first published in 1970.  Following Gass a little further we find these remarks:

But Mr. Cashmore is not a person. He is not an object of perception, and nothing whatever that is appropriate to persons can be correctly said of him. There is no path from idea to sense (this is Descartes’ argument in reverse), and no amount of careful elaboration of Mr. Cashmore’s single eyeglass, his upper lip or jauntiness is going to enable us to see him.

Mendelsund adds this:

It is how characters behave, in relation to everyone and everything in their fictional, delineated world, that ultimately matters…

Though we may think of characters as visible, they are more like a set of rules that determines a particular outcome. A character’s physical attributes may be ornamental, but their features can also contribute to their meaning.

(What is the difference between seeing and understanding?)

He follows this with a very mathematical-looking statement where the characters (along with some physical attributes), as well as particular events and their cultural environment, are represented by letters. Their interaction is somehow formalized in symbol.

These are all words that have been used with respect to mathematics – “not an object of perception,” “behavior that matters only in relation,” “a set of rules that determines a particular outcome…”

Mendelsund occasionally uses mathematical ideas to describe some of what may be happening in the reading (and the writing) of a story. There are the maps of novels, the graphs and contours of plot, the vectors in Kafka’s vision of New York City. And these observations:

Anna can be described as several discrete points (her hands are small; her hair is dark and curly) or through a function (Anna is graceful)

If we don’t have pictures in our minds when we read, then it is the interaction of ideas – the intermingling of abstract relationships – that catalyzes feeling in us readers. This sounds like a fairly unenjoyable experience, but, in truth, this is also what happens when we listen to music. This relational, nonrepresentational calculus is where some of the deepest beauty in art is found. Not in mental pictures of things but in the play of elements…

…But we don’t see “meaning.” Not in the way that we see apples or horses…

Words are like arrows – they are something and they also point toward something.

Any text can be seen as communication through words (symbols), which can be aided by pictures but relies on them only lightly. The reader builds an internally consistent world, grounded mainly in concepts, whose structure is communicated in symbol. And neither structure nor meaning is ever fully complete. This certainly sounds a lot like mathematics. But more striking about Mendelsund’s work in particular is his making direct use of his experience to explore profound philosophical questions.  What happens when we read tells us something about ourselves.

The world, as we read it, is made of fragments. Discontinuous points – discrete and dispersed.

(So are we.  So too our coworkers; spouses; parents; children; friends…)

We know ourselves and those around us by our reading of them, by the epithets we have given them, by their metaphors, synecdoches, metonymies.  Even those we love most in the world.  We read them in their fragments and substitutions.

The world for us is a work in progress.  And what we understand of it we understand by cobbling these pieces together – synthesizing them over time.

It is the synthesis that we know.  (It is all we know.)

And all the while we are committed to believing in the totality – the fiction of seeing.

…Authors are curators of experience.

…reading mirrors the procedure by which we acquaint ourselves with the world. It is not that our narratives necessarily tell us something true about the world (though they might), but rather that the practice of reading feels like, and is like, consciousness itself; imperfect; partial; hazy; co-creative.

Writers reduce when they write, and readers reduce when they read. The brain itself is built to reduce, replace, emblemize…Verisimilitude is not only a false idol, but also an unattainable goal. So we reduce. And it is not without reverence that we reduce. This is how we apprehend the world. This is what humans do.

Picturing stories is making reductions. Through reductions, we create meaning.

There is significant overlap here with how I see the doing and the making of mathematics.  Mathematics is the making of meaning through reduction and synthesis.  Emerging from some adjustment in the direction of the mind’s eye, mathematics mirrors, in another way, how we are acquainted with the world. It finds meaning that opens up other parts of that world for us.  And it tells us something about the nature of vision and understanding itself. Mathematics will not be fully embraced by our culture until we see this – until we recognize its own living nature.

 

What mathematics can make of our intuition

The CogSci 2014 Proceedings have been posted and there are a number of links to interesting papers.

Here are some math-related investigations:

A neural network model of learning mathematical equivalence

The Psychophysics of Algebra Expertise:  Mathematics Perceptual Learning Interventions Produce Durable Encoding Changes

Two Plus Three is Five:  Discovering Efficient Addition Strategies without Metacognition

Modeling probability knowledge and choice in decisions from experience

Simplicity and Goodness-of-fit in Explanation:  The Case of Intuitive Curve-Fitting

Cutting In Line:  Discontinuities in the Use of Large Numbers by Adults

Applying Math onto Mechanism:  Investigating the Relationship Between Mechanistic and Mathematical Understanding

Pierced by the number line: Integers are associated with back-to-front sagittal space

Equations Are Effects: Using Causal Contrasts to Support Algebra Learning

One of the presentations I attended is represented by the paper Are Fractions Natural Numbers Too? This study challenges the argument that human cortical structures are ill-suited for processing fractions, a view which has been used to justify the well-documented difficulty that many children have with learning fractions.

Such accounts argue that the cognitive system for processing number, the approximate number system (ANS), is fundamentally designed to deal with discrete numerosities that map onto whole number values. Therefore, according to innate constraints theorists, fractions and rational number concepts are difficult because they lack an intuitive basis and must instead be built from systems originally developed to support whole number understanding.

…Emerging data from developmental psychology and neuroscience suggest that an intuitive (perhaps native) perceptually based cognitive system for grounding fraction knowledge may indeed exist. This cognitive system seems to represent and process amodal magnitudes of non-symbolic ratios (such as the relative length of two lines).

This particular study is fairly well-focused, however.  Researchers aim to demonstrate a link between our sensitivity to non-symbolic ratios and the acquired understanding of magnitudes represented by symbolic fractions.  Given this focus, the study looked at individual responses to “cross-format comparisons of various fractional values (i.e. ratios composed of dots or circles vs. traditional fraction symbols).”  For example, a symbolic ratio was presented with a numerical numerator and a numerical denominator.  The dot stimulus would show an array of dots of a certain quantity in the numerator, and another array, of a different quantity, in the denominator.  The circle stimulus showed a blackened disc of a certain area in the numerator and another, of a different area, in the denominator.  With a reasonable amount of care taken in their analysis, the authors concluded that they had found evidence of “flexible and accurate processing of non-symbolic fractional magnitudes in ways similar to ANS processing of discrete numerosities.”

Considered in concert with other recent findings, our evidence suggests that humans may have an intuitive “sense” of ratio magnitudes that may be as compatible with our cortical machinery as is the “sense” of natural number. Just as the ANS allows us to perceive the magnitudes of discrete numerosities, this ratio sense provides humans with an intuitive feel for non-integer magnitudes.

An important consequence of this kind of evidence is the suggestion that the widespread difficulty with fractions may be the result of teaching fractions incorrectly – with partitioning or sharing ideas that use counting skills and whole number magnitudes, instead of encouraging the use of what may be our intuitive ratio processing system.  This point was driven home for me when I looked at the circle representations of ratios that were used in the study.  I found them very effective, very readable.

This view is certainly consistent with the proposal that a mental representation of continuous magnitudes predates discrete counting numbers (as with Gallistel, et al.).  But I also think that this initiative points to something likely to be important in cognitive science as well as math education.  My own hunch is that an intuitive sense of ratio is likely grounded in continuous magnitudes, like length and area, or perhaps even in tactile sensations of measure, as in cooking, as was suggested by one of the paper’s authors.  And I think it plays some role in the long debate over the relationship between discrete and continuous number that can be seen in the history of mathematics.  One could argue that the ancient Greeks’ rigorous distinction between number and magnitude contributed to their remarkable development of geometric ideas.  With the number concept isolated from the geometric idea of magnitude, perhaps their geometric efforts were liberated, allowing a focused elaboration on that ‘intuitive sense of ratio,’ extending it, permitting manifold and deep results.  Understanding this cultural event in the light of cognitive processes might inform our ideas about how mathematics emerges, as well as how to communicate that development in mathematics education.