The conceptual plasticity of ancient Babylonian astronomers

A recent discovery in the history of science and mathematics has prompted a number of articles, links to which are provided at the end of this text. Astrophysicist and science historian Mathieu Ossendrijver, of Humboldt University in Berlin, made the observation that ancient Babylonian astronomers calculated Jupiter’s position from the area under a time-velocity graph. He recently published his findings in the journal Science, and Philip Ball reported on them in Nature.

A reanalysis of markings on Babylonian tablets has revealed that astronomers working between the fourth and first centuries BC used geometry to calculate the motions of Jupiter — a conceptual leap that historians thought had not occurred until fourteenth-century Europe.

Also from Ball,

Hermann Hunger, a specialist on Babylonian astronomy at the University of Vienna, says that the work marks a new discovery. However, he and Ossendrijver both point out that Babylonian mathematicians were well accustomed to geometry, so it is not entirely surprising that astronomers might have grasped the same points. “These findings do not so much show a higher degree of sophistication in geometric thinking, but rather a remarkable ability to apply traditional Babylonian geometric thinking to a new problem”, Hunger says. (emphasis added).

This is learning or cognition at every level – applying an established thought or experience to a new problem.

An NPR report on the discovery takes note of the fact that Jupiter was associated with the Babylonian god Marduk. NPR’s Nell Greenfieldboyce comments:

Of course, these priests wanted to track Jupiter to understand the will of their god Marduk in order to do things like predict future grain harvests. But still, they had the insight to see that the same math used for working with mundane stuff like land could be applied to the motions of celestial objects.

And NYU’s Alexander Jones replies:

They’re, in a way, like modern scientists. In a way, they’re very different. But they’re still coming up with very, you know – things that we can recognize as being like what we value as mathematics and science.

I think Greenfieldboyce’s “but still…” and Jones’ follow-up comment betray an unnecessary dismissal of this non-scientific motivation. Discoveries like this one challenge a number of consensus views: the standard accounts of the history of science and mathematics, our understanding of conceptual development in individuals as well as in cultures, and the overlaps that exist among science, mathematics, art, and religion. In any case, in the years that I’ve been teaching mathematics, I’ve tried to reassure my students that there’s a reason learning calculus can be so difficult. I suggest to them that the development of calculus relies on a cognitive shift: the quantification of change and movement, of time and space. The attention given this Babylonian accomplishment highlights that fact. The accomplishment is stated most clearly at the end of Ossendrijver’s paper:

Ancient Greek astronomers such as Aristarchus of Samos, Hipparchus, and Claudius Ptolemy also used geometrical methods, while arithmetical methods are attested in the Antikythera mechanism and in Greco-Roman astronomical papyri from Egypt. However, the Babylonian trapezoid procedures are geometrical in a different sense than the methods of the mentioned Greek astronomers, since the geometrical figures describe configurations not in physical space but in an abstract mathematical space defined by time and velocity (daily displacement). (emphasis added)

The distinction being made here is between imagining the idealized spatial figures of geometry against the spatial arrangement of observed celestial objects, and imagining those same figures as a description of the relations within a purely numeric set of measurements. That is a major difference, one that opens a path to modern mathematics.

Ossendrijver also describes, a bit more specifically, the 14th-century European scholars whose work, until now, was seen as the first use of these techniques:

The “Oxford calculators” of the 14th century CE, who were centered at Merton College, Oxford, are credited with formulating the “Mertonian mean speed theorem” for the distance traveled by a uniformly accelerating body, corresponding to the modern formula s = t•(v0 + v1)/2, where v0 and v1 are the initial and final velocities.  In the same century Nicole Oresme, in Paris, devised graphical methods that enabled him to prove this relation by computing it as the area of a trapezoid of width t and heights v0 and v1.
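The theorem amounts to a single trapezoid-area computation, and it is the same computation Ossendrijver identifies on the tablets. A minimal Python sketch, with illustrative numbers of my own rather than values from the tablets:

```python
def distance_travelled(t, v0, v1):
    """Mertonian mean speed theorem: the distance covered in time t by a
    body whose velocity changes uniformly from v0 to v1 is the area of a
    trapezoid of width t and parallel sides v0 and v1."""
    return t * (v0 + v1) / 2

# Hypothetical values: Jupiter's daily displacement along the ecliptic
# (degrees per day) at the start and end of a 60-day interval.
v0, v1 = 0.22, 0.14
t = 60
print(distance_travelled(t, v0, v1))  # total displacement in degrees (~10.8)
```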

Mark Thakkar wrote an article on these calculators for a 2007 issue of Oxford Today with emphasis on the imaginative nature of their analyses.

These scholars busied themselves with quantitative analyses of qualities such as heat, colour, density and light. But their experiments were those of the imagination; practical experiments would have been of little help in any case without suitable measuring instruments. Indeed, some of the calculators’ works, although ostensibly dealing with the natural world, may best be seen as advanced exercises in logic.

Thakkar reminds the reader that these men were philosophers and theologians, many of whom went on “to enjoy high-profile careers in politics or the Church.”

He also says this:

…with hindsight we can see that the calculators made an important advance by treating qualities such as heat and force as quantifiable at all, even if only theoretically. Although the problems they set themselves stemmed from imaginary situations rather than actual experiments, they nonetheless ‘introduced mathematics into scholastic philosophy’, as Leibniz put it. This influential move facilitated the full-scale application of mathematics to the real world that characterized the Scientific Revolution and culminated triumphantly in Newton’s laws of motion.

I find it worth noting that reports of the ancient Babylonian accomplishment, as well as accounts of thinkers once credited with being the originators of these novel conceptualizations, necessarily include the theological considerations of their cultures.

Links:

Babylonian Astronomers Used Geometry to Track Jupiter

Full paper

NYTimes article

NPR


Testable thoughts?

Quanta Magazine has a piece on a recent conference in Munich where scientists and philosophers discussed the history and future of scientific inquiry. The meeting seems to have been motivated mostly by two things. The first is the diminishing prospects for physics experiments – energy levels that can’t be reached by accelerators and the limits of our cosmic horizon. The second is the debate over the value of untestable theories like string theory. Speakers and program can be found here.

But the underlying issues are, unmistakably, very old epistemological questions about the nature of truth and the acquisition of knowledge. And these questions highlight how impossible it is to disentangle mathematics from science.

Natalie Wolchover writes:

The crisis, as Ellis and Silk tell it, is the wildly speculative nature of modern physics theories, which they say reflects a dangerous departure from the scientific method. Many of today’s theorists — chief among them the proponents of string theory and the multiverse hypothesis — appear convinced of their ideas on the grounds that they are beautiful or logically compelling, despite the impossibility of testing them. Ellis and Silk accused these theorists of “moving the goalposts” of science and blurring the line between physics and pseudoscience. “The imprimatur of science should be awarded only to a theory that is testable,” Ellis and Silk wrote, thereby disqualifying most of the leading theories of the past 40 years. “Only then can we defend science from attack.”

Unfortunately, defending science from attack has become more urgent, and that urgency often colors the debate, even when it is not acknowledged as a concern.

Reference was made to Karl Popper who, in the 1930s, proposed falsifiability as the criterion for establishing whether a theory is scientific. But the perspective reflected in Bayesian statistics has become an alternative.

Bayesianism [is] a modern framework based on the 18th-century probability theory of the English statistician and minister Thomas Bayes. Bayesianism allows for the fact that modern scientific theories typically make claims far beyond what can be directly observed — no one has ever seen an atom — and so today’s theories often resist a falsified-unfalsified dichotomy. Instead, trust in a theory often falls somewhere along a continuum, sliding up or down between 0 and 100 percent as new information becomes available. “The Bayesian framework is much more flexible” than Popper’s theory, said Stephan Hartmann, a Bayesian philosopher at LMU. “It also connects nicely to the psychology of reasoning.”
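The sliding continuum Wolchover describes is Bayes’ rule applied repeatedly. A minimal Python sketch, with made-up numbers (the prior and likelihoods are purely illustrative):

```python
def bayes_update(prior, p_e_given_theory, p_e_given_not_theory):
    """Return the posterior credence in a theory after evidence E,
    via Bayes' rule: P(T|E) = P(E|T) P(T) / P(E)."""
    p_e = p_e_given_theory * prior + p_e_given_not_theory * (1 - prior)
    return p_e_given_theory * prior / p_e

# A theory starts at 30 percent credence; an observation is three times
# likelier if the theory is true than if it is false.
credence = 0.30
credence = bayes_update(credence, 0.6, 0.2)
print(credence)  # credence slides up the continuum to about 0.56
```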

When Wolchover made the claim that rationalism guided Einstein toward his theory of relativity, I started thinking beyond the controversy over the usefulness of string theory.

“I hold it true that pure thought can grasp reality, as the ancients dreamed,” Einstein said in 1933, years after his theory had been confirmed by observations of starlight bending around the sun.

This reference to ‘the ancients’ brought me back to my recent preoccupation with Platonism.  The idea that pure thought can grasp reality is a provocative one, full of hidden implications about the relationship between thought and reality that have not been explored. It suggests that thought itself has some perceiving function, some way to see. It reminds me again of Leibniz’s philosophical dream, where he found himself in a cavern with “little holes and almost imperceptible cracks” through which “a trace of daylight entered.” But the light was so weak, it “required careful attention to notice it.” His account of the action in the cavern (translated by Donald Rutherford) includes this:

…I began often to look above me and finally recognized the small light which demanded so much attention. It seemed to me to grow stronger the more I gazed steadily at it. My eyes were saturated with its rays, and when, immediately after, I relied on it to see where I was going, I could discern what was around me and what would suffice to secure me from dangers. A venerable old man who had wandered for a long time in the cave and who had had thoughts very similar to mine told me that this light was what is called “intelligence” or “reason” in us. I often changed position in order to test the different holes in the vault that furnished this small light, and when I was located in a spot where several beams could be seen at once from their true point of view, I found a collection of rays which greatly enlightened me. This technique was of great help to me and left me more capable of acting in the darkness.

It reminds me also of Plato’s simile of the sun, where Plato observes that sight is bonded to something else, namely light, or the sun itself.

Then the sun is not sight, but the author of sight who is recognized by sight.

And the soul is like the eye: when resting upon that on which truth and being shine, the soul perceives and understands, and is radiant with intelligence…

Despite the fact that the Munich conference might be attached to the funding prospects for untestable theories, or the need to distinguish scientific theories from things like intelligent design, there is no doubt that we are still asking some very old, multi-faceted questions about the relationship between thought and reality.  And mathematics is still at the center of the mystery.

Plato, Gödel and quantum mechanics

I’ve been reading Rebecca Goldstein’s Incompleteness: The Proof and Paradox of Kurt Gödel which, together with my finding David Mumford’s Why I am a Platonist, has kept me a bit more preoccupied, of late, with Platonism. This is not an entirely new preoccupation. I remember one of my early philosophy teachers periodically blurting out, “See, Plato was right!” And I hate to admit that I often wondered, “About what?” But Plato’s idea has become a more pressing issue for me now. It inevitably touches on epistemology, questions about the nature of the mind, as well as the nature of our physical reality. Gödel’s commitment to a Platonic view is particularly striking to me because of how determined he appears to have been about it. Plato, incompleteness, objectivity, they all came to mind again when I saw a recent article in Nature by Davide Castelvecchi on a new connection between Gödel’s incompleteness theorems and unsolvable calculations in quantum physics.

Gödel intended for his proof of mathematics’ incompleteness to serve the notion of objectivity, but it was an objectivity diametrically opposed to the objectivity of the positivists, whose philosophy was gaining considerable momentum at the time. Goldstein quotes from a letter Gödel wrote in 1971 criticizing the direction that positivist thought took.

Some reductionism is correct, [but one should] reduce to (other) concepts and truths, not to sense perceptions….Platonic ideas are what things are to be reduced to.

The objectivity preserved when we restrict a discussion to sense perceptions rests primarily on the fact that we believe that we are seeing things ‘out there,’ that we are seeing objects that we can identify. This in itself is often challenged by the observation that what we see ‘out there’ is completely determined by the body’s cognitive actions. Quantum physics, however, has challenged our claims to objectivity for other reasons, like the wave/particle behavior of light and our inability to measure a particle’s position and momentum simultaneously. While there is little disagreement about the fact that mathematics manages to straddle the worlds of both thought and material, uniquely and successfully, little progress has been made in explaining why. I often consider that Plato’s view of mathematics has something to say about this.

What impresses me at the moment is that Gödel’s Platonist view, and the implications of his Incompleteness Theorems, are important to an epistemological discussion of mathematics. Gregory Chaitin shares Gödel’s optimistic view of incompleteness, recognizing it as a confirmation of mathematics’ infinite creativity. And in Chaitin’s work on metabiology, described in his book Proving Darwin, mathematical creativity parallels biological creativity – the persistent creative action of evolution.  In an introduction to a course on metabiology, Chaitin writes the following:

in my opinion the ultimate historical perspective on the significance of incompleteness may be that Gödel opens the door from mathematics to biology.

Gödel’s work concerned formal systems – abstract, symbolic organizations of terms and the relationships among them. Such a system is complete if, for everything that can be stated in the language of the system, either the statement or its negation can be proved within the system. In 1931, Gödel established that, in mathematics, this is not possible. What he proved is that given any consistent axiomatic theory, developed enough to enable the proof of arithmetic propositions, it is possible to construct a proposition that can be neither proved nor disproved by the given axioms. He proved further that no such system can prove its own consistency (i.e. that it is free of contradiction). For Gödel, this strongly supports the idea that the mathematics thus far understood explores a mathematical reality that exceeds what we know of it. Again from Goldstein:

Gödel was able to twist the intelligence-mortifying material of paradox into a proof that leads us to deep insights into the nature of truth, and knowledge, and certainty. According to Gödel’s own Platonist understanding of his proof, it shows us that our minds, in knowing mathematics, are escaping the limitations of man-made systems, grasping the independent truths of abstract reality.
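Stated compactly, in a standard modern formulation (my paraphrase, not Gödel’s or Goldstein’s wording):

```latex
\textbf{First incompleteness theorem.} If $T$ is a consistent, effectively
axiomatized theory strong enough to prove basic arithmetic, then there is a
sentence $G_T$ such that
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T .
\]
\textbf{Second incompleteness theorem.} For the same $T$,
\[
  T \nvdash \mathrm{Con}(T),
\]
where $\mathrm{Con}(T)$ is the arithmetic sentence expressing the consistency of $T$.
```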

In 1936, Alan Turing replaced Gödel’s arithmetic-based formal system with carefully described hypothetical devices. These devices would be able to perform any mathematical computation that could be represented as an algorithm. Turing then proved that there exist problems that cannot be effectively computed by such a device (now known as a ‘Turing machine’), and that it is not possible to devise a Turing machine program that can determine, in finite time, whether an arbitrary machine will ever halt on a given input (the halting problem).
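The diagonal argument at the heart of the halting problem fits in a few lines. A Python sketch of the contradiction (the oracle `halts` is hypothetical; Turing’s point is that it cannot exist):

```python
def halts(program, data):
    """Hypothetical oracle deciding whether program halts on data.
    Turing proved no such function can exist."""
    raise NotImplementedError

def diagonal(program):
    """Do the opposite of whatever the oracle predicts for a program
    run on its own source."""
    if halts(program, program):
        while True:   # oracle says it halts, so loop forever
            pass
    return            # oracle says it loops, so halt immediately

# Asking whether diagonal halts on itself is contradictory either way:
# halts(diagonal, diagonal) can be neither True nor False.
```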

The paper that Castelvecchi discusses is one which brings Gödel’s theorems to quantum mechanics:

In 1931, Austrian-born mathematician Kurt Gödel shook the academic world when he announced that some statements are ‘undecidable’, meaning that it is impossible to prove them either true or false. Three researchers have now found that the same principle makes it impossible to calculate an important property of a material — the gaps between the lowest energy levels of its electrons — from an idealized model of its atoms.

Here, our ability to comprehend a quantum state of affairs and our ability to observe are tightly knit.

Again from Castelvecchi:

Since the 1990s, theoretical physicists have tried to embody Turing’s work in idealized models of physical phenomena. But “the undecidable questions that they spawned did not directly correspond to concrete problems that physicists are interested in”, says Markus Müller, a theoretical physicist at Western University in London, Canada, who published one such model with Gogolin and another collaborator in 2012.

The work described in the article concerns what’s called the spectral gap, which is the gap between the lowest energy level that electrons can occupy and the next one up. The presence or absence of the gap determines some of the material’s properties. The paper’s first author, Toby S. Cubitt, is a quantum information theorist and the paper is a direct application of Turing’s work. In other words, a Turing machine is constructed where the spectral gap depends on the outcome of a halting problem.

What I find striking is that the objectivity issues raised by the empiricists and positivists of the 1920s are not the same as the objectivity issues raised by quantum mechanics. But the notion of undecidability must be deep indeed. It persists and continues to be relevant.

Plato, graphs, vision and another anchor


I’m not sure what led me to David Mumford’s Why I am a Platonist,  which appeared in a 2008 issue of the European Mathematical Society (EMS) Newsletter, but I’m happy I found it. David Mumford is currently Professor Emeritus at Brown and Harvard Universities. The EMS piece is a clear and straightforward exposition of Mumford’s Platonism, which he defines in this way:

The belief that there is a body of mathematical objects, relations and facts about them that is independent of and unaffected by human endeavors to discover them.

Mumford steers clear of the impulse to place these objects, relations, and facts somewhere, like outside time and space. He relies, instead, on the observation that the history of mathematics seems to tell us that mathematics is “universal and unchanging, invariant across time and space.” He illustrates this with some examples, and uses the opportunity to also argue for a more multicultural perspective on the history of mathematics than is generally taught.

But Mumford’s piece is more than a contribution to what many now think is an irrelevant debate about whether mathematics is created or discovered. As the essay unwinds, he begins to talk about the kind of thing that, in many ways, steers the direction of my blog.

So if we believe that mathematical truth is universal and independent of culture, shouldn’t we ask whether this is uniquely the property of mathematical truth or whether it is true of more general aspects of cognition? In fact, “Platonism” comes from Plato’s Republic, Book VII and there you find that he proposes “an intellectual world”, a “world of knowledge” where all things pertaining to reason and truth and beauty and justice are to be found in their full glory (cf. http://classics.mit.edu/Plato/republic.8.vii.html).

Mumford briefly discusses the Platonic view of ethical principles and of language, making the observation that all human languages can be translated into each other (“with only occasional difficulties”). This, he argues, suggests a common conceptual substrate. He notes that people have used graphs of one kind or another to organize concepts and proceeds to argue that studies in cognition have extended this same idea into things like semantic nets or Bayesian networks, with the understanding that knowledge is the structure given to the world of concepts. Mathematics, then, is characterized as pure structure, the way we hold it all together.
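To make those graphical links concrete, here is a toy semantic net in Python; the concepts and relations are my own illustrative choices, not Mumford’s:

```python
# A toy semantic net: concepts as nodes, labeled relations as edges.
semantic_net = {
    ("canary", "is_a"): "bird",
    ("bird", "is_a"): "animal",
    ("bird", "can"): "fly",
}

def inherits(concept, relation, value, net):
    """Follow 'is_a' links upward, so a canary inherits what holds of
    birds -- a simple case of activated concepts interacting via
    their links."""
    if net.get((concept, relation)) == value:
        return True
    parent = net.get((concept, "is_a"))
    return parent is not None and inherits(parent, relation, value, net)

print(inherits("canary", "can", "fly", semantic_net))  # True
```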

And here’s the key to this discussion for me. Mumford proposes that the Platonic view can be understood by looking at the body.

Brian Davies argues that we should study fMRI’s of our brains when we think about 5, about Gregory’s formula or about Archimedes’ proof and that these scans will provide a scientific test of Platonism. But the startling thing about the cortex of the human brain is how uniform its structure is and how it does not seem to have changed in any fundamental way within the whole class of mammals. This suggests that mental skills are all developments of much simpler skills possessed, e.g. by mice. What is this basic skill? I would suggest that it is the ability to convert the analog world of continuous valued signals into a discrete representation using concepts and to allow these activated concepts to interact via their graphical links. The ability of humans to think about math is one result of the huge expansion of memory in homo sapiens, which allows huge graphs of concepts and their relations to be stored and activated and understood at one and the same time in our brains. (emphasis added)

Mumford ends this piece with what is likely the beginning of another discussion:

How do I personally make peace with what Hersh calls “the fatal flaw” of dualism? I like to describe this as there being two orthogonal sides of reality. One is blood flow, neural spike trains, etc.; the other is the word ‘loyal’, the number 5, etc. But I think the latter is just as real, is not just an epiphenomenon and that mathematics provides its anchor. (emphasis added)

David Mumford now studies the mathematics of vision. He has a blog where he discusses his work on vision, his earlier work in Algebraic Geometry, and other things.  In his introductory comments to the section on vision he says the following:

What is “vision”? It is not usually considered as a standard field of applied mathematics but in the last few decades it has assumed an identity of its own as a multi-disciplinary area drawing in engineers, computer scientists, statisticians, psychologists and biologists as well as mathematicians. For me, its importance is that it is a point of entry into the larger problem of the scientific modeling of thought and the brain. Vision is a cognitive skill that, on the one hand, is mastered by many lower animals while, on the other hand, has proved very hard to duplicate on a computer. This level of difficulty makes it an ideal test bed for theorizing on the subtler talents manifested by humans.

The section on vision provides a narrative that describes some of the hows and whys for particular mathematical efforts that are being used. And each one of these disciplines has its own section with links to references.

A 2011 post of mine was inspired, in part, by the idea that what Plato was actually saying is consistent with even the most brain-based thoughts on how we come to know anything. I took note of a few statements from what has been called the simile of the sun.

And the power which the eye possesses is a sort of effluence which is dispensed from the sun

Then the sun is not sight, but the author of sight who is recognized by sight

And the soul is like the eye: when resting upon that on which truth and being shine, the soul perceives and understands, and is radiant with intelligence; but when turned toward the twilight of becoming and perishing, then she has opinion only, and goes blinking about, and is first of one opinion and then of another, and seems to have no intelligence.


“…an anchor in the cosmic swirl.”

Looking through some blog sites that I once frequented (but have recently neglected) I saw that John Horgan’s Cross Check had a piece on George Johnson’s book Fire in the Mind: Science, Faith, and the Search for Order. This quickly caught my attention because Horgan and Johnson figured prominently in my mind in the late 90’s. In the first paragraph Horgan writes:

Fire alarmed me, because it challenged a fundamental premise of The End of Science, which I was just finishing.

In the mid-nineties, I knew that Horgan was a staff writer for Scientific American and I had kept one of his pieces on quantum physics in my file of interesting new ideas. When I heard about The End of Science I got a copy and very much enjoyed it. I had begun writing, and was trying to create a new beginning for myself. This included my decision to leave New York (where I had lived my whole life) and Manhattan in particular, where I had lived for about seventeen years. In the end, it was Johnson’s book that gave my move direction. I wouldn’t just move to a place that was warmer, prettier, and easier. I decided to move to Santa Fe, New Mexico.

In his original review of Fire in the Mind, Horgan produced a perfect summary of the reasons I chose Santa Fe. He reproduced this review on his blog in response to the release of a new edition:

In New Mexico, the mountains’ naked strata compel historical, even geological, perspectives. The human culture, too, is stratified. Scattered in villages throughout the region are Native Americans such as the Tewa, whose creation myths rival those of modern cosmology in their intricacy. Exotic forms of Christianity thrive among both the Indians and the descendants of the Spaniards who settled here several centuries ago. In the town of Truchas, a sect called the Hermanos Penitentes seeks to atone for humanity’s sins by staging mock crucifixions and practicing flagellation.

Lying lightly atop these ancient belief systems is the austere but dazzling lamina of science. Slightly more than half a century ago, physicists at the Los Alamos National Laboratory demonstrated the staggering power of their esoteric formulas by detonating the first atomic bomb. Thirty miles to the south, the Santa Fe Institute was founded in 1985 and now serves as the headquarters of the burgeoning study of complex systems. At both facilities, some of the world’s most talented investigators are seeking to extend or transcend current explanations about the structure and history of the cosmos.

Santa Fe, it seemed, would not only be a nice place to live, it would be a good place to think. But I should stop reminiscing and get to the point, which has to do with Johnson’s book and a few related topics that Horgan pointed to in his suggestions for further reading. Before I look at those suggestions, let’s see why they were there.

Horgan characterizes Johnson’s book as “one that raises unsettling questions about science’s claims to truth.” Johnson puts forward a simple description of the view that characterizes Fire in the Mind in the Preface to the new edition.

Our brains evolved to seek order in the world. And when we can’t find it, we invent it. Pueblo mythology cannot compete with astrophysics and molecular biology in attempting to explain the origins of our astonishing existence. But there is not always such a crisp divide between the systems we discover and those we imagine to be true.

Horgan credits Johnson with providing, “an up-to-the-minute survey of the most exciting and philosophically resonant fields of modern research,” and goes on to say, “This achievement alone would make his book worth reading. His accounts of particle physics, cosmology, chaos, complexity, evolutionary biology and related developments are both lyrical and lucid.” But the issues raised, and battered about a bit by Horgan, have to do with what one understands science to be, and what one could mean by truth.  Johnson argues that there is a fundamental relationship between the character of pre-scientific myths and scientific theories.  For Horgan, this brought Thomas Kuhn to mind and hence a reference to one of his posts from 2012, What Thomas Kuhn Really Thought about Scientific “Truth.”

While pre-scientific stories about the world are usually definitively distinguished from the scientific view, the impulse to explore them does occur in the scientific community.  I, for one, was  impressed some years ago when I saw that the sequence of events in the creation story I learned from Genesis somewhat paralleled scientific ideas (light appeared, then light was separated from darkness, sky from water, water from land, then creatures appeared in the water and sky followed by creatures on the land). The effectiveness of scientific theories, however, is generally accepted to be the consequence of the theories being correct. One of the things that inspires books like Johnson’s, however, is that science hasn’t actually diminished the mystery of our existence and our world. The stubborn strangeness of quantum-mechanical physics, the addition of dark matter and dark energy to the cosmos, the surprises in complexity theories, the difficulties understanding consciousness, all of these things stir up questions about the limits of science or even what it means to know anything.

Horgan also refers to the use of information theory to solve some of physics’ mysteries, where information is treated as the fundamental substance of the universe. He links to a piece where he argues that this can’t be true.  But I believe Horgan is not seeing the reach of the information theses. According to some theorists, like David Deutsch, information is always ‘instantiated,’ always physical, but always undergoing transformation. It has, however, some substrate independence. Information as such includes the coding in DNA, the properties within quantum mechanical systems, as well as our conceptual systems. On another level, consciousness is described by Giulio Tononi’s model as integrated information.

The persistence of mystery doesn’t cause me to wonder about whether scientific ideas are true or not. It leads me to ask more fundamental questions like –  What is science? How did it happen? Why or how was it perceived that mathematics was the key? I believe that these are the questions lying just beneath Johnson’s narrative.

The development of scientific thinking is an evolution, one that is likely part of some larger evolution. It is real, it has meaning and it has consequences. I wouldn’t ask if it’s true. It is what we see when we hone particular skills of perception.  Mathematics is how we do it. Like the senses, mathematics builds structure from data, even when those structures are completely beyond reach. When the mathematician explores it directly, he or she probes this structure-building apparatus itself.

I can’t help but interject here something from biologist Humberto Maturana, from a paper published in Cybernetics and Human Knowing, where he comments, “…reality is an explanatory notion invented to explain the experience of cognition.”

Relevant here is something else I found as I looked through Scientific American blog posts. An article by Paul Dirac from the May 1963 issue of Scientific American was reproduced in a 2010 post. It begins:

In this article I should like to discuss the development of general physical theory: how it developed in the past and how one may expect it to develop in the future. One can look on this continual development as a process of evolution, a process that has been going on for several centuries.

In the course of talking about quantum theory, Dirac describes Schrodinger’s early work on his famous equation.

Schrodinger worked from a more mathematical point of view, trying to find a beautiful theory for describing atomic events, and was helped by De Broglie’s ideas of waves associated with particles. He was able to extend De Broglie’s ideas and to get a very beautiful equation, known as Schrodinger’s wave equation, for describing atomic processes. Schrodinger got this equation by pure thought, looking for some beautiful generalization of De Broglie’s ideas, and not by keeping close to the experimental development of the subject in the way Heisenberg did.

Johnson ends his new Preface nicely:

As I write this, I can see out my window to the piñon-covered foothills where the Santa Fe Institute continues to explore the science of complex systems—those in which many small parts interact with one another, giving rise to a rich, new level of behavior. The players might be cells in an organism or creatures in an ecosystem. They might be people bartering and selling and unwittingly generating the meteorological gyrations of the economy. They might be the neurons inside the head of every one of us— collectively, and still mysteriously, giving rise to human consciousness and its beautiful obsession to find an anchor in the cosmic swirl.

The continuity of things

I think often about the continuity of things – about the smooth progression of structure that is the stuff of life, from the microscopic to the macrocosmic.  I was reminded, again, of how often I see things in terms of continuums when I listened online to a lecture given by Gregory Chaitin in 2008.  In that lecture (from which he has also produced a paper) Chaitin defends the validity and productivity of an experimental mathematics, one that uses the kind of reasoning with which a theoretical physicist would be comfortable. And here he argues:

Absolute truth you can only approach asymptotically in the limit from below.

For some time now, I have considered this asymptotic approach to truth in a very broad sense, where truth is just all that is. In fact, I tend to understand most things in terms of a continuum of one kind or another. And I have found that research efforts across disciplines increasingly support this view. It is consistent, for example, with the ideas expressed in David Deutsch’s The Beginning of Infinity, where knowledge is equated with information, whether physical (like quantum systems), biological (like DNA) or explanatory (like theory). From this angle, the non-explanatory nature of biological knowledge, like the characteristics encoded in DNA, is distinguished only by its limits. Deutsch’s newest project, which he calls constructor theory, relies on the idea that information is fundamental to everything. Constructor theory is meant to get at what Deutsch calls the “substrate independence of information.” It defines a more fundamental level of physics than particles, waves and space-time.  And Deutsch expects that this ‘more fundamental level’ will be shared by all physical systems.

In constructor theory, it is information that undergoes consistent transformation – from the attribute of a creature determined by the arrangement of a set of nucleic acids, to the symbolic representation of words on a page that begin as electrochemical signals in my brain, to the information transferred in quantum mechanical events. Everything becomes an instance on a continuum of possibilities.

I would argue that another kind of continuum can be drawn from Semir Zeki’s work on the visual brain. Zeki’s investigation of the neural components of vision has led to the study of what he calls neuroesthetics, which re-associates creativity with the body’s quest for knowledge. While neuroesthetics begins with a study of the neural basis of visual art, it inevitably touches on epistemological questions. The institute that organizes this work lists as its first aim:

-to further the study of the creative process as a manifestation of the functions and functioning of the brain.  (emphasis added)

The move to associate the execution and appreciation of visual art with the brain is a move to re-associate the body with the complexities of conscious experience. Zeki outlines some of the motivation in a statement on the neuroesthetics website.  He sees art as an inquiry through which the artist investigates the nature of visual experience.

It is for this reason that the artist is in a sense, a neuroscientist, exploring the potentials and capacities of the brain, though with different tools.

Vision is understood as a tool for the acquisition of knowledge.    (emphasis added)

The characteristic of an efficient knowledge-acquiring system, faced with permanent change, is its capacity to abstract, to emphasize the general at the expense of the particular. Abstraction, which arguably is a characteristic of every one of the many different visual areas of the brain, frees the brain from enslavement to the particular and from the imperfections of the memory system. This remarkable capacity is reflected in art, for all art is abstraction.

If knowledge is understood in Deutsch’s terms, then all of life is the acquisition of knowledge, and the production of art is a biological event.  But this use of the abstract, to free the brain of the particular, is present in literature as well, and is certainly operating in mathematics. One can imagine a continuum from retinal images, to our inquiry into retinal images, to visual art and mathematics and the productive entwining of science and mathematics.

Another Chaitin paper comes to mind here – Conceptual Complexity and Algorithmic Information. This paper focuses on the complexity that lies ‘between’ the complexities of the tiny worlds of particle physics and the vast expanses of cosmology, namely the complexity of ideas. The paper proposes a mathematical approach to philosophical questions by defining the conceptual complexity of an object X

to be the size in bits of the most compact program for calculating X, presupposing that we have picked as our complexity standard a particular fixed, maximally compact, concise universal programming language U.
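This is the program-size complexity of algorithmic information theory, and the true minimum is uncomputable. Compressed length, though, gives a crude computable upper bound; a toy Python illustration, with zlib standing in (very loosely) for the universal language U:

```python
import random
import string
import zlib

def complexity_upper_bound(description: str) -> int:
    """The bit length of a compressed encoding bounds, from above, the
    size of the shortest program reproducing the text. The true minimum
    (Chaitin's measure) is uncomputable."""
    return 8 * len(zlib.compress(description.encode("utf-8")))

# A highly regular description compresses far better than a random one.
print(complexity_upper_bound("0" * 1000))  # low: one simple rule suffices
noise = "".join(random.choice(string.ascii_letters) for _ in range(1000))
print(complexity_upper_bound(noise))       # high: no regularity to exploit
```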

Chaitin then uses this definition to explore the conceptual complexity of physical, mathematical, and biological theories. Particularly relevant to this discussion is his idea that the brain could be a two-level system. In other words, the brain may not only be working at the neuronal level, but also at the molecular level. The “conscious, rational, serial, sensual front-end mind” is fast and the action on this front is in the neurons. The “unconscious, intuitive, parallel, combinatorial back-end mind,” however, is molecular (where there is much greater computing and memory capacity).  If this model were correct, it would certainly break down our compartmental view of the body  (and the body’s experience).  And it would level the playing field, revealing an equivalence among all of the body’s actions that might redirect some of the questions we ask about ourselves and our world.


Shared paths to Infinity

My last post focused on the kinds of problems that can develop when abstract objects, created within mathematics, increase in complexity – like the difficulty of wrapping our heads around them, or of managing them without error. I thought it would be interesting to turn back around and take a look at how the seeds of an idea can vary.

I became aware only recently that a fairly modern mathematical idea was observed in the social organizations of African communities. Ron Eglash, a professor at Rensselaer Polytechnic Institute, has a multifaceted interest in the intersections of culture, mathematics, science, and technology. Sometime in the 1980’s Eglash observed that aerial views of African villages displayed fractal patterns, and he followed this up with visits to the villages to investigate the patterns.

In a 2007 TED talk Eglash describes the content of the fractal patterns displayed by the villages. One of these villages, located in southern Zambia, is made up of a circular pattern of self-similar rings. The whole village is a ring, on that ring are the rings of individual families, and within each of those rings are the heads of families. In addition to the repetition of the rings that shape the village and the families, there is the repetition of the sacred altar spot. A sacred altar is placed in the same spot in each individual home. And in each family ring, the home of the head of the family is found in the sacred altar spot. In the ring of all families (the whole village), the Chief’s ring is in the place of the sacred altar and, within the Chief’s ring, the ring of the Chief’s immediate family is in the place of the sacred altar. Within the home of the chief’s immediate family, ‘a tiny village’ is in the place of the sacred altar. And within this tiny village live the ancestors. It’s a wonderful picture of an infinitely extending self-similar pattern.
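The recursion is simple enough to write down. A toy Python model of the nesting (my own sketch, not Eglash’s data): the altar spot of each ring holds a copy of the whole pattern, one scale smaller.

```python
from pprint import pprint

def village(depth):
    """Build the self-similar ring-of-rings: each ring's sacred-altar
    spot holds a copy of the entire pattern, one scale smaller, ending
    with the ancestors' tiny village at the center."""
    if depth == 0:
        return "the ancestors' tiny village"
    return {"ring": f"self-similar ring at scale {depth}",
            "altar spot": village(depth - 1)}

pprint(village(3))  # three nested rings, ancestors at the center
```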

Eglash is clear about the fact that these kinds of scaling patterns are not universal to all indigenous architectures, and also that the diversity of African cultures is fully expressed within the fractal technology:

…a widely shared design practice doesn’t necessarily give you a unity of culture — and it definitely is not “in the DNA.”…the fractals have self-similarity — so they’re similar to themselves, but they’re not necessarily similar to each other — you see very different uses for fractals. It’s a shared technology in Africa.

Certainly it is interesting that before the notion of a fractal in mathematics was formalized, purposeful fractal designs were being used by communities in Africa to organize themselves. But what I find even more provocative is that everything in the life of the village is subject to the scaling. Social, mystical, and spatial (geometric) ideas are made to correspond. This says something about the character of the mechanism being used (the fractals), as well as the culture that developed its use.

While it was brief, Eglash did provide a review of some early math ideas on recursive self-similarity, paying particular attention to the Cantor set  and the Koch curve. He made the observation that Cantor did see a correspondence between the infinities of mathematics and God’s infinite nature. But in these recursively produced village designs, that correspondence is embodied in the stuff of everyday life. It is as if the ability to represent recursive self-similarity and the facts of life itself are experienced together. The recursive nature of these village designs didn’t happen by accident. It was clearly understood. As Eglash says in his talk,

…they’re mapping the social scaling onto the geometric scaling; it’s a conscious pattern. It is not unconscious like a termite mound fractal.

Given that the development of these patterns happened outside mathematics proper, and predates mathematics’ formal representation of fractals, questions are inevitably raised about what mathematics is, and this is exactly the kind of thing on which ethnomathematics focuses. Eglash is an ethnomathematician.  A very brief look at some of the literature in ethnomathematics reveals a fairly broad range of interests, many of which are oriented toward more successful mathematics education, and many of which are strongly criticized.  But it seems to me that the meaning and significance of ethnomathematics has not been made precise.  In a 2006 paper, Eglash makes an interesting observation. He considers that the “reticence to consider indigenous mathematical knowledge” may be related to the “Platonic realism of the mathematics subculture.”

For mathematicians in the Euro-American tradition, truth is embedded in an abstract realm, and these transcendental objects are inaccessible outside of a particular symbolic analysis.

Clearly there will be political questions (related to education issues) tied up in this kind of discussion about what and where mathematics is.  But, with respect to these African villages, I most enjoyed seeing a mathematical idea become the vehicle with which to explore and represent infinities.


“The future of mathematics is more a spiritual discipline…”

I did some following up on the work of Vladimir Voevodsky and for anyone who might ask, “what’s actually going on in mathematics,” Voevodsky’s work adds, perhaps, even more to the mystery. Not that I mind. The mystery emerges from the limitless depths (or heights) of thought that are revealed in mathematical ideas or objects. It is this that continues to captivate me. And the grounding of these ideas, provided by Voevodsky’s work on foundations, reflects the intrinsic unity of these highly complex and purely abstract entities, suggesting a firm rootedness to these thoughts – an unexpected and enigmatic rootedness that calls for attention.

Voevodsky gave a general audience talk in March of 2014 at the Institute for Advanced Study in Princeton, where he is currently Professor in the School of Mathematics. In that talk he described the history of much of his work and how he became convinced that, to do the kind of mathematics he most wanted to do, he would need a reliable source to confirm the validity of the mathematical structures he builds.

As I was working on these ideas I was getting more and more uncertain about how to proceed. The mathematics of 2-theories is an example of precisely that kind of higher-dimensional mathematics that Kapranov and I had dreamed about in 1989. And I really enjoyed discovering new structures there that were not direct extensions of structures in lower “dimensions”.

But to do the work at the level of rigor and precision I felt was necessary would take an enormous amount of effort and would produce a text that would be very difficult to read. And who would ensure that I did not forget something and did not make a mistake, if even the mistakes in much more simple arguments take years to uncover?

I think it was at this moment that I largely stopped doing what is called “curiosity driven research” and started to think seriously about the future.

It soon became clear that the only real long-term solution to the problems that I encountered is to start using computers in the verification of mathematical reasoning.

Voevodsky expresses the same concern in a Quanta Magazine article by Kevin Hartnett.

“The world of mathematics is becoming very large, the complexity of mathematics is becoming very high, and there is a danger of an accumulation of mistakes,” Voevodsky said. Proofs rely on other proofs; if one contains a flaw, all others that rely on it will share the error.

So, at the heart of this discussion seems to be a quest for useful math-assistant computer programs. But both the problems mathematicians like Voevodsky face, and the computer assistant solutions he explored, highlight something intriguing about mathematics itself.

Hartnett does a nice job making the issues relevant to Voevodsky’s innovations accessible to any interested reader. He reviews Bertrand Russell’s type theory, a formalism created to circumvent the paradoxes of Cantor’s original set theory – as in the familiar paradox created by the set of all sets that don’t contain themselves. (If the set contains itself, then it doesn’t contain itself.) This kind of problem is avoided in Russell’s type theory by making a formal distinction between collections of elements and collections of other collections. It turns out that within type theory, equivalences among sets are understood in much the same way as equivalences among spaces are understood in topology.
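In symbols, with R the set of all sets that do not contain themselves, membership is contradictory either way:

```latex
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R .
```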

Spaces in topology are said to be homotopy equivalent if one can be deformed into the other without tearing either. Hartnett illustrates this using letters of the alphabet:

The letter P is of the same homotopy type as the letter O (the tail of the P can be collapsed to a point on the boundary of the letter’s upper circle), and both P and O are of the same homotopy type as the other letters of the alphabet that contain one hole — A, D, Q and R.

The same kind of equivalence can be established between a line and a point, or a disc and a point, or a coffee mug and a donut.

Given their structural resemblance, type theory handles the world of topology well. Things that are homotopy equivalent can also be said to be of the same homotopy type. But the value of the relationship between type theory and homotopic equivalences was greatly enhanced when Voevodsky learned Martin-Löf type theory (MLTT), a formal language developed by a logician for the task of checking proofs on a computer. Voevodsky saw that this computer language formalized type theory and, by virtue of type theory’s similarity to homotopy theory, it also formalized homotopy theory.

Again, from Hartnett:

Voevodsky agrees that the connection is magical, though he sees the significance a little differently. To him, the real potential of type theory informed by homotopy theory is as a new foundation for mathematics that’s uniquely well-suited both to computerized verification and to studying higher-order relationships.

There is a website devoted to homotopy type theory where it is defined as follows:

Homotopy Type Theory refers to a new interpretation of Martin-Löf’s system of intensional, constructive type theory into abstract homotopy theory.  Propositional equality is interpreted as homotopy and type isomorphism as homotopy equivalence. Logical constructions in type theory then correspond to homotopy-invariant constructions on spaces, while theorems and even proofs in the logical system inherit a homotopical meaning.  As the natural logic of homotopy, constructive type theory is also related to higher category theory as it is used e.g. in the notion of a higher topos.
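A minimal Lean 4 sketch of the identity type at the center of this interpretation; this is the standard textbook definition, not code from the HoTT project:

```lean
-- Martin-Löf's identity type: the only canonical proof that a equals a
-- is refl. Homotopy type theory reads a term p : Path a b as a path
-- from a to b in a space.
inductive Path {A : Type} : A → A → Type where
  | refl (a : A) : Path a a

-- Reversing a path witnesses the symmetry of propositional equality.
def Path.symm {A : Type} {a b : A} : Path a b → Path b a
  | .refl a => .refl a

-- Composing paths witnesses its transitivity.
def Path.trans {A : Type} {a b c : A} : Path a b → Path b c → Path a c
  | .refl _, q => q
```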

Voevodsky’s work is on a new foundation for mathematics and is also described there:

Univalent Foundations of Mathematics is Vladimir Voevodsky’s new program for a comprehensive, computational foundation for mathematics based on the homotopical interpretation of type theory. The type theoretic univalence axiom relates propositional equality on the universe with homotopy equivalence of small types. The program is currently being implemented with the help of the automated proof assistant Coq.  The Univalent Foundations program is closely tied to homotopy type theory and is being pursued in parallel by many of the same researchers.

In one of his talks, Voevodsky suggested that mathematics as we know it studies structures on homotopy types. And he describes a mathematics so rich in abstract complexity, “it just doesn’t fit in our heads very well. It somehow requires abilities that we don’t possess.”  Computer assistance would be expected to facilitate access to these high levels of complexity and abstraction.

But mathematics is, as I see it, the abstract expression of human understanding – the possibilities for thought, for conceptual relationships. So what is it that’s keeping us from being able to manage this level of abstraction?   Voevodsky seems to agree that it is comprehension that gives rise to mathematics. He’s quoted in a New Scientist article by Jacob Aron:

If humans do not understand a proof, then it doesn’t count as maths, says Voevodsky. “The future of mathematics is more a spiritual discipline than an applied art. One of the important functions of mathematics is the development of the human mind.”

While Aron seems to suggest that computer companions to mathematicians could potentially know more than the mathematicians they assist, this view is without substance. It is only when the mathematician’s eye discerns something that we call it mathematics.

Mike Shulman has a few posts related to homotopy type theory on The n-Category Cafe site, beginning with one entitled Homotopy Type Theory, I, followed by II, III, and IV.  There’s also one from June 2015 – What’s so HoTT about Formalization?
And here’s a link to Voevodsky’s Univalent Foundations.

Finding hidden structure by way of computers

An article in a recent issue of New Scientist highlights the potential partnership between computers and mathematicians. It begins with an account of the use of computers in a proof that would do little, it seems, to provide greater understanding of, or greater insight into, the content of the idea the proof explores. The computer program merely exhausts the possible counterexamples to a theorem, thereby proving it true (a task far too impractical to attack by hand). Reviewing this kind of proof, however, requires checking computer code, and this is something that referees in mathematics are not likely to want to do. And so efforts have been made to make the checking easier by employing something called a ‘proof assistant.’ The article doesn’t do much to clarify how the ‘proof assistant’ works, and says just a little about how it makes things easier. But a question that comes to mind quickly for me is whether such a proof could reveal new bridges between different sub-disciplines of mathematics, the way the traditional effort has been known to do.

A discussion of the work of prominent mathematician Vladimir Voevodsky follows.  This work takes us back to foundational questions and clearly addresses those bridges. While mathematics is grounded in set theory,  set theory can permit more than one definition of the same mathematical object. Voevodsky decided to address the problem that this creates for computer generated proofs.

…if two computer proofs use different definitions for the same thing, they will be incompatible. “We cannot compare the results, because at the core they are based on two different things,” says Voevodsky.

Voevodsky swaps sets for types, described as “a stricter way of defining mathematical objects in which every concept has exactly one definition.”

This lets mathematicians formulate their ideas with a proof assistant directly, rather than having to translate them later. In 2013 Voevodsky and colleagues published a book explaining the principles behind the new foundations. In a reversal of the norm, they wrote the book with a proof assistant and then “unformalized” it to produce something more human-friendly.

There’s a very well-written description of the history and recent successes of Voevodsky’s work in a Quanta Magazine piece from May 2015. Voevodsky’s new formalism is called the univalent foundations of mathematics. The Quanta article describes in reasonable detail how these ideas grew from existing formalisms. But what I find most interesting is the surprising consistency among particular ideas from computer science, logic and mathematics.

This consistency and convenience reflects something deeper about the program, said Daniel Grayson, an emeritus professor of mathematics at the University of Illinois at Urbana-Champaign. The strength of univalent foundations lies in the fact that it taps into a previously hidden structure in mathematics.

“What’s appealing and different about [univalent foundations], especially if you start viewing [it] as replacing set theory,” he said, “is that it appears that ideas from topology come into the very foundation of mathematics.”

One of the youngest sub-disciplines finds its way into the foundation, a very appealing and suggestive idea. Finding hidden structure is what always looks magical about mathematics. And it is, fundamentally, what human cognition is all about.

There’s a nice report on one of Voevodsky’s talks in a Scientific American Guest Blog from 2013 by Julie Rehmeyer that includes a video of the talk itself.

This topic requires a closer look, which I expect to do with a follow-up to this post.

Thinking without a brain

Can the presence of intelligent behavior in other creatures (creatures that don’t have a nervous system comparable to ours) tell us something about what ideas are, or how thought fits into nature’s actions? It has always seemed to us humans that our ideas are one of the fruits of what we call our ‘intelligence.’  And the evolutionary history of this intelligence is frequently traced back through the archeological records of our first use of things like tools, ornamentation, or planning.  It is often thought that our intelligence is just some twist of nature, something that just happened. But once set in motion, it strengthened our survival prospects and gave us an odd kind of control over our comfort and well-being. We tend to believe that ‘thoughts’ are a private human experience, not easily lined up with nature’s actions. Thoughts build human cultures, and one of the high points of thought is, of course, mathematics. Remember, it was the scarecrow’s reciting the Pythagorean Theorem that told us he had a brain.  Even though he got it wrong.

When an animal is able to learn something and apply that learning to a new circumstance we generally concede that this is also intelligent behavior. A good deal of research has been done on animals like chimpanzees, dolphins, and apes, where the ability to learn symbolic representations or sophisticated communication skills mark intelligent behavior. But these observations don’t significantly change our sense that intelligence is some quirk of the brain, and only in humans has this quirk gone through the development that gives birth to ideas and culture, and puts us in our unique evolutionary place.

But when intelligent behavior is observed in a bumble bee, for example, we have to think a little more. Bumble bees’ evolutionary history isn’t particularly close to our own, and their brains are not like ours. More than one million interconnected neurons occupy less than one cubic millimeter of brain tissue in the bee. The density of neurons is about ten times greater than in a mammalian cerebral cortex. Research published in Nature (in 2001) is described in a 2008 Scientific American piece by Christof Koch.

The abstract of the Nature paper includes this:

…honeybees can interpolate visual information, exhibit associative recall, categorize visual information, and learn contextual information. Here we show that honeybees can form ‘sameness’ and ‘difference’ concepts. They learn to solve ‘delayed matching-to-sample’ tasks, in which they are required to respond to a matching stimulus, and ‘delayed non-matching-to-sample’ tasks, in which they are required to respond to a different stimulus; they can also transfer the learned rules to new stimuli of the same or a different sensory modality. Thus, not only can bees learn specific objects and their physical parameters, but they can also master abstract inter-relationships, such as sameness and difference.

And Koch makes this observation:

Given all of this ability, why does almost everybody instinctively reject the idea that bees or other insects might be conscious? The trouble is that bees are so different from us and our ilk that our insights fail us.

In 2015, Koch coauthored a paper with Giulio Tononi, the focus of which was consciousness. There he argues:

Indeed, as long as one starts from the brain and asks how it could possibly give rise to experience—in effect trying to ‘distill’ mind out of matter, the problem may be not only hard, but almost impossible to solve. But things may be less hard if one takes the opposite approach: start from consciousness itself, by identifying its essential properties, and then ask what kinds of physical mechanisms could possibly account for them.  (emphasis added)

Potential clues to different kinds of physical mechanisms are described in a very recent Scientific American article that reports on the successful unraveling of the octopus genome.

Among the biggest surprises contained within the genome—eliciting exclamation point–ridden e-mails from cephalopod researchers—is that octopuses possess a large group of familiar genes that are involved in developing a complex neural network and have been found to be enriched in other animals, such as mammals, with substantial processing power. Known as protocadherin genes, they “were previously thought to be expanded only in vertebrates,” says Clifton Ragsdale, an associate professor of neurobiology at the University of Chicago and a co-author of the new paper. Such genes join the list of independently evolved features we share with octopuses—including camera-type eyes (with a lens, iris and retina), closed circulatory systems and large brains.

Having followed such a vastly different evolutionary path to intelligence, however, the octopus nervous system is an especially rich subject for study. “For neurobiologists, it’s intriguing to understand how a completely distinct group has developed big, complex brains,” says Joshua Rosenthal of the University of Puerto Rico’s Institute of Neurobiology. “Now with this paper, we can better understand the molecular underpinnings.”

In 2012, Scientific American reported on the signing of the Cambridge Declaration on Consciousness.

“The weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness,” the scientists wrote. “Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

And from the Declaration:

Furthermore, neural circuits supporting behavioral/electrophysiological states of attentiveness, sleep and decision-making appear to have arisen in evolution as early as the invertebrate radiation, being evident in insects and cephalopod mollusks (e.g., octopus).

Specific mention of the octopus was based on the collection of research that documented their intentional action, their use of tools, and their sophisticated spatial navigation and memory. Christof Koch was one of the presenters of the declaration and was quoted as saying, “The challenge that remains is to understand how the whispering of nerve cells, interconnected by thousands of gossamer threads (their axons), give rise to any one conscious sensation.”

My friend and former agent, Ann Downer, has a new book due out in September with the provocative title, Smart and Spineless: Exploring Invertebrate Intelligence. It was written for young adults and is a wonderful way to correct an old perspective for growing thinkers.

These many insights suggest that what we call intelligence is not something that happens to some living things but is, perhaps, somehow intrinsic to life and manifest in many forms. Koch suggests that we begin a study of consciousness by identifying its essential properties, and mathematics can likely help with this. It does so already with Giulio Tononi’s Integrated Information Theory of Consciousness.  But mathematics is a grand-scale investigation of pure thought – of the abstract relationships that are often related to language, learning, and spatial navigation (to name just a few). As a fully abstract investigation of such things, it could help direct the search for the essential properties of awareness and cognition. And the chance that we will find the ubiquitous presence of such properties in the world around us may breathe new life into how we understand nature itself.