Reasoning Babies, Abstract Principles and Probabilities
It happens many times in class that I say, “in mathematics, when you see something you don’t know, you try to figure it out using something you do know.” And, recently, in the context of thinking about the generalizations that blossomed in late 19th and early 20th century mathematics, I’ve also wondered how it is that we stay ‘on track,’ so to speak. How is it, for example, that Riemann’s foundation for geometry holds onto the working properties of the original ideas?
I think that both of these things are partially addressed by recent research in cognitive science. In this post, I’d like to bring your attention to the work of a team led by Josh Tenenbaum at MIT. It was recently reported that this team observed that babies reason – that they will expect a particular outcome of a situation based on a few physical principles.
These experiments make use of a tool that has been developed to measure a baby’s surprise, because surprise provides a way to identify a baby’s expectations. Surprise (or the lack of it) is measured by how long the baby looks at something: the baby will look longer when something unexpected has happened. These looking times have been carefully and repeatedly recorded – as Tenenbaum has said, researchers have quantified surprise. The same kind of tool has been used to measure a baby’s number sense, but this particular study identifies reasoning based on principles, before there is language. According to Tenenbaum, the study
suggests infants reason by mentally simulating possible scenarios and figuring out which outcome is most likely based on a few physical principles.
The report made me want to look more at Tenenbaum’s work. I found a link to his recent article in Science, How to Grow a Mind; the link is on his web page under the heading Representative reading and talks. The article begins with a question that I think can be applied directly to mathematics: How do our minds get so much from so little? But the article is a complex analysis of how we build our conceptual structures on probabilities. Early abstractions arise when we order the distinguishable features of a perceived object, and they grow with experience, opening vast territories of knowledge within what Tenenbaum calls a hierarchical Bayesian framework. The body builds its world based on probabilities related to experience or evidence, and conceptual systems grow in tree-like structures. The full content of the article is beyond the scope of this post. But the work contributes to the current view that perception and understanding happen together, that their interaction is seamless. Abstraction is a fundamental aspect not only of vision but of learning and of all aspects of the body’s interaction with its environment. Perhaps the body ‘knows’ how to use abstraction the way it knows how to use light, for example.
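To make the idea of learning structure from sparse data a little more concrete, here is a minimal sketch in the spirit of Tenenbaum’s well-known “number game” example of Bayesian concept learning. This is my own illustrative toy, not the article’s actual model: the hypothesis names, the candidate concepts, and the priors are all assumptions chosen for illustration. The point it demonstrates is how, after only a few examples, a probabilistic learner can strongly favor one abstraction over another.

```python
# Illustrative sketch (not the model from the Science article):
# Bayesian concept learning over a tiny, hypothetical hypothesis space.

def posterior(data, hypotheses, priors):
    """Return the normalized posterior over hypotheses given observed data.

    The likelihood uses the 'size principle': examples are assumed to be
    drawn uniformly from the true concept, so a smaller hypothesis that
    still covers the data gets likelihood (1/|h|)^n, which is higher.
    """
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in data):  # hypothesis consistent with the data
            scores[name] = priors[name] * (1.0 / len(h)) ** len(data)
        else:
            scores[name] = 0.0         # inconsistent hypotheses are ruled out
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Two hypothetical concepts over the numbers 1..100.
hypotheses = {
    "even":          {n for n in range(1, 101) if n % 2 == 0},  # 50 members
    "powers_of_two": {2 ** k for k in range(1, 7)},             # 2..64, 6 members
}
priors = {"even": 0.5, "powers_of_two": 0.5}

# After just three examples, the sparser concept dominates the posterior,
# even though both concepts are consistent with everything seen so far.
result = posterior([2, 4, 8], hypotheses, priors)
```

With the data [2, 4, 8], the posterior puts well over 90% of its weight on "powers_of_two" – a toy version of how powerful abstractions can be learned surprisingly quickly from sparse, noisy data.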
In the conclusion of the article, Tenenbaum says the following:
How can structured symbolic knowledge be acquired through statistical learning? The answers emerging suggest new ways to think about the development of a cognitive system. Powerful abstractions can be learned surprisingly quickly, together with or prior to learning the more concrete knowledge they constrain. Structured symbolic representations need not be rigid, static, hard-wired, or brittle. Embedded in a probabilistic framework, they can grow dynamically and robustly in response to the sparse, noisy data of experience.
These observations give me a way to think about how mathematics stays on track and why intuition can play so crucial a role. It may be that mathematics searches the paths already taken by concepts (concepts first rooted in the body’s management of physical experience through abstract principles), or simply searches the possibilities for concepts within the constraints (or principles) the body knows. The ‘intellect’ and the senses are here united in a provocative way.