Today I waded into a debate that hasn’t gotten very loud yet and, perhaps for that reason, I felt like I was going around in circles a bit. The questions I began trying to answer were sparked by a Mind Hacks post entitled Radical embodied cognition: an interview with Andrew Wilson. Wilson’s ideas challenge a perspective that is fairly widely accepted. As Tom Stafford explains:
The computational approach is the orthodoxy in psychological science. We try and understand the mind using the metaphors of information processing and the storage and retrieval of representations. These ideas are so common that it is easy to forget that there is any alternative. Andrew Wilson is on a mission to remind us that there is an alternative – a radical, non-representational, non-information processing take on what cognition is.
Last June I participated in a symposium at the Cognitive Science Society’s annual conference. I wrote later that I was struck by the extent to which computational modeling, designed to mirror cognitive processes, governs investigative strategies. Modeling possibilities likely impact the kinds of questions that cognitive scientists ask. As I listened to some of the talks, I considered that these modeling strategies could begin to create conceptual or theoretical grooves from which it can become difficult to stray. And so this Mind Hacks post got my attention.
It doesn’t look like Wilson’s radical approach is just a different language for the same ideas. When the interview raised this possibility, he responded:
If the radical hypothesis is right, then a lot of cognitive theories will be wrong. Those theories all assume that information comes into the brain, is processed by representations and then output as behaviour. If we successfully replace representations with information, all those theories will be telling the wrong story. ‘Interacting with information’ is a completely different job description for the brain than ‘building models of the world’. This is another reason why it’s ‘radical’.
While much of the work in cognitive science is built around the idea that thinking is best understood in terms of representational structures in the mind (or brain) on which computational processes operate, the brain is always interacting with information. It is the nature of this interaction that researchers try to understand. And so I looked at a couple of posts from Wilson and his colleague Sabrina Golonka on a site called Notes from Two Scientific Psychologists. Mathematics figured prominently in both posts, which raised new questions for me.
The first one was recommended by Wilson and written by Golonka. It had the title, What else could it be? The case of the centrifugal governor. The centrifugal governor is used to frame a discussion of a dynamical systems approach to cognition, contrasted with the approach that relies on the brain’s creation of representations of the world on which it acts. An 18th-century engineering problem illustrates the point: maintaining the constant rotation of a flywheel driven by the pumping action of steam pistons. The operation of a valve allows one to adjust the amount of steam reaching the pistons, so that the speed of the wheel can be managed. Golonka begins by describing the algorithmic solution to keeping the rotation constant. The state of the system is repeatedly measured, and a rule (an algorithm) adjusts the valve in response to the measurement. There are two stages to this solution – the measurement and the adjustment – even when the time lag between the measurement and the correction is minimized. The dynamical solution, discovered and implemented in the 18th century, is to have the valve opening changed by the action of an object within the system that varies in response to some aspect of the system itself – in the governor’s case, spinning flyweights whose arms rise and fall with the engine’s speed. The problem then reduces to connecting that object to the valve with the proper relation, i.e., the one that produces the desired effect.
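To make the two-stage character of the algorithmic solution concrete, here is a minimal sketch in Python. The engine dynamics, gain, and target speed are illustrative inventions, not values from Golonka’s post; only the measure-then-adjust structure matters.

```python
# A toy simulation of the algorithmic solution: measure the state of
# the system, then apply a rule to adjust the valve. All constants and
# the "engine" itself are made up for illustration.

TARGET = 100.0   # desired flywheel speed (arbitrary units)
GAIN = 0.05      # how strongly the rule responds to the measured error

speed = 80.0     # current flywheel speed
valve = 0.5      # valve opening, from 0.0 (closed) to 1.0 (open)

for step in range(50):
    # Stage 1: measure the state of the system.
    error = TARGET - speed
    # Stage 2: apply the rule (the algorithm) to adjust the valve.
    valve = min(max(valve + GAIN * error / TARGET, 0.0), 1.0)
    # Toy engine dynamics: speed relaxes toward a value set by the valve.
    speed += 0.2 * (200.0 * valve - speed)

print(f"final speed: {speed:.1f}, valve opening: {valve:.2f}")
```

However small the loop’s time step becomes, the measurement and the adjustment remain distinct stages – exactly the feature Golonka contrasts with the dynamical solution, in which no measurement is ever taken.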
If we imagine trying to come up with a computational model of this system without being able to see its mechanism, the illustration does highlight the way a computational or algorithmic model might actually obscure the underlying action. But the algorithmic model would still capture something about that action, namely the need for an adjustment and its direction. It just wouldn’t account for how the adjustment is actually made.
Another post on the same site is about a fairly interesting area-measuring device built in 1854, called a planimeter. The subject of this post is taken from a 1977 paper by Sverker Runeson with the title On the possibility of “smart” perceptual mechanisms.
The planimeter can measure the area of any flat 2-dimensional shape without using lengths and widths or integrals.
This is a device that measures area directly, rather than measuring the ‘simpler’ physical unit length and then performing the necessary computation. Runeson uses this device as an example of a ‘smart’ mechanism, and proposes that perception might entail such mechanisms.
One traces the perimeter of the shape with the device’s arm. The area of the shape is proportional to the net number of turns through which the measuring wheel rotates as it traces the path; it is the movement, or lack of movement, of the wheel that is recorded. The result can be justified mathematically, but the measurement comes directly from the wheel. It is, actually, an opportunity to see the relationship between an action and the formal analytic structures of mathematics.
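The mathematical justification is, in the standard analysis, Green’s theorem: the area integral over a region can be rewritten as a line integral around its boundary, and the rolling wheel mechanically accumulates that boundary integral (up to the device’s constants of proportionality). A sketch of the identity:

```latex
% Green's theorem: a line integral around a closed curve C equals a
% double integral over the region R that the curve encloses.
\oint_C \left( L\,dx + M\,dy \right)
    = \iint_R \left( \frac{\partial M}{\partial x}
                   - \frac{\partial L}{\partial y} \right) dx\,dy

% Choosing L = 0 and M = x makes the integrand on the right equal to 1,
% so the double integral is simply the area A of the region:
A = \iint_R dx\,dy = \oint_C x\,dy
```

In the idealized analysis, the wheel rolls only through the component of its motion perpendicular to its axis, so its accumulated turns are proportional to this line integral – which is why no lengths, widths, or symbolic integration are ever needed.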
There are a few words that stand out in this framing of the debate – representation, action, and information. What one means by representation, and what one means by information, is largely driven by context. We generally understand representation as a particularly human phenomenon. We find it in art, language, and mathematics, and not so much in the behavior of other animals (although bowerbirds come to mind as a possible counterexample). We think in terms of representations – words, maps, models, diagrams, etc. Within cognitive science, however, the meaning of a mental representation is not precisely defined. I see no reason why representation can’t be understood as patterned action on the cellular level, as can information. The algorithmic solution to the centrifugal governor problem is one very specific programming idea; setting it aside is not sufficient to discount computer-like action in general. Brains are not computers, but computational methods, software, programming, etc., inevitably reflect something about nature and the brain. Further, the ‘smart perceptual mechanism’ gets its meaning from its mathematical character. I would argue that mathematics could help define what one means by representation if we reassociate mathematics with action (perhaps as Humberto Maturana did with language). The power of mathematics comes from what we can see in the weave of relationships among its precise representations. The history of my blogs makes clear that I would argue that underlying these representations are perception and action. Modeling strategies, at their best, are a way to get at this action within the limitations of our language.