Not too long ago I wrote about entropy, and what has come to be known as Maxwell’s demon – a hypothetical creature invented in 1871 by James Clerk Maxwell. The creature was the product of a thought experiment meant to explore the possibility of violating the second law of thermodynamics by using information to impede entropy (otherwise known as the gradual but inevitable decline of everything into disorder). The only reason the demon succeeds in halting this decline is that it has information it can use to rearrange the behavior of molecules, information that we cannot acquire from our perspective. That post was concerned with the surprising reality of such a creature, as physicists have now demonstrated that it could be made physical, even mechanical, in the form of an information heat engine or in the action of light beams.
I read about the demon again today in a Quanta Magazine article, How Life (and Death) Spring From Disorder. Much of the focus of this article concerns understanding evolution from a computational point of view. But author Philip Ball describes Maxwell’s creature and how it impedes entropy, since it is this action against entropy that is the key to this new and interesting approach to biology, and to evolution in particular.
Once we regard living things as agents performing a computation — collecting and storing information about an unpredictable environment — capacities and considerations such as replication, adaptation, agency, purpose and meaning can be understood as arising not from evolutionary improvisation, but as inevitable corollaries of physical laws. In other words, there appears to be a kind of physics of things doing stuff, and evolving to do stuff. Meaning and intention — thought to be the defining characteristics of living systems — may then emerge naturally through the laws of thermodynamics and statistical mechanics.
In 1944, Erwin Schrödinger approached this idea by suggesting that living organisms feed on what he called negative entropy. And this is exactly what this new research is investigating – namely the possibility that organisms keep themselves out of equilibrium by extracting work from the environment with which they are correlated, and that they do this by using information they share with that environment (as the demon does). Without this information, the second law of thermodynamics would govern the organism’s gradual decline into disorder, and it would die. Schrödinger went so far as to propose that organisms achieve this negative entropy by collecting and storing information. Although he didn’t know how, he imagined that they somehow encoded the information and passed it on to future generations. But converting information from one form to another is not cost-free. Memory storage is finite, and erasing old information to make room for new information causes the dissipation of energy. Managing that cost becomes one of the functions of evolution.
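The dissipation mentioned here has a precise lower bound, known as Landauer’s principle: erasing one bit of memory must release at least kT ln 2 of heat, where k is Boltzmann’s constant and T the temperature. As a rough illustration (the choice of body temperature, ~310 K, is mine, not the article’s), the arithmetic looks like this:

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, joules per kelvin


def landauer_limit(temperature_kelvin: float, bits_erased: float) -> float:
    """Minimum heat (joules) dissipated by erasing `bits_erased` bits
    at a given temperature: E >= k * T * ln(2) per bit (Landauer's principle)."""
    return bits_erased * K_B * temperature_kelvin * math.log(2)


# Erasing a single bit at body temperature (~310 K):
cost = landauer_limit(310.0, 1.0)
print(f"{cost:.3e} J")  # roughly 3e-21 joules per bit
```

The number is tiny per bit, but it is a hard floor: any demon-like mechanism that keeps gathering fresh information must eventually erase old records and pay this thermodynamic price, which is why the bookkeeping cannot be free.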
According to David Wolpert, a mathematician and physicist at the Santa Fe Institute who convened the recent workshop, and his colleague Artemy Kolchinsky, the key point is that well-adapted organisms are correlated with their environment. If a bacterium swims dependably toward the left or the right when there is a food source in that direction, it is better adapted, and will flourish more, than one that swims in random directions and so only finds the food by chance. A correlation between the state of the organism and that of its environment implies that they have information in common. Wolpert and Kolchinsky say that it’s this information that helps the organism stay out of equilibrium — because, like Maxwell’s demon, it can then tailor its behavior to extract work from fluctuations in its surroundings. If it did not acquire this information, the organism would gradually revert to equilibrium: It would die.
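The “information in common” between organism and environment can be made quantitative as mutual information. A toy sketch of the bacterium example (the sample data is invented for illustration): a well-adapted swimmer that always heads toward the food shares a full bit of information with its environment, while a random swimmer shares none.

```python
import math
from collections import Counter


def mutual_information(pairs):
    """Mutual information (bits) between two variables, estimated from
    a list of (environment, behavior) samples."""
    n = len(pairs)
    pxy = Counter(pairs)                    # joint counts
    px = Counter(x for x, _ in pairs)       # marginal counts for x
    py = Counter(y for _, y in pairs)       # marginal counts for y
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi


# (food direction, swim direction) observations
adapted = [("left", "left"), ("right", "right")] * 50   # always tracks the food
random_swimmer = [("left", "left"), ("left", "right"),
                  ("right", "left"), ("right", "right")] * 25  # independent

print(mutual_information(adapted))         # 1.0 bit shared
print(mutual_information(random_swimmer))  # 0.0 bits shared
```

On this view, the adapted bacterium’s one bit of correlation is exactly the resource that, demon-like, lets it extract work from its surroundings rather than drift toward equilibrium.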
Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. And life turns out to be extremely good at it.
This correlation between an organism and its environment is reminiscent of the structural coupling introduced by biologist H.R. Maturana which he characterizes in this way: “The relation between a living system and the medium in which it exists is a structural one in which living system and medium change together congruently as long as they remain in recurrent interactions.”
And these ideas do not dismiss the notion of natural selection; rather, natural selection is seen as largely concerned with minimizing the cost of computation. The implications of this perspective are compelling. Jeremy England at the Massachusetts Institute of Technology has applied this notion of adaptation to complex, nonliving systems as well.
Complex systems tend to settle into these well-adapted states with surprising ease, said England: “Thermally fluctuating matter often gets spontaneously beaten into shapes that are good at absorbing work from the time-varying environment.”
Working from the perspective of a general physical principle, the article continues:
If replication is present, then natural selection becomes the route by which systems acquire the ability to absorb work — Schrödinger’s negative entropy — from the environment. Self-replication is, in fact, an especially good mechanism for stabilizing complex systems, and so it’s no surprise that this is what biology uses. But in the nonliving world where replication doesn’t usually happen, the well-adapted dissipative structures tend to be ones that are highly organized, like sand ripples and dunes crystallizing from the random dance of windblown sand. Looked at this way, Darwinian evolution can be regarded as a specific instance of a more general physical principle governing nonequilibrium systems.
This is an interdisciplinary effort that brings to mind a paper by Virginia Chaitin which I discussed in another post. The kind of interdisciplinary work that Chaitin describes involves the adoption of a new conceptual framework, borrowing the very way that understanding is defined within a particular discipline, as well as the way it is explored and the way it is expressed in that discipline. Here we have the confluence of thermodynamics and Darwinian evolution made possible by the mathematical study of information. And I would caution readers not to assume that the direction taken by this research reduces life to the physical laws of interactions. It may look that way at first glance. But I would suggest that the direction these ideas are taking is more likely to lead to a broader definition of life. In fact there was a moment when I thought I heard the echo of Leibniz’s monads.
You’d expect natural selection to favor organisms that use energy efficiently. But even individual biomolecular devices like the pumps and motors in our cells should, in some important way, learn from the past to anticipate the future. To acquire their remarkable efficiency, the physicist Susanne Still said, these devices must “implicitly construct concise representations of the world they have encountered so far, enabling them to anticipate what’s to come.”
It’s not possible to do any justice to the nature of the fundamental, living, yet non-material substance that Leibniz called monads, but I can, at the very least, point to a few things about them. Monads exist as varying states of perception (though not necessarily conscious perceptions). And perceptions in this sense can be thought of as representations or expressions of the world or, perhaps, as information. He describes a hierarchy of functionality among them. One’s mind, for example, has clearer perceptions (or representations) than those contained in the monads that make up other parts of the body. But, being a more dominant monad, one’s mind contains ‘the reasons’ for what happens in the rest of the body. And here’s the idea that came to mind in the context of this article. An individual organ contains ‘the reasons’ for what happens in its cells, and a cell contains ‘the reasons’ for what happens in its organelles. The cell has its own perceptions or representations. I don’t have a way to precisely define ‘the reasons,’ but like the information-driven states of nonequilibrium being considered by physicists, biologists, and mathematicians, this view of things spreads life out.