
Abstractions: What’s happening with them?

We all generally know the meaning of abstraction.  We all have some opinion, for example, about the value of abstract painting.  And I’ve heard from many that mathematics is too abstract to be understood or even interesting.  (But I must admit, it is exactly this about mathematics that keeps me so captivated).  An abstraction is usually thought of as the general idea as opposed to the particular circumstance.  I thought today of bringing a few topics back into focus, all of which I’ve written about before, to highlight something about knowledge – what it is, or how we seem to collect it.   This particular story centers around the idea of entropy.

First, here’s as brief a description of the history of the mathematics of this idea as I can manage at the moment:

In the history of science and mathematics, two kinds of entropy were defined – one in physics and one in information theory.  Mathematical physicist Rudolf Clausius first introduced the concept of entropy in 1850, and he defined it as a measure of a system’s thermal (or heat) energy that was not available to do work.  It was a fairly specific idea that provided a mathematical way to pin down the variations in physical possibilities. I’ve read that Clausius chose the word entropy because of its Greek ties to the word transformation and the fact that it sounded like energy. The mathematical statement of entropy provided a clear account, for example, of how a gas confined in a cylinder would freely expand if released by a valve, but could also be made to push a piston against whatever confined it, as it does in our cars. The piston event is reversible, while the free expansion is not.  Even in the piston event, however, some amount of heat or energy is always lost to entropy.  And so Clausius’ version of the second law of thermodynamics says that spontaneous change, for irreversible processes in isolated systems, always moves in the direction of increasing entropy.
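
For readers who want a formula, the standard textbook statement of Clausius’ idea (my gloss here, not something quoted from his papers) relates a change in entropy to heat exchanged reversibly at temperature T, with the second law requiring that entropy not decrease in an isolated system:

```latex
% Clausius' thermodynamic entropy, standard textbook form (added as a gloss):
dS = \frac{\delta Q_{\mathrm{rev}}}{T},
\qquad \Delta S \ge 0 \quad \text{for spontaneous change in an isolated system.}
```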

In 1948 Claude Shannon initiated the development of what is now known as information theory when he formalized a mathematics of information based on the observation that transmitted messages could be encoded with just two bursts of energy – on and off.  In this light he defined an information entropy, still referred to as Shannon’s entropy, which is understood as the measure of the randomness in a message, or a measure of the absence of information.  Shannon’s formula was based on the probability of symbols (or letters in the alphabet) showing up in the message.
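
For reference, Shannon’s formula (in its standard form, not reproduced in the post itself) assigns a source whose symbols appear with probabilities p_i the entropy

```latex
% Shannon's information entropy, standard form, measured in bits per symbol:
H = -\sum_{i} p_i \log_2 p_i
```

A message whose symbols are all equally likely has the largest possible H – the most randomness and the least predictability.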

In the second half of the 1800s, James Clerk Maxwell developed the statistical mechanical description of entropy in thermodynamics, where macroscopic phenomena (like temperature and volume) were understood in terms of the microscopic behavior of molecules.   Soon after, physicist and philosopher Ludwig Boltzmann generalized Maxwell’s statistical understanding of this molecular action and formalized the logarithmic expression of entropy that is grounded in probabilities.  In that version, entropy is proportional to the logarithm of the number of microscopic ways (hard-to-see ways) that the system could acquire different macroscopic states (the things we see).   In other words, we can come to a statistical conclusion about how the behavior of an immense number of molecules that we don’t see will affect the events we do see.  It is Boltzmann’s logarithmic equation (which appears on his gravestone) that resembles Shannon’s equation, allowing both entropies to be understood, essentially, as probabilities related to the arrangement of things.
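
To make the resemblance concrete (these are the standard modern forms, added here as a gloss): Boltzmann’s gravestone equation counts the W microscopic arrangements behind a macroscopic state, and when all W arrangements are equally likely Shannon’s formula collapses to the same logarithm.

```latex
% Boltzmann's entropy (the gravestone equation, in modern notation):
S = k_B \ln W
% With W equally likely arrangements, Shannon's p_i = 1/W and his formula reduces to
% H = \log_2 W -- the same logarithmic form, up to the constant and the base of the log.
```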

It is certainly true that the mathematics that defined entropy at each stage of its development is an abstraction of the phenomenon.  However, reducing both the thermodynamic definition and the information theory definition to probabilities related purely to the arrangement of things is another (and fairly significant) level of abstraction.

You are likely familiar with the notion that entropy always increases, or as it is often understood, that things always tend to disorder.  But, unlike what one might expect, it is this ‘arrangement of things’ idea that seems to best explain why eggs don’t un-crack or ice doesn’t un-melt.  The number of possible arrangements of atoms in an un-cracked egg is far, far smaller than the number of possible arrangements of atoms in a cracked egg, and so far less likely.  Aatish Bhatia does a really nice job of explaining this way of understanding things here.
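
To see how lopsided the counting gets, here is a toy calculation of my own (not Bhatia’s, and far simpler than an egg): N particles that can each sit in the left or right half of a box. The ‘ordered’ state with everything on the left has exactly one arrangement, while the even split has astronomically many.

```python
# Toy illustration of the "counting arrangements" argument (my own sketch):
# each of n particles sits in the left or right half of a box.
from math import comb, log

for n in (10, 50, 100, 500):
    ordered = 1                   # only one arrangement puts every particle on the left
    disordered = comb(n, n // 2)  # number of arrangements with an even split
    # Boltzmann-style entropy is proportional to the log of the arrangement count,
    # so the even split becomes overwhelmingly more likely as n grows.
    print(f"N={n:4d}  even-split arrangements = {float(disordered):.3e}  "
          f"log(arrangement ratio) = {log(disordered / ordered):.1f}")
```

Even at N = 500 the even split outnumbers the all-on-one-side state by a factor of roughly 10^149, which is why the system never wanders back to the ‘ordered’ arrangement on its own.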

Next, the relationship between information and thermodynamics has had the attention of physicists since James Clerk Maxwell introduced a hypothetical little creature who seemed to challenge the second law of thermodynamics and has come to be known as Maxwell’s demon.  Some discussion of the demon can be found in a post I wrote in 2016.  In 2017, a Quanta Magazine article by Philip Ball reviewed the work of physicists, mathematicians, computer scientists, and biologists who explore the computational (or information processing) aspect of entropy as it relates to biology.

Living organisms seem rather like Maxwell’s demon. Whereas a beaker full of reacting chemicals will eventually expend its energy and fall into boring stasis and equilibrium, living systems have collectively been avoiding the lifeless equilibrium state since the origin of life about three and a half billion years ago. They harvest energy from their surroundings to sustain this nonequilibrium state, and they do it with “intention.”

In 1944, physicist Erwin Schrödinger proposed that living systems take energy from their surroundings to maintain non-equilibrium (or to stay organized) by capturing and storing information. He called it “negative entropy.”

Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information.

Now, physicist Jeremy England is considering pulling biology into physics (or at least some aspect of it) with the suggestion that the organization that takes place in living things is just one of the more extreme possibilities of a phenomenon exhibited by all matter. From an essay written by England,

The theoretical research I do with my colleagues tries to comprehend a new aspect of life’s evolution by thinking of it in thermodynamic terms. When we conceive of an organism as just a bunch of molecules, which energy flows into, through and out of, we can use this information to build a probabilistic model of its behaviour. From this perspective, the extraordinary abilities of living things might turn out to be extreme outcomes of a much more widespread process going on all over the place, from turbulent fluids to vibrating crystals – a process by which dynamic, energy-consuming structures become fine-tuned or adapted to their environments. Far from being a freak event, finding something akin to evolving lifeforms might be quite likely in the kind of universe we inhabit – especially if we know how to look for it.

Living things manage not to fall apart as fast as they form because they constantly increase the entropy around them. They do this because their molecular structure lets them absorb energy as work and release it as heat. Under certain conditions, this ability to absorb work lets organisms (and other systems) refine their structure so as to absorb more work, and in the process, release more heat. It all adds up to a positive feedback loop that makes us appear to move forward in time, in accordance with the extended second law. (emphasis added)

Finally (not in any true sense, just for the scope of this post), physicist Chiara Marletto has a theory of life based on a new fundamental theory of physics called Constructor Theory.  I wrote a guest blog for Scientific American on Constructor Theory in 2013.  In her essay, also published by Aeon, Marletto explains,

In constructor theory, physical laws are formulated only in terms of which tasks are possible (with arbitrarily high accuracy, reliability, and repeatability), and which are impossible, and why – as opposed to what happens, and what does not happen, given dynamical laws and initial conditions. A task is impossible if there is a law of physics that forbids it. Otherwise, it is possible – which means that a constructor for that task – an object that causes the task to occur and retains the ability to cause it again – can be approximated arbitrarily well in reality. Car factories, robots and living cells are all accurate approximations to constructors.

But the constructor itself, the thing that causes a transformation, is abstracted away in constructor theory, leaving only the input/output states.  ‘Information’ is the only thing that remains unchanged in each of these transformations, and this is the focus of constructor theory.  In constructor theory, this underlying independence of information involves a more fundamental level of physics than particles, waves and space-time. And the expectation is that this ‘more fundamental level’ may be shared by all physical systems (another generality).

The input/output states of Constructor Theory are expressed as “ordered pairs of states” and are called construction tasks.  The idea is no doubt a distant cousin of the ordered pairs of numbers we learned about in high school, along with the one-to-oneness and composition of functions taught in pre-calculus!  And Constructor Theory is an algebra – a new one, certainly, but an algebra nonetheless.  This algebra is not designed to systematize current theories, but rather to find their foundation and then open a window onto things that we have not yet seen.
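
As a playful illustration of that family resemblance (my own sketch, not Marletto’s formalism), a construction task can be modeled as nothing more than an ordered pair of states, with tasks chaining together the way composed functions do:

```python
# A loose, illustrative sketch (my construction, not constructor theory proper):
# a "task" as an ordered pair of input/output states, plus composition of tasks.
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """An ordered pair (input_state, output_state) standing in for a construction task."""
    input_state: str
    output_state: str

    def compose(self, other: "Task") -> "Task":
        # Two tasks compose when the output of the first matches the input of the second,
        # echoing the composition of functions from pre-calculus.
        if self.output_state != other.input_state:
            raise ValueError("tasks do not compose: output and input states differ")
        return Task(self.input_state, other.output_state)

# Toy usage: two possible tasks chained into one.
melt = Task("ice", "water")
boil = Task("water", "steam")
print(melt.compose(boil))   # Task(input_state='ice', output_state='steam')
```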

According to Marletto:

The early history of evolution is, in constructor-theoretic terms, a lengthy, highly inaccurate, non-purposive construction that eventually produced knowledge-bearing recipes out of elementary things containing none. These elementary things are simple chemicals such as short RNA strands…Thus the constructor theory of life shows explicitly that natural selection does not need to assume the existence of any initial recipe, containing knowledge, to get started.

Marletto has also written on the constructor theory of thermodynamics, in which she argues that constructor theory highlights a relationship between information and the first law of thermodynamics, not just the second.

This story about information, thermodynamics, and life certainly suggests something about the value of abstraction.  As a writer, I’m not only interested in the progression of scientific ideas, but also in the power of generalities that seem to produce new vision, as well as amplify the details of what we already see.  It seems to me that there is a particular character to the knowledge that is produced when communities of thinkers move through abstractions that take them from measuring temperature and volume to information-driven theories of a science that could contain both physics and biology.  It’s all about relations.  I haven’t written this to answer my question about what’s happening here, but mostly to ask it.

