From coin flipping to branching universes

A recent column in Quanta Magazine, by theorist Sean Carroll, highlights the far-reaching implications of the role played by probability theory in quantum mechanics. Carroll’s intention is to bring into focus the need, which does seem to exist, for us to understand what, exactly, those probabilities are telling us. In quantum mechanics, the partnership of mathematics and physics has the unusual effect of both clarifying and mystifying things. Carroll’s concern is whether the probabilities, which seem to contradict long-held deterministic views of the physical world, should be thought of as properties of the objects studied or just the cognitive strategy of the subjects studying them. As I see it, this difficulty of unraveling the thought from the material may help us get a better look at the multidimensional nature of mathematics itself.

Probability is inextricably bound to our experience of uncertainty. When, in the 17th century, Pascal explored the calculation of probabilities, his efforts were aimed at finding ways to predict the results of games of chance. But these strategies were fairly quickly adopted to address questions of law and insurance, as these concerned chance (or random) events (like weather or disease) in the day-to-day lives of individuals. The mathematics of probability provided a way to think about future events, about which we are always uncertain. I read in a Britannica article that in the early 19th century, Laplace characterized probability theory as “good sense reduced to calculation.”

By the 18th century, Bayes’ theorem was already getting a lot of attention. It was beginning to look like the best calculation of likelihoods also relied on the experience of the individual doing the calculation. Bayes’ theorem is a formula for calculating conditional probabilities, probabilities that change when conditions are altered. One of the conditions that could become altered is what the observer knows. This brings attention back to the subject, which is different from the way we understand the likelihood of heads or tails in a coin toss. Since a coin toss can only yield one of two possible outcomes, we have come to understand that there is a 50/50 chance of either. The more times we toss the coin, the closer we get to seeing that 50/50 split in the outcomes. What we expect of the coin toss is entirely dependent on the nature of the coin. But conditional probabilities are not so clear. So how should physicists view our reliance on probabilities in quantum mechanical theory? This is what Carroll addresses.
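The coin-toss intuition above is easy to see in a short simulation. This is just an illustrative sketch (the function name and trial counts are my own choices, not anything from Carroll's column): the observed fraction of heads wanders, but drifts toward 0.5 as the number of tosses grows.

```python
import random

def fraction_of_heads(tosses):
    """Flip a fair coin `tosses` times; return the fraction that come up heads."""
    heads = sum(random.random() < 0.5 for _ in range(tosses))
    return heads / tosses

random.seed(42)  # fixed seed so the run is reproducible

# With more tosses, the observed frequency settles near the 50/50 split.
for n in (10, 1_000, 100_000):
    print(n, fraction_of_heads(n))
```

This long-run convergence of frequencies is exactly what the frequentist view takes probability to *be*.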

There are numerous approaches to defining probability, but we can distinguish between two broad classes. The “objective” or “physical” view treats probability as a fundamental feature of a system, the best way we have to characterize physical behavior. An example of an objective approach to probability is frequentism, which defines probability as the frequency with which things happen over many trials.

Alternatively, there are “subjective” or “evidential” views, which treat probability as personal, a reflection of an individual’s credence, or degree of belief, about what is true or what will happen. An example is Bayesian probability, which emphasizes Bayes’ law, a mathematical theorem that tells us how to update our credences as we obtain new information. Bayesians imagine that rational creatures in states of incomplete information walk around with credences for every proposition you can imagine, updating them continually as new data comes in. In contrast with frequentism, in Bayesianism it makes perfect sense to attach probabilities to one-shot events, such as who will win the next election.
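The updating rule Bayesians rely on can be made concrete with a small example. The scenario and names below are my own illustration, not Carroll's: a drawer holds one fair coin and one two-headed trick coin; we draw one at random, so our initial credence that we hold the trick coin is 0.5, and each observed heads shifts that credence via Bayes' law.

```python
def bayes_update(prior, like_h, like_not_h):
    """Return P(H | E) given a prior credence in hypothesis H.

    prior      : P(H), credence before seeing the evidence
    like_h     : P(E | H), probability of the evidence if H is true
    like_not_h : P(E | not H), probability of the evidence otherwise
    """
    evidence = like_h * prior + like_not_h * (1 - prior)
    return like_h * prior / evidence

# H = "we hold the trick coin". Heads is certain under H, 50/50 otherwise.
credence = 0.5
credence = bayes_update(credence, like_h=1.0, like_not_h=0.5)
print(credence)  # 2/3: one heads is evidence for the trick coin

credence = bayes_update(credence, like_h=1.0, like_not_h=0.5)
print(credence)  # 4/5: a second heads strengthens the credence further
```

Note that the number being updated is a degree of belief about a single, particular coin, which is precisely the kind of one-shot probability that makes sense to a Bayesian but not to a strict frequentist.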

In an Aeon article about Einstein’s rejection of unresolved randomness in any physical theory, Jim Baggott says this:

In essence, Bohr and Heisenberg argued that science had finally caught up with the conceptual problems involved in the description of reality that philosophers had been warning of for centuries. Bohr is quoted as saying: ‘There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.’ This vaguely positivist statement was echoed by Heisenberg: ‘[W]e have to remember that what we observe is not nature in itself but nature exposed to our method of questioning.’ Their broadly antirealist ‘Copenhagen interpretation’ – denying that the wave function represents the real physical state of a quantum system – quickly became the dominant way of thinking about quantum mechanics. More recent variations of such antirealist interpretations suggest that the wave function is simply a way of ‘coding’ our experience, or our subjective beliefs derived from our experience of the physics, allowing us to use what we’ve learned in the past to predict the future.

But this was utterly inconsistent with Einstein’s philosophy. Einstein could not accept an interpretation in which the principal object of the representation – the wavefunction – is not ‘real’.

Today, proponents exist for more than one model of the universe. There are models where probability is “fundamental and objective,” as Carroll says.

There is absolutely nothing about the present that precisely determines the future…What happens next is unknowable, and all we can say is what the long-term frequency of different outcomes will be.

In other theories, nothing is truly random and probability is entirely subjective. If we knew, not just the wave function, but all the hidden variables, we could predict the future exactly. As it stands, however, we can only make probabilistic predictions.

Finally there is the many-worlds resolution to the problem, which is Carroll’s favorite.

Many-worlds quantum mechanics has the simplest formulation of all the alternatives. There is a wave function, and it obeys Schrödinger’s equation, and that’s all. There are no collapses and no additional variables. Instead, we use Schrödinger’s equation to predict what will happen when an observer measures a quantum object in a superposition of multiple possible states. The answer is that the combined system of observer and object evolves into an entangled superposition. In each part of the superposition, the object has a definite measurement outcome and the observer has measured that outcome.

Everett’s brilliant move was simply to say, “And that’s okay” — all we need to do is recognize that each part of the system subsequently evolves separately from all of the others, and therefore qualifies as a separate branch of the wave function, or “world.” The worlds aren’t put in by hand; they were lurking in the quantum formalism all along.

I find it foolish to ignore that probability theory keeps pointing back at us. Christopher Fuchs, a physicist at the University of Massachusetts, is the founder of a school of thought dubbed QBism (for Quantum Bayesianism). In an interview published in Quanta Magazine, Fuchs explains that QBism goes against a devotion to objectivity “by saying that quantum mechanics is not about how the world is without us; instead it’s precisely about us in the world. The subject matter of the theory is not the world or us but us-within-the-world, the interface between the two.” And later:

QBism would say, it’s not that the world is built up from stuff on “the outside” as the Greeks would have had it. Nor is it built up from stuff on “the inside” as the idealists, like George Berkeley and Eddington, would have it. Rather, the stuff of the world is in the character of what each of us encounters every living moment — stuff that is neither inside nor outside, but prior to the very notion of a cut between the two at all.

The effectiveness of our thoughts on likelihood is astounding. Cognitive neuroscientists suggest that statistics is part of our intuition. They argue that we learn everything through probabilistic inferences. Optical illusions have been understood as the brain’s decision about the most likely source of a retinal image. Anil Seth, at the University of Sussex, argues that all aspects of the brain’s construction of our world are built with probabilities and inferences. The points in the geometry of Tononi’s Integrated Information Theory of consciousness are defined using probability distributions. Karl Friston’s free energy principle, first aimed at a better understanding of how the brain works, defines the boundaries around systems (like cells, organs, or social organizations) with a statistical partitioning – things that belong to each other are defined by the probability that the state of one thing will affect another. Uncertainty defines Claude Shannon’s information entropy and Max Tegmark’s laws of thermodynamics. It’s also interesting that a thought experiment, proposed by James Clerk Maxwell in 1871 and known as Maxwell’s demon, was designed to examine the question of whether or not the second law of thermodynamics is only statistically certain.
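Shannon's entropy, mentioned above, is the standard way uncertainty itself is quantified: it measures how unpredictable a distribution of outcomes is, in bits. A minimal sketch (the function name is my own):

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)), in bits; terms with p = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # 1.0 bit: a fair coin is maximally uncertain
print(shannon_entropy([0.9, 0.1]))  # ~0.47 bits: a biased coin is more predictable
print(shannon_entropy([1.0]))       # 0.0 bits: a certain outcome carries no surprise
```

The fair coin from earlier reappears here as the most uncertain two-outcome system there is, which is one more way probability keeps tying these threads together.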

As Carroll sees it, “The study of probability takes us from coin flipping to branching universes.” So what’s me and what’s not me? Mathematics has a way of raising this issue over and over again. Maybe we are beginning to look to it for guidance.
