I’m not completely sure I understand where my desire to grasp the value of abstractions is taking me, but as I think about mathematics, and more recent trends in the sciences, I keep wanting to get further and further behind what our symbolic reasoning is actually doing, and how it’s doing it. I have this idea that if I can manage to somehow see around, or inside, the products of our minds, I’ll see something new. Maybe I can simplify the question that my own mind keeps asking, but that won’t help answer it.

Here’s the thing I’m stuck on. Everything that we do (in the arts and the sciences) is based on the continuous flow of thoughts, which amass into threads of thought that have now moved through the hands and minds of countless individuals, over thousands of years, creating giant fabrics of meaningful information. What are these threads made from? How do they develop? How are they related to everything else in nature? And what might the giant fabrics of human culture have to do with everything else?

While we generally distinguish symbols from reality, a very large part of our reality is now built more from *ideas* than from concrete or wood, using the symbols that make the sharing of these ideas possible. Even language captures relations, among sentiments and experience, in symbol. These relations, together with a kind of reasoning or logic, build our social and political systems. We’re so immersed in our languages and our symbols that we don’t even see them. And I don’t know if we *can* see them in the way that could address my curiosities. But I’m convinced that we can see more than we do. And the steady growth of information theory (and its more immediate relatives, like algorithmic information theory and quantum information theory) seems to shed new light on *the reality of abstract relations*. Mathematics is distinguished as the discipline that explores purely abstract relations.
The fruits of many of these explorations now service the parts of our world that we try to get our hands on – things like astronomy, engineering, physics, computer science, biology, medicine, and so on. I’m beginning to consider that information sciences may yet uncover something about why mathematics has been so fruitful.

I read today about Constantinos Daskalakis, who was awarded the Rolf Nevanlinna Prize at the International Congress of Mathematicians 2018 for his outstanding contributions to the mathematical aspects of information sciences. In particular, Daskalakis made some new observations about some older ideas – namely game theory and what is called a Nash equilibrium. Marianne Freiberger explains Nash equilibrium in Plus Magazine:

When you throw together a collection of agents (people, cars, etc.) in a strategic environment, they will probably start by trying out all sorts of different ways of behaving — all sorts of different strategies. Eventually, though, they all might settle on the single strategy that suits them best in the sense that no other strategy can serve them better. This situation, when nobody has an incentive to change, is called a Nash equilibrium.
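To make that definition concrete for myself, here’s a toy sketch (my own illustration, not from the article) that checks every cell of a two-player game for the “nobody has an incentive to change” condition. It only looks for *pure-strategy* equilibria — Nash’s theorem guarantees an equilibrium only once players are allowed to randomize — and the prisoner’s dilemma payoffs below are the standard textbook ones.

```python
def pure_nash_equilibria(payoff_row, payoff_col):
    """Return all cells (i, j) where the row player's choice i is a best
    response to j, and the column player's choice j is a best response to i
    -- i.e., neither player gains by deviating unilaterally."""
    n_rows = len(payoff_row)
    n_cols = len(payoff_row[0])
    equilibria = []
    for i in range(n_rows):
        for j in range(n_cols):
            row_best = all(payoff_row[i][j] >= payoff_row[k][j]
                           for k in range(n_rows))
            col_best = all(payoff_col[i][j] >= payoff_col[i][k]
                           for k in range(n_cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: action 0 = cooperate, action 1 = defect.
row = [[3, 0], [5, 1]]   # row player's payoffs
col = [[3, 5], [0, 1]]   # column player's payoffs
print(pure_nash_equilibria(row, col))  # [(1, 1)] -- mutual defection
```

Note how the equilibrium here is stable but not good for anyone — both players would be better off cooperating, yet neither has an incentive to change alone.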

A Nash equilibrium is not necessarily positive, it’s just stable. Nash proved in 1950 that no matter how complex a system is, it is always possible to arrive at an equilibrium. But a question remained – knowing that a system *can* stabilize doesn’t tell us whether it *will*. And nothing in Nash’s proof tells us how these states of equilibrium are constructed, or how they come about. People have searched for algorithms that could find the Nash equilibrium of a system, and they found some, but it wasn’t clear how long the computations would take to complete. Daskalakis explains in Freiberger’s article:

“My work is a critique of Nash’s theorem coming from a computational perspective,” he explains. “What we showed is that [while] an equilibrium may exist, it may not be attainable. The best supercomputers may not be able to find it. This theorem applies to games that we play, it applies to road networks, it applies to markets. In many complex systems it may be computationally intractable for the system to find a stable operational mode. The system could be wandering around the equilibrium, or be far away from the equilibrium, without ever being drawn to a stable state.”

Daskalakis’ work alerts people working in relevant industries that a Nash equilibrium, even though it exists, may be essentially unattainable – either because no algorithm can find it efficiently, or because the problem is simply too complex. These considerations matter to people who design things like road systems, or online products like dating sites or taxi apps.
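That picture of a system “wandering around the equilibrium” can be seen in a toy example of my own (not from Daskalakis’ work): in matching pennies, the only Nash equilibrium is a mixed one (each player randomizing 50/50), and naive best-response dynamics never settle — they just cycle.

```python
def best_response_row(col_action):
    # The row player wins by matching the column player's coin.
    return col_action

def best_response_col(row_action):
    # The column player wins by mismatching.
    return 1 - row_action

row_action, col_action = 0, 0
history = []
for _ in range(8):
    history.append((row_action, col_action))
    # Each player in turn switches to a best response to the other.
    row_action = best_response_row(col_action)
    col_action = best_response_col(row_action)

print(history)  # the play cycles between (0, 1) and (1, 0) forever
```

No fixed point is ever reached: the row player wants the two coins to match and the column player wants them to differ, so every pure state gives somebody an incentive to change — the stable state exists only in the space of randomized strategies, and this simple adjustment process never finds it.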

When designing such a system, you want to optimise some objective: you want to make sure that traffic flows consistently, that potential dates are matched up efficiently, or that taxi drivers and riders are happy.

If you are counting on an equilibrium to deliver this happy state of affairs, then you better make sure the equilibrium can actually be reached. “You better be careful that the rules that you set inside your system do not lead to a situation where our theorem applies,” says Daskalakis. “Your system should be clean enough and have the right mathematical structure so that equilibria can arise easily from the interaction of agents. [You need to make sure] that agents are able to get to equilibrium and that in equilibrium the objectives are promoted.”

“Another option is to forget about the equilibrium and try to guarantee that your objective is promoted even [with] dynamically changing behaviour of people in your system.”

This confluence of game theory, complexity theory and information science has made it possible to see the abstract more clearly, or has made a mathematical notion somehow measurable. The work includes a look at how hard finding the solution to a problem can be, and whether or not the ideal can be actualized. What struck me about the discussion in *Plus* was the fact that Daskalakis’ work was thought to address the difference between the mathematical existence demonstrated by Nash and its real world counterparts, maybe even whether or how they are related. These things touch on my questions. Nash’s proof is a non-constructive existence proof. It doesn’t build anything, it just establishes that something is true. Daskalakis is a computer scientist and an engineer. He expects to build things. But the problem is attacked with mathematics. His effort spans game theory in mathematics, complexity theory (a branch of mathematics that classifies problems according to how hard they are) and information sciences. There is an interesting confluence of things here. It didn’t answer any of the questions I have, but it encouraged me. I also like this quote from a recent Quanta Magazine article about Daskalakis:

The decisions the 37-year-old Daskalakis has made over the course of his career — such as forgoing a lucrative job right out of college and pursuing the hardest problems in his field — have all been in the service of uncovering distant truths. “It all originates from a very deep need to understand something,” he said. “You’re just not going to stop unless you understand; your brain cannot stay still unless you understand.”
