Not long ago I wrote about the work of Bob Coecke, an Oxford University physicist, who is pioneering an application of category theory to quantum mechanics. In that post I referred to the work he is also doing with language, using the same kind of graphic structures. I drew attention to the fact that category theory’s ‘process’ rather than ‘object’ orientation may have helped open this door. But there are a number of things that distinguish Coecke’s work that are worth thinking about. I spent some time today doing just that.
Coecke spoke briefly, in a Foundational Questions Institute podcast, and tried to clarify what makes his approach to quantum mechanics and linguistics unique and powerful. He first developed his graphical language to simplify the mathematics of quantum theory. One of the keys to the effectiveness of this tool is that, while it is a picture narrative, it is also computational. The graphics contain quantitative information. They are, Coecke explains, a calculus. And apparently when the object of our attention changes from a quantitative description to a scheme of diagrammed interactions, we can actually see more. It’s as if this calculus lets our visual senses do some of the thinking. It is the state of a physical system that flows through such a diagram. Physical laws are then expressed in the topology of the diagram, in how the flow is characterized or defined. Coecke suggested a cooking analogy. A recipe can be described as a flow of actions like chopping, mixing, boiling, and frying. For some aspects of the recipe the order of actions is not important, and for other aspects the order is crucial. A computation in Coecke’s graphical language can be thought of as rearranging the diagram without changing its topology (or, in the case of the recipe, without changing the outcome of the dish).
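To make the recipe analogy a bit more concrete, here is a small toy model of my own in Python (it is only an illustration of the idea, not Coecke’s calculus): each step is a function, sequential composition plays the role of wiring one box into the next, and the model shows which reorderings leave the dish unchanged. The names chop, fry, boil, and compose are all invented for this sketch.

```python
# A toy model of a recipe as composable processes (illustrative only).
# The "dish" records what has happened to each ingredient.

def chop(dish):
    return {**dish, "onion": dish["onion"] + ["chopped"]}

def fry(dish):
    return {**dish, "onion": dish["onion"] + ["fried"]}

def boil(dish):
    return {**dish, "rice": dish["rice"] + ["boiled"]}

def compose(*steps):
    """Sequential composition: feed the output of one step into the next."""
    def pipeline(dish):
        for step in steps:
            dish = step(dish)
        return dish
    return pipeline

start = {"onion": [], "rice": []}

# Chopping the onion and boiling the rice act on independent ingredients,
# so swapping them leaves the outcome unchanged -- the analogue of
# deforming a diagram without changing its topology.
assert compose(chop, boil)(start) == compose(boil, chop)(start)

# Frying before chopping is a genuinely different dish: these steps act on
# the same ingredient, so their order is part of the recipe's meaning.
assert compose(chop, fry)(start) != compose(fry, chop)(start)
```

The only point of the sketch is that “rearranging the diagram without changing its topology” corresponds to reorderings of the first kind, where the steps act on independent parts of the system.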
That a picture can contain quantitative information is one of the keys to mathematics in general. But computing with the pictures reaches another level of abstraction. This possibility grows out of the branch of mathematics called category theory, about which the Stanford Encyclopedia of Philosophy says:
It could be argued that category theory represents the culmination of one of the deepest and most powerful tendencies in twentieth century mathematical thought: the search for the most general and abstract ingredients in a given situation.
Category theory can identify ‘universal properties,’ or the way that different constructions are actually doing the same thing. For example, through the lens of category theory, a Cartesian product in set theory, a product of topological spaces, and the conjunction of propositions in a deductive system are the same kind of action. Identifying this fact makes it possible to express a problem in one area of mathematics as a problem in another area. What matters is the way the objects within a given structure are related, whether they are points on a line or propositions in logic. With Coecke’s work, this level of abstraction not only bridges different areas within mathematics, it also creates some cross-disciplinary bridges and opens the door to the application of Coecke’s graphics to linguistics.
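To give one concrete sense of what ‘the same kind of action’ means here: all three of those constructions satisfy the defining condition of a categorical product. Written out in LaTeX (this is the standard textbook definition, nothing specific to Coecke’s work):

```latex
% Universal property of the product A \times B, with projections
% \pi_1 and \pi_2: any pair of maps into A and B factors uniquely
% through A \times B.
\forall\, C,\ \forall\, f\colon C \to A,\ \forall\, g\colon C \to B,
\quad \exists!\ \langle f,g\rangle\colon C \to A \times B
\quad \text{such that} \quad
\pi_1 \circ \langle f,g\rangle = f
\quad \text{and} \quad
\pi_2 \circ \langle f,g\rangle = g .
```

Read in the category of sets, this condition picks out the ordinary Cartesian product; read in the category of topological spaces, the product topology; read in a deductive system, where the maps are proofs, it picks out the conjunction of two propositions.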
In the podcast, Coecke tells us that the way words interact in a sentence (to create meaning) is similar to the interactions in the subatomic world of quantum physics. This thought alone could inspire a host of philosophical discussions, but I’ll stay with computational linguistics for the moment.
Most of the existing models of human language use the meaning of individual words (as in the algorithms of search engines) or the rules of grammar, but it has not been possible to formalize their interaction in an algorithmic way. Coecke’s work suggests a way to do this. In existing models, words are defined as vectors in a multi-dimensional space where each dimension represents a key attribute of the word. The vectors used to build the meaning of the words dog or cat, for example, would have no overlap with the ones used to build the meaning of a word like banker. The dictionaries compiled from these vectors are understood in terms of distance: the distance between dog and cat in such a dictionary would be much less than the distance between, say, dog and banker. Coecke and his team have figured out a way to use his graphical links, which carry the grammatical rules, to connect individual words and create a vector for a whole sentence. And just as with word vectors, these sentence vectors imply distances between sentences, neighborhoods of related meaning.
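Here is a small sketch in Python of the vector picture. The three-dimensional ‘attribute’ vectors, the random verb tensor, and the contraction pattern are all toy choices of mine for illustration; in the actual compositional model the verb’s tensor and the wiring come from the grammar, which this sketch skips entirely.

```python
import numpy as np

# Toy word vectors in a three-dimensional attribute space (the dimensions
# here are invented; real models use hundreds, learned from text corpora).
dog    = np.array([0.9, 0.8, 0.0])
cat    = np.array([0.8, 0.9, 0.0])
banker = np.array([0.0, 0.1, 0.9])

def distance(u, v):
    """Cosine distance: small when two vectors share most of their attributes."""
    return 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(distance(dog, cat))     # small: dog and cat sit close together
print(distance(dog, banker))  # large: almost no overlapping attributes

# A schematic stand-in for the compositional step: treat a transitive verb
# as a (noun x sentence x noun) tensor and contract it with the subject and
# object vectors to get a vector for the whole sentence. The tensor below
# is random, purely to show the mechanics.
rng = np.random.default_rng(0)
chases = rng.random((3, 2, 3))                    # hypothetical verb tensor
s1 = np.einsum('i,isj,j->s', dog, chases, cat)    # "dog chases cat"
s2 = np.einsum('i,isj,j->s', cat, chases, dog)    # "cat chases dog"
print(distance(s1, s2))  # sentences now have distances, just like words
```

Once sentences live in a vector space of their own, the same distance measure that groups dog with cat can group whole sentences into neighborhoods of related meaning.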
In the podcast interview, Coecke emphasized that while there is a logic to the graphical analysis of sentence construction, it is not propositional, not a yes/no logic, not about facts that beget other facts, or things that are true or false. It is more an analysis of the information flowing through the wires of the diagram.
The work was outlined in a New Scientist article at the end of 2010. It’s young but, as the article says, it “aims to create a universal ‘theory of meaning’ in which language and grammar are encoded in a set of mathematical rules.”
I’m particularly interested in the fact that the same kinds of orders seem to bring about both matter and meaning, and that mathematics has actually found some of them.
well
i am closer to him
seeing he is in oxford
perhaps we could set up a conversation?
the trick is to come up with a purpose that is functional for us all
Me too.
very interesting bit about an algorithmic approach to language
creating vectors for a sentence
and the implication about distance between sentences
definitely matches the stepping stone i intuited a while back
wrt coding and creating a vector from a tweet
and
wrt the space-bar on the keyboard triggering an active link
(rather than the “space” between “words” we think of as being empty in typography)
i’d love to engage those dudes!