
Compression, meaning, and mathematics

One of the more interesting applications of algorithmic action can be seen in Jürgen Schmidhuber’s work on artificial curiosity.

Schmidhuber has been building what he calls ‘artificial scientists and artists’ that possess an algorithmic mechanism for motivating invention. He provides a brief and fairly straightforward description of his creative machines in the transcript of a talk he gave at TEDxLausanne on January 20.

Let me explain it in a nutshell. As you are interacting with your environment, you record and encode (e.g., through a neural net) the growing history of sensory data that you create and shape through your actions.

Any discovery (say, through a standard neural net learning algorithm) of a new regularity in the data will make the code more efficient (e.g., less bits or synapses needed, or less time). This efficiency progress can be measured — it’s the wow-effect or fun! A real number.

This ‘efficiency progress’ or ‘learning progress’ is the ongoing and successful compression of data, the discovery of regularities or symmetries that reduce the work necessary to encode the data.  The webpage describing Schmidhuber’s Theory of Creativity says that

Since 1990 Jürgen Schmidhuber has built curious, creative agents that may be viewed as simple artificial scientists & artists with an intrinsic desire to explore the world by continually inventing new experiments. They never stop generating novel & surprising stuff…
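
Just to make the measure concrete, here is a minimal sketch in Python (my own toy illustration, not Schmidhuber’s actual architecture; the probability tables below stand in for whatever compressor or neural net encodes the history). The intrinsic reward is simply the number of bits saved when a better model of the same sensory history replaces the old one:

    import math
    from collections import Counter

    def code_length_bits(data, probs):
        # Bits an ideal coder would need for `data` under the probability model `probs`
        # (a stand-in for whatever compressor or neural net encodes the sensory history).
        return sum(-math.log2(probs[x]) for x in data)

    def compression_progress(history, old_probs, new_probs):
        # Sketch of the intrinsic reward: bits saved when an improved model of the
        # same history replaces the old one. A single real number, as in the talk.
        return code_length_bits(history, old_probs) - code_length_bits(history, new_probs)

    # Toy run: a history with an obvious regularity.
    history = list("abababababab")

    # Before learning: every lowercase letter is treated as equally likely.
    old_probs = {c: 1 / 26 for c in "abcdefghijklmnopqrstuvwxyz"}

    # After learning: frequencies estimated from the history itself.
    counts = Counter(history)
    new_probs = {c: counts[c] / len(history) for c in counts}

    print(compression_progress(history, old_probs, new_probs))  # positive: the measurable "wow-effect"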

It’s an interesting model, and clearly effective. The question, of course, is to what extent it accounts for human creativity. It does make use of an important development in artificial intelligence – the artificial Recurrent Neural Network – one that mimics the feedback mechanisms in the brain. But it is still algorithmic in nature.
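
For what it’s worth, the recurrence itself is easy to sketch. The toy cell below (a plain recurrent unit, not the LSTM networks Schmidhuber’s group is actually known for; the names and dimensions are made up) shows the feedback at work: each new state is computed from the current input together with the previous state fed back in.

    import numpy as np

    def rnn_step(x, h, W_xh, W_hh, b):
        # One step of a plain recurrent cell: the new hidden state depends on the
        # current input and on the previous hidden state fed back in. That feedback
        # loop is what distinguishes recurrent nets from feed-forward ones.
        return np.tanh(W_xh @ x + W_hh @ h + b)

    # Toy dimensions and random weights: 3-dimensional inputs, 4-dimensional state.
    rng = np.random.default_rng(0)
    W_xh, W_hh, b = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), np.zeros(4)

    h = np.zeros(4)                        # initial state: no history yet
    for x in rng.normal(size=(5, 3)):      # a short sequence of inputs
        h = rnn_step(x, h, W_xh, W_hh, b)  # the state carries the history forward
    print(h)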

Yet Nobel laureate and neuroscientist Gerald Edelman thinks it’s clear that the brain does not function algorithmically. Edelman has been working on a theory of mind since the late 1970s. During an interview for Discover in 2009, he said that someday scientists will make a conscious artifact. He was then asked whether, by proposing the possibility of artificial consciousness, he was comparing the human brain to a computer. His answer:

No. The world is unpredictable, and thus it is not an unambiguous algorithm on which computing is based. Your brain has to be creative about how it integrates the signals coming into it. And computers don’t do that. The human brain is capable of symbolic reference, not just syntax. Not just the ordering of things as you have in a computer, but also the meaning of things, if you will.

I think the key here is probably ‘meaning,’ which for Edelman is what brings humanity to its current level of consciousness.  Also during the interview, Edelman describes the evolution of our consciousness in this way:

About 250 million years ago, when therapsid reptiles gave rise to birds and mammals, a neuronal structure probably evolved in some animals that allowed for interaction between those parts of the nervous system involved in carrying out perceptual categorization and those carrying out memory. At that point an animal could construct a set of discriminations: qualia. It could create a scene in its own mind and make connections with past scenes. At that point primary consciousness sets in. But that animal has no ability to narrate. It cannot construct a tale using long-term memory, even though long-term memory affects its behavior. Then, much later in hominid evolution, another event occurred: Other neural circuits connected conceptual systems, resulting in true language and higher-order consciousness. We were freed from the remembered present of primary consciousness and could invent all kinds of images, fantasies, and narrative streams. (emphases my own)

Edelman pursues the creation of conscious artifacts by constructing what he calls brain-based devices (BBDs). The intent is to model the brain for the sake of understanding it, not imitating it.

It looks like maybe a robot, R2-D2 almost. But it isn’t a robot, because it’s not run by an artificial intelligence [AI] program of logic. It’s run by an artificial brain modeled on the vertebrate or mammalian brain. Where it differs from a real brain, aside from being simulated in a computer, is in the number of neurons. Compared with, let’s say, 30 billion neurons and a million billion connections in the human cortex alone, the most complex brain-based devices presently have less than a million neurons and maybe up to 10 million or so synapses, the space across which nerve impulses pass from one neuron to another.

What is interesting about BBDs is that they are embedded in and sample the real world. They have something that is equivalent to an eye: a camera. We give them microphones for the equivalent of ears. We have something that matches conductance for taste. These devices send inputs into the brain as if they were your tongue, your eyes, your ears. Our BBD called Darwin 7 can actually undergo conditioning. It can learn to pick up and “taste” blocks, which have patterns that can be identified as good-tasting or bad-tasting. It will stay away from the bad-tasting blocks, which have images of blobs instead of stripes on them — rather than pick them up and taste them. It learns to do that all on its own.
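
The BBD itself is a large neuronal simulation, not a program of explicit rules, so any code here can only be a caricature. Still, a deliberately crude sketch of the conditioning loop just described (the names, values, and learning rate are all mine) may help fix the idea: a value is learned for each visual pattern from taste feedback, and patterns learned to be bad stop being picked up.

    import random

    # Crude caricature of the conditioning described above, not Edelman's simulation:
    # the device learns a value for each visual pattern from "taste" feedback and
    # then avoids the patterns it has learned to value negatively.
    values = {"stripes": 0.0, "blobs": 0.0}   # learned value per pattern
    taste = {"stripes": +1.0, "blobs": -1.0}  # good- vs. bad-tasting blocks

    alpha = 0.2  # learning rate
    random.seed(0)
    for _ in range(50):
        pattern = random.choice(list(values))
        if values[pattern] >= 0:              # only pick up blocks not yet known to be bad
            values[pattern] += alpha * (taste[pattern] - values[pattern])

    print(values)  # "stripes" drifts toward +1.0; "blobs" stops at its first bad taste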

There is some fundamental disagreement about how or whether one can artificially produce a creative, intelligent agent. But a few things in each of these perspectives got my attention. I do find it interesting that curiosity and inventive action can be understood as compression, that the way to compression is through finding regularity and symmetry, and that the strategy can be reproduced with software. It says something about the power of the strategy and the software. I remember reading about how language was cognitive compression. I thought at the time that mathematics excelled at compressing meaning. And mathematics studies exactly the strategies of compression (like symmetry, regularity and pattern). Compression must have very broad application in cognitive science. No doubt this software mimics something biological.
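
A trivial example of what I mean (my own, just to fix the idea): wherever a regularity is found, a short rule can stand in for a long list of data.

    # The first 1000 squares written out take thousands of characters; the rule
    # that generates them fits on one line. The regularity is the compression.
    explicit = [n * n for n in range(1000)]   # the data itself
    def rule(n):                              # the pattern that stands in for it
        return n * n

    print(len(str(explicit)))                                 # several thousand characters
    print(all(rule(n) == explicit[n] for n in range(1000)))   # True: the rule reproduces every entry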

The other is that Edelman’s work can only be understood when the brain is understood in a wholly biological way – embedded in the body, which is embedded in the world. This includes Edelman’s evolutionary view of learning:

In Edelman’s grand theory of the mind, consciousness is a biological phenomenon and the brain develops through a process similar to natural selection. Neurons proliferate and form connections in infancy; then experience weeds out the useless from the useful, molding the adult brain in sync with its environment.
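
A toy caricature of that selectionist picture (mine, not Edelman’s model; the counts and thresholds are arbitrary) looks something like this: generate a surplus of connections, strengthen the ones experience actually uses, and weed out the rest.

    import random

    random.seed(1)
    connections = {i: 0.0 for i in range(1000)}      # proliferation: many candidate connections
    exercised = set(random.sample(range(1000), 50))  # the few that experience actually uses

    for _ in range(100):                             # repeated use strengthens a connection
        for i in exercised:
            connections[i] += 0.1

    kept = {i: w for i, w in connections.items() if w > 1.0}  # weed out the unused
    print(len(connections), "->", len(kept))                  # 1000 -> 50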

The creativity of Schmidhuber’s agents is impressive, but how likely is it that they would invent a mathematical idea? It is, I think, the evolutionary process that Edelman is exploring that brings about mathematics.

1 comment to Compression, meaning, and mathematics

  • happyseaurchin

    i agree with you
    a remarkable observation:
    invention or curiosity as compression
    and your own
    “language was cognitive compression”

    and edelman’s embedded in the real world seems pretty solid
    though i wouldn’t discount the “immersion” of schmidhuber’s algorithms
    if we consider the language-math realm to be an “environment” or ecology