
Intelligence, artificial and otherwise

Earlier this month, Nature reported on Artificial Intelligence (AI) research in which deep learning networks (an AI strategy) spontaneously generated patterns of computation bearing a striking resemblance to the activity generated by our own grey matter – namely, by the neurons called grid cells in the mammalian brain. The patterned firing of grid cells enables mammals to create cognitive maps of their environment. The artificial network that unexpectedly produced something similar was developed by neuroscientists at University College London, together with AI researchers at the London-based Google company DeepMind. A computer-simulated rat was trained to track its movement in a virtual environment.

The Nature article by Alison Abbott tells us that the grid-cell-like coding was so good that the virtual rat was even able to learn short-cuts in its virtual world. And here’s an interesting response to the work from neuroscientist Edvard Moser, a co-discoverer of biological grid cells:

“This paper came out of the blue, like a shot, and it’s very exciting,” says neuroscientist Edvard Moser at the Kavli Institute for Systems Neuroscience in Trondheim, Norway. Moser shared the 2014 Nobel Prize in Physiology or Medicine for his co-discovery of grid cells and the brain’s other navigation-related neurons, including place cells and head-direction cells, which are found in and around the hippocampus region.

“It is striking that the computer model, coming from a totally different perspective, ended up with the grid pattern we know from biology,” says Moser. The work is a welcome confirmation that the mammalian brain has developed an optimal way of arranging at least this type of spatial code, he adds.

There is something provocative about measuring the brain’s version of grid cell navigation against this emergent but simulated grid cell action.

In Nature’s News and Views, Francesco Savelli and James J. Knierim tell us a bit more about the study. First, for the sake of clarity, what researchers call deep learning is a kind of machine learning characterized by layers of computations, structured in such a way that the output from one computation becomes the input of another. Each layer transforms the data, or information, it receives, translating it into “compact representations” that promote the success of the task at hand – like translating pixel data into a face that can be recognized. A system like this can learn to process inputs so as to achieve particular outputs. The extent to which each of the computations, in each of the layers, affects the final outcome is determined by how it is weighted, and optimization algorithms adjust these weights to improve the results. Deep learning networks have been successful with computer vision, speech recognition, and games, among other things. But navigating oneself through the space of one’s environment is a fairly complex task.
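To make that layered picture concrete, here is a minimal sketch of my own (nothing taken from the study) of a two-layer network in Python: each layer transforms its input, the result becomes the input of the next layer, and a simple gradient-descent step adjusts the weights so that the final output improves on a toy task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 examples with 8 input features and a single target value each.
X = rng.normal(size=(100, 8))
y = rng.normal(size=(100, 1))

# Two layers of weights; the output of layer 1 becomes the input of layer 2.
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))

learning_rate = 0.01
for step in range(200):
    h = np.tanh(X @ W1)        # layer 1: a "compact representation" of the input
    y_hat = h @ W2             # layer 2: prediction built from that representation
    error = y_hat - y
    loss = np.mean(error ** 2)
    if step % 50 == 0:
        print(step, loss)

    # Optimization step: adjust each layer's weights in proportion to how much
    # they contributed to the error (plain gradient descent).
    grad_out = 2 * error / len(X)
    grad_W2 = h.T @ grad_out
    grad_W1 = X.T @ ((grad_out @ W2.T) * (1 - h ** 2))
    W2 -= learning_rate * grad_W2
    W1 -= learning_rate * grad_W1
```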

The research that led to Moser’s Nobel Prize in 2014 was the discovery of a family of neurons that produces the cognitive maps we develop of our environments. There are place cells, neurons that fire when an organism is in a particular position in an environment, often one marked by landmarks. There are head-direction neurons that signal where the animal seems to be headed. There are also neurons that respond to the presence of an edge of the environment. And, most relevant here, there are grid cells. Grid cells fire when an animal is at any of a set of points that define a hexagonal grid pattern across its environment, so that the neuron’s firing maps onto points on the ground. They contribute to the animal’s sense of position, and correspond to the direction and distance covered by some number of steps taken.
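For readers who like to see the geometry, a common idealization in the modeling literature (my illustration here, not anything from this study) describes a grid cell’s firing field as the sum of three cosine gratings whose directions are 60 degrees apart; the spacing, orientation, and phase below are arbitrary choices made for the sketch.

```python
import numpy as np

def grid_cell_rate(x, y, spacing=1.0, orientation=0.0, phase=(0.0, 0.0)):
    """Idealized firing rate of a grid cell at position (x, y), scaled to [0, 1]."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)                             # wave number for the chosen grid spacing
    angles = orientation + np.array([0.0, np.pi / 3, 2 * np.pi / 3])   # three directions, 60 degrees apart
    rate = sum(np.cos(k * np.cos(t) * (x - phase[0]) + k * np.sin(t) * (y - phase[1]))
               for t in angles)
    return (rate + 1.5) / 4.5                                          # peaks (rate = 1) at the vertices of a hexagonal lattice

# The cell fires maximally at its own lattice of locations across the environment.
print(grid_cell_rate(0.0, 0.0), grid_cell_rate(0.5, 0.25))
```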

Banino and colleagues wanted to create a mechanism for self-location in a deep-learning network. Such a mechanism, which keeps track of position by accumulating information about one’s own movement, is referred to as path integration.

Because path integration involves remembering the output from the previous processing step and using it as input for the next, the authors used a network involving feedback loops. They trained the network using simulations of pathways taken by foraging rodents. The system received information about the simulated rodent’s linear and angular velocity, and about the simulated activity of place and head-direction cells…
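As a rough sketch of what such a setup can look like in code (my own illustration, with made-up layer sizes, random stand-in data, and a simplified loss, rather than the authors’ architecture), a recurrent layer carries a memory of previous steps while velocity signals arrive one step at a time, and a linear readout is trained to predict simulated place-cell activity:

```python
import torch
import torch.nn as nn

class PathIntegrator(nn.Module):
    """A recurrent network that turns step-by-step velocity signals into place-cell predictions."""
    def __init__(self, n_velocity=3, n_hidden=128, n_place_cells=256):
        super().__init__()
        self.rnn = nn.LSTM(n_velocity, n_hidden, batch_first=True)   # the feedback loop
        self.readout = nn.Linear(n_hidden, n_place_cells)            # predicted place-cell activity

    def forward(self, velocities):
        hidden_states, _ = self.rnn(velocities)   # memory of earlier steps carries position forward
        return self.readout(hidden_states)

# One illustrative training step on random stand-in data; real training would use
# simulated foraging trajectories and the corresponding place-cell activations.
model = PathIntegrator()
velocities = torch.randn(32, 100, 3)              # a batch of 100-step trajectories
place_targets = torch.rand(32, 100, 256)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(velocities), place_targets)
loss.backward()
optimizer.step()
```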

And this is what happened:

The authors found that patterns of activity resembling grid cells spontaneously emerged in computational units in an intermediate layer of the network during training, even though nothing in the network or the training protocol explicitly imposed this type of pattern. The emergence of grid-like units is an impressive example of deep learning doing what it does best: inventing an original, often unpredicted internal representation to help solve a task.

These grid-like units allowed the network to keep track of position, but whether they would function in the network’s navigation to a goal was still a question. The authors addressed this question by adding a reinforcement-learning component: the network learned to assign values to particular actions at particular locations, and higher values were assigned to actions that brought the simulated animal closer to a goal.

The grid-like representation markedly improved the ability of the network to solve goal-directed tasks, compared to control simulations in which the start and goal locations were encoded instead by place and head-direction cells.
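The idea of assigning values to actions at particular locations is easiest to see in a tabular toy version. The sketch below is far simpler than the deep reinforcement-learning setup used in the study, but it illustrates the same principle: actions that move a simulated agent closer to its goal gradually acquire higher values.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 5                                       # a 5 x 5 world with the goal in one corner
GOAL = (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # the four possible moves
Q = np.zeros((GRID, GRID, len(ACTIONS)))       # a value for each action at each location

alpha, gamma, epsilon = 0.1, 0.9, 0.2
for episode in range(500):
    x, y = 0, 0
    while (x, y) != GOAL:
        # Mostly take the highest-valued action, sometimes explore at random.
        a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(np.argmax(Q[x, y]))
        dx, dy = ACTIONS[a]
        nx = min(max(x + dx, 0), GRID - 1)
        ny = min(max(y + dy, 0), GRID - 1)
        reward = 1.0 if (nx, ny) == GOAL else 0.0
        # Pull this action's value toward the reward plus the best value available next.
        Q[x, y, a] += alpha * (reward + gamma * np.max(Q[nx, ny]) - Q[x, y, a])
        x, y = nx, ny

# After training, the preferred action at the start should point toward the goal.
print(ACTIONS[int(np.argmax(Q[0, 0]))])
```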

Unlike in the navigation systems developed by the brain, the place cell layer in this artificial network is not changed during the training that affects grid cells. But the way that grid and place cells influence each other in the brain is not well understood. Further development of the artificial network might help unravel their interaction.

From a broader perspective, it is interesting that the network, starting from very general computational assumptions that do not take into account specific biological mechanisms, found a solution to path integration that seems similar to the brain’s. That the network converged on such a solution is compelling evidence that there is something special about grid cells’ activity patterns that supports path integration. The black-box character of deep learning systems, however, means that it might be hard to determine what that something is.

There is clear pragmatic promise in this research, involving both AI and its many applications, as well as cognitive neuroscience. But I find it striking for a different reason. I find it striking because it seems to provide something new, and provocative, about mathematics’ ubiquitous presence. When I first learned about the action of grid cells, I was impressed with the way this fully biological, unconscious, cognitive mechanism resembled the abstract coordinate systems in mathematics. But here there is an interesting reversal. Here we see the biological one emerging, without our direction, from a system that owes its existence entirely to mathematics. It puts mathematics somewhere in between everything, in a way that we haven’t quite grasped. It’s intelligence we can’t locate.
