
Outer and Inner Limits of the Brain (or the body)

A recent Scientific American article on the physical limits of intelligence, with its intriguing analysis of neural mechanisms, raised more questions for me than it answered. The point of the article is to consider whether it may be physically impossible for humanity to become more ‘intelligent’ through further evolution. I think we would all agree that intelligence is a pretty slippery concept, and I find it very unlikely that we could actually evaluate its future. The way mathematics cracks open conceptual as well as observable possibilities in our experience introduces another consideration. This is a kind of mental work that I don’t think could be measured by memory or even calculation skills. I was captivated, however, by what I believe is the real value of this work – specifically, the detail of brain function we have been able to discern (energy requirements, how signals are generated, the distances signals travel – all given particular body-to-brain ratios and the shared demands of the rest of the body). Perhaps the crux of the argument for our limited future is captured in these paragraphs:

IF COMMUNICATION BETWEEN NEURONS, and between brain areas, is really a major bottleneck that limits intelligence, then evolving neurons that are even smaller (and closer together, with faster communication) should yield smarter brains. Similarly, brains might become more efficient by evolving axons that can carry signals faster over longer distances without getting thicker. But something prevents animals from shrinking neurons and axons beyond a certain point. You might call it the mother of all limitations: the proteins that neurons use to generate electrical pulses, called ion channels, are inherently unreliable.

Ion channels are tiny valves that open and close through changes in their molecular folding. When they open, they allow ions of sodium, potassium or calcium to flow across cell membranes, producing the electrical signals by which neurons communicate. But being so minuscule, ion channels can get flipped open or closed by mere thermal vibrations. A simple biology experiment lays the defect bare. Isolate a single ion channel on the surface of a nerve cell using a microscopic glass tube, sort of like slipping a glass cup over a single ant on a sidewalk. When you adjust the voltage on the ion channel–a maneuver that causes it to open or close–the ion channel does not flip on and off reliably like your kitchen light does. Instead it flutters on and off randomly. Sometimes it does not open at all; other times it opens when it should not. By changing the voltage, all you do is change the likelihood that it opens.

I find this particular detail fascinating. Our metaphors for the electrochemical activity in the brain are usually switch-like. ‘Fluttering on and off randomly’ is not what I would have expected to hear. The accidental opening of an ion channel can cause an axon to deliver an unintended signal, and smaller, more closely packed neurons could push this accidental firing too far:

In a pair of papers published in 2005 and 2007, Laughlin and his collaborators calculated whether the need to include enough ion channels limits how small axons can be made. The results were startling. “When axons got to be about 150 to 200 nanometers in diameter, they became impossibly noisy,” Laughlin says. At that point, an axon contains so few ion channels that the accidental opening of a single channel can spur the axon to deliver a signal even though the neuron did not intend to fire [see box on page 41]. The brain’s smallest axons probably already hiccup out about six of these accidental spikes per second. Shrink them just a little bit more, and they would blather out more than 100 per second. “Cortical gray matter neurons are working with axons that are pretty close to the physical limit,” Laughlin concludes.
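As an aside for readers who like to tinker, the ‘fluttering’ described above can be mimicked with a very simple two-state model of a single channel, in which voltage sets only the probability that the channel is open, never its state. The Python sketch below is mine, not the researchers’; the voltages, rates, and the shape of the opening curve are illustrative assumptions chosen just to make the randomness visible.

import math
import random

def p_open(voltage_mv, v_half=-40.0, slope=5.0):
    # Steady-state probability that the channel is open at a given voltage.
    # Boltzmann-style curve; v_half and slope are illustrative, not measured values.
    return 1.0 / (1.0 + math.exp(-(voltage_mv - v_half) / slope))

def simulate_channel(voltage_mv, steps=10000, dt_ms=0.1, rate_per_ms=1.0):
    # Two-state (closed/open) Markov chain.  Transition probabilities are chosen
    # so the long-run fraction of time spent open matches p_open(voltage_mv).
    # Returns the observed open fraction and the number of random flips.
    p = p_open(voltage_mv)
    open_now = False
    open_steps = 0
    flips = 0
    for _ in range(steps):
        if open_now and random.random() < (1.0 - p) * rate_per_ms * dt_ms:
            open_now = False   # closed by a random (thermal) flip
            flips += 1
        elif not open_now and random.random() < p * rate_per_ms * dt_ms:
            open_now = True    # opened by a random (thermal) flip
            flips += 1
        open_steps += open_now
    return open_steps / steps, flips

for v in (-70.0, -40.0, -10.0):
    frac, flips = simulate_channel(v)
    print(f"V = {v:5.1f} mV: open {frac:4.0%} of the time, {flips} random flips")

Running it at a few voltages shows the open fraction climbing with depolarization while the flickering never stops, which is essentially the behavior Laughlin’s calculations push to its limit: with only a handful of channels in a very thin axon, one of these random openings is occasionally enough to trigger a spike on its own.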

The article goes on to explain that this kind of problem is not unique to biology; engineers face similar limitations with transmission technologies. But unlike engineers, who can go back to the drawing board, “evolution cannot start from scratch: it has to work within the scheme and with the parts that have existed for half a billion years…”

Having said that, the author concludes that, as happens with social insects, our communal intelligence (enhanced first by print and now by the electronic sharing of memory and experience) may be a better way for the human mind to expand.

I think it’s worth bringing attention back to the ‘neuronal recycling’ hypothesis proposed by Stanislas Dehaene, described in detail in this pdf. The paper pays particular attention to reading and arithmetic and makes the following claim:

I conclude the paper by tentatively proposing the “neuronal recycling” hypothesis: the human capacity for cultural learning relies on a process of pre-empting or recycling preexisting brain circuitry. According to this third view, the architecture of the human brain is limited and shares many traits with other non-human primates. It is laid down under tight genetic constraints, yet with a fringe of variability. I postulate that cultural acquisitions are only possible insofar as they fit within this fringe, by reconverting pre-existing cerebral predispositions for another use. Accordingly, cultural plasticity is not unlimited, and all cultural inventions should be based on the pre-emption of pre-existing evolutionary adaptations of the human brain. It thus becomes important to consider what may be the evolutionary precursors of reading and arithmetic.

That the conceptual possibilities of mathematics have given us a way to see the quantum mechanical substructure of our universe can only be understood as some exploitation of our present-day brain structure. I would hazard the guess that the speed of transmitted signals will do little to reveal how we have accomplished this completely abstract vision of our world. Please feel free to let me know what you think.
