Dædalus

An open access publication of the American Academy of Arts & Sciences
Spring 2003

How the brain keeps time

Authors
Jennifer M. Groh and Michael Saunders Gazzaniga

Jennifer M. Groh, an assistant professor at the Center for Cognitive Neuroscience and in the Department of Psychological and Brain Sciences at Dartmouth College, studies how the brain computes. She is the author or co-author of numerous articles, which have appeared in such journals as Neuron, Current Biology, and The Journal of Neurophysiology.

Michael S. Gazzaniga, a Fellow of the American Academy of Arts and Sciences since 1997, is David T. McLaughlin Distinguished Professor at Dartmouth College and director of the Center for Cognitive Neuroscience. His research concerns how the brain enables mind. He is the author of The Mind’s Past (1998) and the editor of The New Cognitive Neurosciences (2000).

One of the keys to playing the piano – or at least to playing it well – is the pianist’s ability to appropriately time a sequence of finger movements. How does the brain coordinate and synchronize such complex movements? Clearly, the task is not a simple one – as anyone who has watched a baby struggling to walk can attest.

As it happens, the timing of complex tasks poses a dilemma not just for brains, but also for computers. Neither brains nor computers are clocks – yet both must somehow ‘keep’ time in order to operate properly in the face of various forms of delay in processing information.

Computer scientists and electrical engineers have developed several different algorithms to allow computing devices and networks of computing devices to coordinate complex tasks. By exploring how these man-made systems manage to synchronize their operations, and then comparing how computers and brains solve analogous problems, we may gain insight into some of the solutions that evolution has conceived to enable babies to crawl – and pianists to play the most devilishly difficult of Chopin’s Études.

The microprocessors at the heart of computers employ sets of tiny transistors in silicon chips to represent information. These transistors are wired up in pairs to convey values of either 0 or 1; other quantities are encoded by grouping together multiple such pairs, or bits, and by representing numbers by their base-two decomposition into 1’s and 0’s.
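
To make this concrete, here is a minimal sketch in Python of the base-two decomposition just described; the eight-bit width and the function names are illustrative choices, not anything from the original essay:

```python
# Base-two decomposition: a group of bits encodes a number as 1's and 0's.
def to_bits(n: int, width: int = 8) -> list[int]:
    """Decompose a non-negative integer into binary digits, most significant first."""
    return [(n >> i) & 1 for i in range(width - 1, -1, -1)]

def from_bits(bits: list[int]) -> int:
    """Reassemble the integer from its 1's and 0's."""
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

print(to_bits(42))             # [0, 0, 1, 0, 1, 0, 1, 0]
print(from_bits(to_bits(42)))  # 42
```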

In contrast, neurons represent information not in sets of 1’s and 0’s but in trains of electrical pulses known as action potentials. Each action potential is roughly the same size and shape, so the action potential itself contains little information. Rather, the rate at which these action potentials occur is thought to be the medium for carrying information through the nervous system. Discharge rates can vary continuously, with information being conveyed by the length of the intervals between identical action potentials, rather than by discrete ‘on’ and ‘off’ states. In short, the brain’s internal language differs from that of computers in two key respects: the code is analog, not digital, and the vocabulary of that code is time, rather than voltage.
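
A toy calculation illustrates the contrast. The spike times below are invented for illustration; the point is that the message lives in the intervals between identical pulses:

```python
# Rate coding: identical action potentials carry information in their timing.
spike_times = [0.010, 0.022, 0.031, 0.045, 0.060, 0.071]  # seconds (illustrative)

# The intervals between spikes are the real vocabulary of the code.
intervals = [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]

# Mean discharge rate over the observation window, in spikes per second.
rate = (len(spike_times) - 1) / (spike_times[-1] - spike_times[0])

print("intervals (s):", [round(i, 3) for i in intervals])
print(f"mean rate: {rate:.0f} spikes/s")
```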

Generally speaking, this neural code seems to operate on a timescale in the 1–100 millisecond range. Action potentials last about 1 millisecond each, so the fastest rate for a train of action potentials is limited to about a thousand action potentials per second. In practice, neurons rarely ‘fire’ at such high rates for more than a few action potentials in a row; more commonly, discharge rates range up to the low hundreds of action potentials per second.

Both computers and brains routinely experience transmission delays, although at wildly different orders of magnitude. Electricity travels along wires at roughly the speed of light (300,000 km/s). Thus the delays to travel the full length of, say, a standard computer card are on the order of nanoseconds.

Although neural action potentials are also electrical in nature, they do not propagate along neural tissue at anything close to the speed that electricity travels along copper wire. Neural ‘wires’ are axons, which are essentially long, leaky tubes of fluid attached at one end to the main part of the neuron – the cell body. At the other end, axons form connections, known as synapses, with recipient neurons. Because axons are much worse at conducting electricity than copper wires, action potentials decay in size as they move down the axon. They would generally die out completely before reaching the next neuron in the chain were it not for a special active process that boosts the action potential back up to its original size periodically at a series of relay stations along the axon. This regeneration process introduces a brief delay at each node. How far the action potentials can travel before they need to be regenerated depends on the diameter of the axon (thicker is better) and whether the axon is insulated with myelin. Resulting conduction speeds range from about 0.5 to 100 meters per second – fast, but not as fast as copper wire.
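
The gap in conduction speed is easy to quantify. Here is a back-of-the-envelope comparison using the figures just quoted; the one-meter distance is an arbitrary choice for illustration:

```python
# Transmission delay = distance / conduction speed.
def delay_ms(distance_m: float, speed_m_per_s: float) -> float:
    return distance_m / speed_m_per_s * 1000.0

conductors = {
    "copper wire (~speed of light)": 3.0e8,  # m/s
    "fast myelinated axon": 100.0,           # m/s
    "slow unmyelinated axon": 0.5,           # m/s
}

# Delay to cover one meter -- roughly the scale of a long human axon.
for name, speed in conductors.items():
    print(f"{name:30s} {delay_ms(1.0, speed):12.6f} ms per meter")
```

The wire comes in at a few nanoseconds per meter; the slowest axon needs a couple of seconds to cover the same distance.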

An additional factor contributing to transmission delays in computers and brains involves how fast the computing elements can respond to their inputs. The transistors in today’s computers can switch between on and off within about 5 nanoseconds of receiving an input signal pulse. How long it takes a neuron to generate an action potential varies, depending on how strong the input signal is and how this input signal is being generated and transmitted. In the retina, the generation of neural activity in response to light takes on the order of tens of milliseconds, in part because the precipitating event – the absorption of a photon by a molecule of photopigment in a light-sensitive neuron – is so small that a process of biochemical amplification is needed to convert this event into an electrical signal. In contrast, the air pressure waves of a sound physically jostle the neurons of the inner ear, causing pores in the cell membrane to stretch open or be squashed shut. The alteration in the flow of charged ions through these pores then creates an electrical signal. The resulting response latency of the auditory nerve can be on the order of a few milliseconds or less.

Additional delays can be introduced at the synaptic connections between neurons. The brain’s synapses come in two basic flavors – electrical and chemical. At electrical synapses known as gap junctions, the pre- and post-synaptic neurons are physically fused with one another, and the electrical current can pass directly between them very quickly. At chemical synapses, the arrival of an action potential triggers the release of chemicals known as neurotransmitters into a small space between the axon and the next neuron in the chain. The neurotransmitters diffuse across and bind to specialized receptors on the other side of the gap. These receptors then cause an electrical response in the recipient neuron. At excitatory synapses, the postsynaptic electrical response can help trigger a full-blown action potential, whereas at inhibitory synapses the postsynaptic electrical response serves to impede the production of any action potentials that might otherwise be triggered from one of the neuron’s other synapses. The whole process can take between 1 and 5 milliseconds.
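
Putting these pieces together, one can tally the delays that accumulate along a hypothetical multi-neuron pathway. The stage values below are drawn from the ranges quoted above but are otherwise arbitrary:

```python
# A toy tally of cumulative delay along a multi-neuron chain (values are
# illustrative picks from the ranges discussed in the text).
stages = [
    ("sensory transduction",              3.0),                 # ms
    ("axon conduction: 0.10 m @ 50 m/s",  0.10 / 50 * 1000.0),  # = 2 ms
    ("chemical synapse",                  2.0),                 # 1-5 ms range
    ("axon conduction: 0.05 m @ 50 m/s",  0.05 / 50 * 1000.0),  # = 1 ms
    ("chemical synapse",                  2.0),
]

total = 0.0
for name, ms in stages:
    total += ms
    print(f"{name:35s} {ms:5.2f} ms   (running total {total:5.2f} ms)")
```

Even this short chain accumulates about ten milliseconds, millions of times the delay across a computer card.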

In short, brains appear to lumber along when compared with contemporary computers. Yet even computers cannot solve the problem of synchronizing operations simply by being fast.

To ensure that operations proceed in the desired sequence, most modern microprocessors employ a central clock that distributes a timing pulse so that each circuit marches to the same beat. This allows the output of any one circuit to provide the input to any other circuit in the next time step.1 The clock rate must be slow enough to allow the slowest operations to be completed before the next set of operations begins. The solution works because modern microprocessors have a comfortable margin of speed, and waiting for computational stragglers does not pose a major problem.
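
A small sketch makes the logic of a clocked design plain: the slowest stage dictates the period, and every faster circuit simply waits. The stage names and latencies here are invented:

```python
# Synchronous coordination: one clock paces every circuit, so its period
# must accommodate the slowest stage (latencies in ns, invented values).
stage_latency_ns = {"fetch": 0.4, "decode": 0.3, "execute": 0.9, "write": 0.5}

clock_period_ns = max(stage_latency_ns.values())  # slowest stage sets the beat
clock_rate_ghz = 1.0 / clock_period_ns

print(f"clock period: {clock_period_ns} ns  ->  {clock_rate_ghz:.2f} GHz")
for stage, latency in stage_latency_ns.items():
    print(f"  {stage:8s} finishes early and idles {clock_period_ns - latency:.1f} ns per cycle")
```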

Given the brain’s comparative sluggishness, it seems unlikely that a synchronizing clock signal could work for it in the same fashion. Simply conveying this signal from a common source to distant regions of the brain could take many tens of milliseconds, and the size of this delay would vary substantially depending on how far the signal had to travel, with neurons located near the clock center receiving the timing pulse much sooner than more distant neurons. This would be a bit like trying to run a conference call via Pony Express, so it’s hard to see how synchronized computations could result. What, then, might be the solution?

The answer may be more analogous to an alternative method of coordinating operations known as asynchronous computing. This method has long been used by networks of computers, and recent research in computer science has focused on applying the technique at the level of chip design for the next generation of microprocessors.2 When the elements of an asynchronous computer system exchange information, they use feedback to ensure that the message has been received, much like a conversation in which the listener acknowledges the speaker by nodding his head. If the sender fails to obtain confirmation that a particular message has been received, that message is resent. This method is flexible, allowing for messages to be exchanged either quickly or slowly in any given instance, and it works well in contexts like the Internet in which the speed of the operations can vary over a large range of time delays – exactly the situation faced by the brain.
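
A minimal sketch of this handshake, in the style of a stop-and-wait protocol, follows; the messages and the 30 percent loss rate are invented for illustration:

```python
import random

# Asynchronous, handshake-based coordination: the sender buffers each
# message and retransmits until an acknowledgement arrives.
random.seed(1)

def unreliable_channel(message: str) -> bool:
    """Deliver the message; fail (no acknowledgement) 30% of the time."""
    return random.random() > 0.3

def send_with_ack(message: str, max_tries: int = 10) -> int:
    """Hold the message and resend until delivery is confirmed."""
    for attempt in range(1, max_tries + 1):
        if unreliable_channel(message):
            return attempt  # acknowledged: the message can now be discarded
    raise TimeoutError(f"no acknowledgement for {message!r}")

for msg in ["note C", "note E", "note G"]:
    print(f"{msg!r} delivered after {send_with_ack(msg)} attempt(s)")
```

Note that nothing in the scheme depends on how long any single exchange takes; fast and slow deliveries coexist gracefully.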

What features would be needed for a biological computer consisting of neurons to implement asynchronous coordination? Several critical features come to mind. First, the neural ‘hardware’ for delivering feedback should exist, and second, neural signals should show temporal profiles that are appropriate for asynchronous coordination.

The neural wires that could serve to provide the feedback necessary for asynchronous coordination exist in abundance. A neuroanatomical rule of thumb holds that every connection between brain areas is bidirectional, meaning that some neurons will send information from area A to area B and that others will send information from area B to area A. There are exceptions to this rule of course, but it provides a good general sense of the extensive interconnectedness of the brain.

Connections that proceed from the sensory periphery to the higher-order areas of the brain that are implicated in more complex processing are known as ascending or feedforward projections. Connections running in the opposite direction are known as descending or feedback projections. Even within a brain area, neurons are heavily interconnected with one another, potentially forming feedback loops at the local level. Identifying the specific roles of these connections has proved tricky, because the activity of an individual neuron is the complicated product of all of its inputs, and dissociating some sources of inputs from others is difficult. But it is certainly possible that part of the role of these reciprocal connections is to acknowledge receipt of incoming messages.

To transmit information ‘return receipt requested’ implies that if the message is not received, it ought to be resent. This in turn means that the message must be either continually broadcast or stored for later retransmission until the acknowledgement is received, at which point the message must be deleted.

Asynchronous coordination, then, calls for a kind of working memory operating at the neural level. Specifically, it requires an ability to sustain a pattern of neural activity for an arbitrary period of time, following the cessation of a sensory input signal. As it happens, this pattern of neural activity – known as delay period activity – has been identified in a variety of areas of the brain thought to be involved in remembering things for short periods of time.

Consider, for example, the simple behavior of looking at fireflies on a dark summer evening. The firefly’s light is only visible for an instant. In the fraction of a second that it takes to plan and execute an eye movement to that spot, the light is often gone. But neurons in the brain have been found to maintain the signal of where the light was located even after the light has disappeared.3 This sustained activity lasts until an eye movement is made to the remembered location, and then it ceases, as if the signal has served its purpose and can be discarded when it is no longer needed.

One of the fascinating things about delay period activity in neurons is that no one knows how it arises. Computer transistor pairs are specifically designed to have ‘state’: they maintain their value of 0 or 1 until instructed otherwise. However, if you dissected a typical neuron out of the brain, put it in a Petri dish, and activated it using a stimulating electrode, you would find that its firing pattern generally tracked that of the input pulses you delivered – it would not keep firing for very long after the input train ceased. Thus, delay period activity appears to reflect a specialization of some neurons or circuits of neurons. In fact, the feedback pathways described above could well play a role in creating and controlling this delay period activity – a volley of pulses will reverberate around a positive feedback loop, causing a sustained activity pattern in response to a transient input.
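
One conventional way to capture this idea in a simulation is a firing-rate unit whose output feeds back onto itself; when the feedback exactly balances the unit’s decay, a brief input leaves a sustained trace. The sketch below is a toy illustration under those assumptions, not a model drawn from the studies cited here:

```python
# A toy rate model of delay-period activity: with self-excitation exactly
# balancing decay (w = 1), a transient input leaves a sustained trace.
dt, tau = 1.0, 10.0   # time step and membrane time constant, ms
w = 1.0               # strength of the positive feedback loop

rate = 0.0
for t in range(100):
    inp = 5.0 if 10 <= t < 20 else 0.0           # brief input pulse
    rate += dt / tau * (-rate + inp + w * rate)  # feedback re-injects output
    if t % 20 == 19:
        print(f"t = {t + 1:3d} ms   rate = {rate:5.2f}")
```

With a feedback weight below one the trace decays away, and with a weight above one the activity grows without bound, so sustaining a memory this way demands finely tuned feedback.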

Saving a message until its delivery has been assured is vital to asynchronous coordination, but it is not the only thing that is crucial. Deletion of delivered messages is also critical. Possible fingerprints of message deletion are observable in another ubiquitous property of neural response profiles, namely, the tendency to respond to a sustained input with a transient change in discharge rate. Neurons in the visual, auditory, and somatosensory pathways frequently respond most vigorously to a sensory event right at the beginning of that event, and then the response rapidly decays. The resulting brevity of the neural response may help keep one message from stepping on other messages as it is sent up to higher brain areas. Many mechanisms might account for this pattern, including the possibility that inhibitory feedback from these higher areas serves to indicate that the message has been received.
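
The same kind of toy model produces the opposite behavior when the feedback is slow and suppressive rather than excitatory: a sustained input evokes a vigorous onset response that then sags. All parameters here are again invented:

```python
# A toy model of response truncation: slow negative feedback (adaptation,
# or inhibition fed back from a higher area) trims a sustained input down
# to a vigorous onset transient (parameters are illustrative).
dt, tau_r, tau_a = 1.0, 5.0, 30.0  # time constants in ms
rate, adapt = 0.0, 0.0
for t in range(120):
    inp = 10.0 if t >= 10 else 0.0               # sustained step input
    rate += dt / tau_r * (-rate + inp - adapt)   # fast excitatory drive
    adapt += dt / tau_a * (-adapt + rate)        # slow suppressive feedback
    if t in (12, 20, 40, 110):
        print(f"t = {t:3d} ms   rate = {rate:5.2f}")
```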

Sustaining a brief input signal and truncating a prolonged one are flip sides of the same coin. Both are necessary to give the brain control over the duration of its activity patterns. Truncation can help keep separate signals from coinciding when they converge on higher brain areas. Sustaining brief signals gives neurons ‘state,’ so that they can hold a bit of information until recipient circuits are ready to act on it. Saving a bit of information until it can be responded to, and then deleting it so that it is not responded to twice, are both critical aspects of asynchronous coordination.

What happens if the temporal coordination of neural activity goes awry?


Let us return to the example of making eye movements to the remembered location of a visual stimulus. Suppose that the memory trace of the visual stimulus is not discarded after the eye movement has been made. Scientists can artificially create this scenario using a technique known as microstimulation. This technique, pioneered in the 1950s by Wilder Penfield in patients undergoing surgery for intractable epilepsy, involves activating a population of neurons in vivo using a stimulating electrode. A sustained pattern of neural activity, potentially mimicking a memory trace for the location of a visual stimulus, can be evoked by delivering a sustained train of microstimulation pulses to one of the areas of the brain in which neurons normally show sustained activity pending an eye movement to the remembered location of a real visual stimulus. This stimulation typically triggers an eye movement. But if the train of microstimulation is turned off too soon, the eye movement either doesn’t occur at all, or it falls short of the intended target.4 If the train of pulses is left on too long, a second, and then a third, eye movement is made. In other words, continuing to broadcast the eye movement command signal even after the movement has been executed once produces repeated iterations of the same movement.5

Disorders of timing may underlie or at least contribute to the symptoms of a variety of naturally occurring neurological syndromes, such as those diseases that manifest themselves as some kind of motor impairment. For example, multiple sclerosis involves the progressive destruction of the myelin that insulates neural axons. This loss of insulation results in slower conduction of action potentials along axons. Early signs of MS include clumsiness, as it becomes difficult to coordinate movements when transmission delays get out of whack.

Disorders of movement provide the most obvious window into the critical role of timing in producing properly ordered computations, because failures of coordination are readily apparent when physical actions are involved. Disorders of timing on the sensory end may be equally disruptive. Indeed, impairments in processing the temporal sequence of sensory information are currently thought to contribute to dyslexia. To conceive of these disorders as relating to deficits in the brain’s ability to synchronize its computations does not necessarily shed light on what went wrong to trigger a particular disease or condition, but it may help illuminate the constellation of symptoms that can result.

Perhaps both the synchronizing-clock and asynchronous-computing algorithms described here are used in the brain in different contexts, as is the case for man-made computing systems. Or perhaps the brain uses a wholly different method that we have yet to imagine.

Whatever the mechanism that the brain employs to synchronize its operations, when it all works swimmingly, the results are astounding. The product is the effortless integration of myriad sensory signals into coherent thought and graceful physical action. And while most of us do not achieve the level of manual dexterity needed to play piano in Carnegie Hall, we do think, walk, and talk – although not necessarily in that order and not always at the same time. And this, given the tools that the brain has to work with, is nothing short of a miracle.6

ENDNOTES

1 I. E. Sutherland and J. Ebergen, “Computers without clocks: asynchronous chips improve computer performance by letting each circuit run as fast as it can,” Scientific American (August 2002).

2 Ibid.

3 L. E. Mays and D. L. Sparks, “Dissociation of visual and saccade-related responses in superior colliculus neurons,” Journal of Neurophysiology 43 (1980): 207–232; J. W. Gnadt and R. A. Andersen, “Memory related motor planning activity in posterior parietal cortex of macaque,” Experimental Brain Research 70 (1988): 216–220; C. J. Bruce and M. E. Goldberg, “Primate frontal eye fields. III. Maintenance of a spatially accurate saccade signal,” Journal of Neurophysiology 64 (1990): 489–508; R. Levy and P. Goldman-Rakic, “Segregation of working memory functions within the dorsolateral prefrontal cortex,” Experimental Brain Research 133 (2000): 23–32.

4 T. R. Stanford, E. G. Freedman, and D. L. Sparks, “Site and parameters of microstimulation: evidence for independent effects on the properties of saccades evoked from the primate superior colliculus,” Journal of Neurophysiology 76 (1996): 3360–3381.

5 D. A. Robinson, “Eye movements evoked by collicular stimulation in the alert monkey,” Vision Research 12 (1972): 1795–1807; P. H. Schiller and M. Stryker, “Single-unit recording and stimulation in superior colliculus of the alert rhesus monkey,” Journal of Neurophysiology 35 (1972): 915–924.

6 We are indebted to B. R. Donald, K. N. Dunbar, H. Farid, C. R. Gallistel, S. T. Grafton, A. M. Groh, and M. N. Shadlen for their helpful comments on an earlier version of this manuscript. We are grateful to the following sources for providing financial support: the Alfred P. Sloan Foundation (JMG), the McKnight Endowment Fund for Neuroscience (JMG), the Whitehall Foundation (JMG), the John Merck Scholars Program (JMG), the Office of Naval Research Young Investigator Program (JMG), the EJLB Foundation (JMG), The Nelson A. Rockefeller Center at Dartmouth (JMG), and NIH NS 17778-19 (MSG and JMG).