Wednesday, July 1, 2009

Disorderly genius: How chaos drives the brain

HAVE you ever experienced that eerie feeling of a thought popping into your head as if from nowhere, with no clue as to why you had that particular idea at that particular time? You may think that such fleeting thoughts, however random they seem, must be the product of predictable and rational processes. After all, the brain cannot be random, can it? Surely it processes information using ordered, logical operations, like a powerful computer?

Actually, no. In reality, your brain operates on the edge of chaos. Though much of the time it runs in an orderly and stable way, every now and again it suddenly and unpredictably lurches into a blizzard of noise.

Neuroscientists have long suspected as much. Only recently, however, have they come up with proof that brains work this way. Now they are trying to work out why. Some believe that near-chaotic states may be crucial to memory, and could explain why some people are smarter than others.

In technical terms, systems on the edge of chaos are said to be in a state of "self-organised criticality". These systems are right on the boundary between stable, orderly behaviour - such as a swinging pendulum - and the unpredictable world of chaos, as exemplified by turbulence.

The quintessential example of self-organised criticality is a growing sand pile. As grains build up, the pile grows in a predictable way until, suddenly and without warning, it hits a critical point and collapses. These "sand avalanches" occur spontaneously and are almost impossible to predict, so the system is said to be both critical and self-organising. Earthquakes, avalanches and wildfires are also thought to behave like this, with periods of stability followed by catastrophic periods of instability that rearrange the system into a new, temporarily stable state.
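To get a feel for this, here is a minimal sketch of the classic Bak-Tang-Wiesenfeld sandpile model in Python; the grid size and toppling threshold are illustrative choices, not values from any study mentioned here:

```python
import random

# A toy sandpile in the spirit of the example above. When a site holds
# THRESHOLD grains or more, it topples, pushing one grain to each
# neighbour, which can topple in turn - an avalanche.
SIZE, THRESHOLD = 20, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def topple(i, j):
    """Spread grains from an over-full site; return the avalanche size."""
    avalanche, unstable = 0, [(i, j)]
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < THRESHOLD:
            continue
        grid[x][y] -= THRESHOLD          # four grains tumble to the neighbours
        avalanche += 1
        if grid[x][y] >= THRESHOLD:      # still over-full: topple again later
            unstable.append((x, y))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < SIZE and 0 <= ny < SIZE:   # edge grains fall off the pile
                grid[nx][ny] += 1
                if grid[nx][ny] >= THRESHOLD:
                    unstable.append((nx, ny))
    return avalanche

# Drop grains one at a time. Most drops do nothing; occasionally a single
# grain sets off a cascade that sweeps much of the pile.
sizes = []
for _ in range(50000):
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1
    if grid[i][j] >= THRESHOLD:
        sizes.append(topple(i, j))
print(max(sizes), sum(sizes) / len(sizes))   # a few huge avalanches, many tiny ones
```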

Self-organised criticality has another defining feature: even though individual sand avalanches are impossible to predict, their overall distribution is regular. The avalanches are "scale invariant", meaning that avalanches of every size occur, with no typical or characteristic size. They also follow a "power law" distribution: bigger avalanches happen less often than smaller ones, according to a strict mathematical ratio. Earthquakes offer the best real-world example. Quakes of magnitude 5.0 on the Richter scale happen 10 times as often as quakes of magnitude 6.0, and 100 times as often as quakes of magnitude 7.0.
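That scale invariance is easy to check on paper: if event sizes follow a power law, the relative frequency of events of size s and size 2s is the same whatever s is. A quick sketch, with an exponent picked purely for illustration:

```python
# If P(s) is proportional to s**-alpha, then events of size s are
# (2s/s)**alpha = 2**alpha times commoner than events of size 2s,
# at every scale. The exponent is an arbitrary illustration, not a
# measured value for the brain.
alpha = 1.5

for s in (10, 100, 1000, 10000):
    ratio = s**-alpha / (2 * s)**-alpha
    print(s, round(ratio, 3))   # the same ratio (2**alpha, about 2.828) every time
```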

These are purely physical systems, but the brain has much in common with them. Networks of brain cells alternate between periods of calm and periods of instability - "avalanches" of electrical activity that cascade through the neurons. As with real avalanches, exactly when and how these cascades occur - and the state the brain ends up in afterwards - is unpredictable.

It might seem precarious to have a brain that plunges randomly into periods of instability, but the disorder is actually essential to the brain's ability to transmit information and solve problems. "Lying at the critical point allows the brain to rapidly adapt to new circumstances," says Andreas Meyer-Lindenberg from the Central Institute of Mental Health in Mannheim, Germany.

The idea that the brain might be fundamentally disordered in some way first emerged in the late 1980s, when physicists working on chaos theory - then a relatively new branch of science - suggested it might help explain how the brain works.

The focus at that time was something called deterministic chaos, in which a small perturbation can lead to a huge change in the system - the famous "butterfly effect". That would make the brain unpredictable but not actually random, because the butterfly effect is a phenomenon of physical laws that do not depend on chance. Researchers built elaborate computational models to test the idea, but unfortunately they did not behave like real brains. "Although the results were beautiful and elegant, models based on deterministic chaos just didn't seem applicable when looking at the human brain," says Karl Friston, a neuroscientist at University College London.

In the 1990s, it emerged that the brain generates random noise, and hence cannot be described by deterministic chaos. When neuroscientists incorporated this randomness into their models, they found that it created systems on the border between order and disorder - self-organised criticality.

More recently, experiments have confirmed that these models accurately describe what real brain tissue does. They build on the observation that when a single neuron fires, it can trigger its neighbours to fire too, causing a cascade or avalanche of activity that can propagate across small networks of brain cells. This results in alternating periods of quiescence and activity - remarkably like the build-up and collapse of a sand pile.
Neural avalanches

In 2003, John Beggs of Indiana University in Bloomington began investigating spontaneous electrical activity in thin slices of rat brain tissue. He found that these neural avalanches are scale invariant and that their size obeys a power law. Importantly, the ratio of large to small avalanches fitted the predictions of the computational models that had first suggested the brain might be in a state of self-organised criticality (The Journal of Neuroscience, vol 23, p 11167).

To investigate further, Beggs's team measured how many other neurons a single cell in a slice of rat brain activates, on average, when it fires. They followed this line of enquiry because another property of self-organised criticality is that each event, on average, triggers only one other. In forest fires, for example, each burning tree sets alight one other tree on average - that's why fires keep going, but also why whole forests don't catch fire all at once.

Sure enough, the team found that each neuron triggered on average only one other. A value much greater than one would lead to a chaotic system, because any small perturbations in the electrical activity would soon be amplified, as in the butterfly effect. "It would be the equivalent of an epileptic seizure," says Beggs. If the value was much lower than one, on the other hand, the avalanche would soon die out.
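A generic branching-process simulation - a sketch, not Beggs's actual analysis - shows why a branching ratio of exactly one is so special; every parameter below is an illustrative assumption:

```python
import random

# Each firing neuron can excite two downstream neurons, each with
# probability sigma / 2, so it triggers sigma others on average.
# The size cap and trial counts are arbitrary illustrative choices.
def avalanche_size(sigma, cap=10000):
    active, total = 1, 0
    while active and total < cap:
        total += active
        successors = 0
        for _ in range(active):
            for _ in range(2):                 # two potential downstream targets
                if random.random() < sigma / 2:
                    successors += 1
        active = successors
    return total

for sigma in (0.5, 1.0, 1.5):
    sizes = [avalanche_size(sigma) for _ in range(100)]
    print(sigma, max(sizes))
# sigma < 1: avalanches fizzle out; sigma > 1: they explode towards the cap,
# the "epileptic seizure" regime; sigma = 1: sizes from tiny to huge all occur.
```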

Beggs's work provides good evidence that self-organised criticality is important at the level of small networks of neurons. But what about the brain as a whole? It has since become clear that large-scale brain activity bears the same hallmarks.

As it processes information, the brain often synchronises large groups of neurons to fire at the same frequency, a process called "phase-locking". Like broadcasting different radio stations at different frequencies, this allows different "task forces" of neurons to communicate among themselves without interference from others.

The brain also constantly reorganises its task forces, so the stable periods of phase-locking are interspersed with unstable periods in which the neurons fire out of sync in a blizzard of activity. This, again, is reminiscent of a sand pile. Could it be another example of self-organised criticality in the brain?

In 2006, Meyer-Lindenberg and his team took a first stab at answering that question. They used brain scans to map the connections between regions of the human brain and discovered that they form a "small-world network" - exactly the right architecture to support self-organised criticality.

Small-world networks lie somewhere between regular networks, where each node is connected to its nearest neighbours, and random networks, which have no regular structure but many long-distance connections between nodes at opposite sides of the network. Small-world networks take the most useful aspects of both systems. In places, the nodes have many connections with their neighbours, but the network also contains random and often long links between nodes that are very far away from one another.

For the brain, it's the perfect compromise. One of the characteristics of small-world networks is that any node can reach any other through just a few intermediate links - the "six degrees of separation" reputed to link any two people in the world. In the brain, the number is 13.
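The effect of those long-range shortcuts is easy to demonstrate with the standard Watts-Strogatz construction - a sketch assuming the networkx library, with node counts chosen for illustration rather than taken from the brain-imaging work:

```python
import networkx as nx

# Watts-Strogatz: start from a regular ring where each node links to its
# k nearest neighbours, then rewire a fraction p of the edges at random.
n, k = 1000, 10

regular = nx.watts_strogatz_graph(n, k, p=0.0)       # purely local wiring
small_world = nx.watts_strogatz_graph(n, k, p=0.1)   # a sprinkling of shortcuts

# A few random long links slash the "degrees of separation":
print(nx.average_shortest_path_length(regular))      # roughly n / (2k) = 50 hops
print(nx.average_shortest_path_length(small_world))  # drops to a handful of hops
```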

Meyer-Lindenberg created a computer simulation of a small-world network with 13 degrees of separation. Each node was represented by an electrical oscillator that approximated a neuron's activity. The results confirmed that the brain has just the right architecture for its activity to sit on the tipping point between order and disorder, although the team didn't measure neural activity itself (Proceedings of the National Academy of Sciences, vol 103, p 19518).
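Meyer-Lindenberg's exact model isn't described here, but its flavour can be caught with a generic Kuramoto-style simulation of phase oscillators on a ring with random shortcuts - a sketch in which the coupling strength, frequencies and wiring are all illustrative assumptions:

```python
import cmath, math, random

# Generic phase oscillators on a ring with long-range shortcuts: a
# stand-in for the kind of model described above, not Meyer-Lindenberg's
# actual simulation. Each oscillator is nudged towards its neighbours'
# phases, so groups of them can fall into step - phase-locking.
N, K, DT, STEPS = 100, 1.5, 0.05, 2000
freq = [random.gauss(1.0, 0.1) for _ in range(N)]           # natural frequencies
phase = [random.uniform(0, 2 * math.pi) for _ in range(N)]

neighbours = [[(i - 1) % N, (i + 1) % N] for i in range(N)]  # local ring wiring
for _ in range(20):                                          # random shortcuts
    a, b = random.sample(range(N), 2)
    neighbours[a].append(b)
    neighbours[b].append(a)

for _ in range(STEPS):
    pulls = [sum(math.sin(phase[j] - phase[i]) for j in neighbours[i])
             for i in range(N)]
    phase = [phase[i] + DT * (freq[i] + K * pulls[i] / len(neighbours[i]))
             for i in range(N)]

# Order parameter r: 0 means the oscillators fire incoherently,
# 1 means they are fully phase-locked.
r = abs(sum(cmath.exp(1j * p) for p in phase)) / N
print(round(r, 2))
```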

The clinching evidence arrived earlier this year, when Ed Bullmore of the University of Cambridge and his team used brain scanners to record neural activity in 19 human volunteers. They looked at the entire range of brainwave frequencies, from 0.05 hertz all the way up to 125 hertz, across 200 different regions of the brain.
Power laws again

The team found that the durations of both the stable phase-locked periods and the unstable resynchronisation periods followed a power-law distribution. Crucially, this was true at all frequencies, which means the phenomenon is scale invariant - the other key criterion for self-organised criticality.

What's more, when the team tried to reproduce the activity they saw in the volunteers' brains in computer models, they found that they could only do so if the models were in a state of self-organised criticality (PLoS Computational Biology, vol 5, p e1000314). "The models only showed similar patterns of synchronisation to the brain when they were in the critical state," says Bullmore.

The work of Bullmore's team is compelling evidence that self-organised criticality is an essential property of brain activity, says neuroscientist David Liley at Swinburne University of Technology in Melbourne, Australia, who has worked on computational models of chaos in the brain.

But why should that be? Perhaps because self-organised criticality is the perfect starting point for many of the brain's functions.

The neuronal avalanches that Beggs investigated, for example, are perfect for transmitting information across the brain. If the brain was in a more stable state, these avalanches would die out before the message had been transmitted. If it was chaotic, each avalanche could swamp the brain.

At the critical point, however, you get maximum transmission with minimum risk of descending into chaos. "One of the advantages of self-organised criticality is that the avalanches can propagate over many links," says Beggs. "You can have very long chains that won't blow up on you."

Self-organised criticality also appears to allow the brain to adapt to new situations, by quickly rearranging which neurons are synchronised to a particular frequency. "The closer we get to the boundary of instability, the more quickly a particular stimulus will send the brain into a new state," says Liley.

It may also play a role in memory. Beggs's team noticed that certain chains of neurons would fire repeatedly in avalanches, sometimes over several hours (The Journal of Neuroscience, vol 24, p 5216). Because an entire chain can be triggered by the firing of one neuron, these chains could be the stuff of memory, argues Beggs: memories may come to mind unexpectedly because a neuron fires randomly or could be triggered unpredictably by a neuronal avalanche.

The balance between phase-locking and instability within the brain has also been linked to intelligence - at least, to IQ. Last year, Robert Thatcher from the University of South Florida in Tampa made EEG measurements of 17 children, aged between 5 and 17 years, who also performed an IQ test.

He found that the length of time the children's brains spent in both the stable phase-locked states and the unstable phase-shifting states correlated with their IQ scores. For example, phase shifts typically last 55 milliseconds, but an additional 1 millisecond seemed to add as many as 20 points to the child's IQ. A shorter time in the stable phase-locked state also corresponded with greater intelligence - with a difference of 1 millisecond adding 4.6 IQ points to a child's score (NeuroImage, vol 42, p 1639).

Thatcher says this is because a longer phase shift allows the brain to recruit many more neurons for the problem at hand. "It's like casting a net and capturing as many neurons as possible at any one time," he says. The result is a greater overall processing power that contributes to higher intelligence.

Hovering on the edge of chaos provides brains with their amazing capacity to process information and rapidly adapt to our ever-changing environment, but what happens if we stray either side of the boundary? The most obvious assumption would be that all of us are a short step away from mental illness. Meyer-Lindenberg suggests that schizophrenia may be caused by parts of the brain straying away from the critical point. However, for now that is purely speculative.

Thatcher, meanwhile, has found that certain regions in the brains of people with autism spend less time than average in the unstable, phase-shifting states. These abnormalities reduce the capacity to process information and, suggestively, are found only in the regions associated with social behaviour. "These regions have shifted from chaos to more stable activity," he says. The work might also help us understand epilepsy better: in an epileptic fit, the brain has a tendency to suddenly fire synchronously, and deviation from the critical point could explain this.

"They say it's a fine line between genius and madness," says Liley. "Maybe we're finally beginning to understand the wisdom of this statement."

David Robson is a junior editor at New Scientist
