The Mind and the Brain
Jeffrey M. Schwartz and Sharon Begley

It is worth pausing here to address what neuroplasticity is not: just a fancy name for learning and the formation of memories. This not-infrequent criticism of the excitement surrounding the neuroplasticity of the adult brain is reminiscent of the old joke about how new ideas are first dismissed as wrong, and then, when finally accepted, as unimportant. In the case of neuroplasticity, the criticism goes something like this: the idea that the adult brain can rewire itself in some way, and that this rewiring changes how we process information, is no more than a truism. If by neuroplasticity you mean the ability of the brain to form new synapses, then this point is valid: the discovery of the molecular basis of memory shows that the brain undergoes continuous physical change. But the neuroplasticity I’m talking about extends beyond the formation of a synapse here, the withering away of a synapse there. It refers to the wholesale remapping of neural real estate. It refers to regions of the brain’s motor cortex that used to control the movement of the elbow and shoulder, after training, being rewired to control the movement of the right hand. It refers to what happens when a region of the somatosensory cortex that used to register when the left arm was touched, for example, is invaded by the part of the somatosensory cortex that registers when the chin is gently brushed. It refers to visual cortex that has been reprogrammed to receive and process tactile inputs. It is the neural version of suburban sprawl: real estate that used to serve one purpose being developed for another. Use-induced cortical reorganization, says Taub, “involves alterations different from mere learning and memory. Rather than producing just increased synaptic strength at certain junctions, which is believed to underlie learning, some unknown mechanism is instead producing wholesale topographic reorganization.” And more: we are seeing evidence of the brain’s ability to remake itself throughout adult life, not only in response to outside stimuli but even in response to directed mental effort. We are seeing, in short, the brain’s potential to correct its own flaws and enhance its own capacities.

The existence, and importance, of brain plasticity are no longer in doubt. “Some of the most remarkable observations made in recent neuroscience history have been on the capacity of…the cerebral cortex to reorganize [itself] in the face of reduced or enhanced afferent input,” declared Edward Jones of the University of California, Davis, Center for Neuroscience, in 2000. What had been learned from the many experiments in which afferent input to the brain increased? Cortical representations are not immutable; they are, to the contrary, dynamic, continuously modified by the lives we lead. Our brains allocate space to body parts that are used in activities that we perform most often—the thumb of a videogame addict, the index finger of a Braille reader. But although experience molds the brain, it molds only an attending brain. “Passive, unattended, or little-attended exercises are of limited value for driving” neuroplasticity, Merzenich and Jenkins concluded. “Plastic changes in brain representations are generated only when behaviors are specifically attended.” And therein lies the key. Physical changes in the brain depend for their creation on a mental state in the mind—the state called attention. Paying attention matters. It matters not only for the size of the brain’s representation of this or that part of the body’s surface, of this or that muscle. It matters for the dynamic structure of the very circuits of the brain and for the brain’s ability to remake itself.

This would be the next frontier for neuroplasticity: harnessing the transforming power of mind to reshape the brain.

SEVEN

NETWORK REMODELING

The mind is its own place, and in itself
Can make a heaven of hell.

John Milton, Paradise Lost

In the previous two chapters, we examined the brain’s talent for rewriting its zoning laws—or, to be more formal about it, the expression of neuroplasticity that neuroscientists call cortical remapping. We’ve seen how a region of somatosensory cortex that once processed feelings from an arm can be rezoned to handle input from the face; how the visual cortex can stop “seeing” and begin to “feel”; how the motor cortex can reassign its neuronal real estate so that regions controlling much-used digits expand, much as a town might expand a playground when it enjoys a baby boom. In all these cases, brain plasticity follows an increase or decrease in sensory input: an increase, as in the case of violin players’ giving their fingering digits a workout, leads to an expansion of the cortical space devoted to finger movement, whereas a decrease in sensory input, as in the case of amputation, leads to a shrinkage. But there is another aspect of neuroplasticity. Rather than a brute-force expansion or shrinkage of brain regions zoned for particular functions, this form of neuroplasticity alters circuitry within a given region. And it results not from a change in the amount of sensory input, but from a change in its quality.

By the mid-1990s, Michael Merzenich and his UCSF team had two decades of animal research behind them. In addition to all the studies they had made of how changing levels of sensory stimulation altered the somatosensory cortex, they had shown that auditory inputs have the power to change the brain, too: altering sound input, they found, can physically change the auditory cortex of a monkey’s brain and thus change the rate at which the brain processes sounds. The researchers began to suspect that the flip side of this held, too: a brain unable to process rapid-fire sounds, and thus to recognize the differences between sounds like gee and key, or zip and sip, may be different—physically different—from a brain that can. Across the country, at Rutgers University in New Jersey, Paula Tallal and Steve Miller had been studying children who had specific language impairment (SLI). In this condition, the kids have normal intelligence but great difficulty in reading and writing, and even in comprehending spoken language. Perhaps the best-known form of specific language impairment is dyslexia, which affects an estimated 5 to 17 percent of the U.S. population. When Tallal began studying dyslexia in the early 1970s, most educators ascribed it to deficits of visual processing. As the old (and now disproved) stereotype had it, a dyslexic confuses p with q, and b with d. Tallal didn’t buy it. She suspected that dyslexia might reflect a problem not with recognizing the appearance of letters and words but, instead, with processing certain speech sounds—fast ones.

Her hunch was counterintuitive—most dyslexics, after all, have no detectable speech impediments—but it turned out to be right. Dyslexia often does arise from deficits in phonological processing. Dyslexics therefore struggle to decompose words into their constituent sounds and have the greatest trouble with phonemes (the smallest units of oral speech) like the sounds of b, p, d, and g, all of which burst from the lips and vanish in just a few thousandths of a second. In these dyslexics the auditory cortex, it seems, can no more resolve closely spaced sounds than a thirty-five-millimeter camera on Earth can resolve the craters and highlands of the Moon. They literally cannot hear these staccato sounds. How might this happen? Pat Kuhl’s work, discussed in Chapter 3, shows how infants normally become attuned to the sounds of their native language: particular clumps of neurons in the auditory cortex come to represent the phonemes they hear every day. But consider what would happen if this input were somehow messed up, if the brain never correctly detected the phoneme. One likely result would be a failure to assign neurons to particular phonemes. As a result, dyslexics would be no more able to distinguish some phonemes than most native Japanese speakers are to distinguish l from r. Since learning to read involves matching written words to the heard language—learning that C A T has a one-to-one correspondence with the sound cat, for instance—a failure to form clear cortical representations of spoken language leads to impaired reading ability.

Merzenich knew about Tallal’s hypothesis. So at a science meeting in Santa Fe, they discussed her suspicion that some children have problems hearing fast sounds, and her hunch that this deficit underlies their language impairment and reading problems. You could almost see the light bulb go off over Merzenich’s head: his plasticity experiments on monkeys, he told Tallal, had implications for her ideas about dyslexia. Might reading be improved in dyslexics, he wondered, if their ability to process rapid phonemes were improved? And could that be done by harnessing the power of neuroplasticity? Just as his monkeys’ digits became more sensitive through repeated manipulation of little tokens, Merzenich thought, so dyslexics might become more sensitive to phonemes through repeated exposure to auditory stimuli. But they would have to be acoustically modified stimuli: if the basis of dyslexia is that the auditory cortex failed to form dedicated circuits for explosive, staccato phonemes, then the missing circuits would have to be created. They would have to be coaxed into being by exposing a child over and over to phonemes that had been artificially drawn out, so that instead of being so staccato they remained in the hearing system a fraction of a second longer—just enough to induce a cortical response.

Tallal, in the meantime, had received a visit from officials of the Charles A. Dana Foundation, which the industrialist David Mahoney was leading away from its original mission of education and into neuroscience. But not just any neuroscience. Incremental science was all well and good, Mahoney told Tallal, but what he was interested in was discovery science, risk-taking science—research that broke paradigms and made us see the world, and ourselves, in a new light. “Put your hand in the fire!” he encouraged her. The upshot was the launch of a research program on the neurological mechanisms that underlie reading and on how glitches in those mechanisms might explain reading difficulties. Rutgers and UCSF would collaborate in a study aimed at determining whether carefully manipulated sounds could drive changes in the human auditory cortex.

In January 1994, Merzenich, Bill Jenkins, Christoph Schreiner (a postdoc in Merzenich’s lab), and Xiaoqin Wang trekked east, and over two days Tallal and her collaborators told the Californians “everything they knew about kids with Specific Language Impairment,” recalls Jenkins. “We sat there and listened, and about halfway through I blurted out, ‘It sounds like these kids have a backwards masking problem’—a brain deficit in auditory processing. That gave us the insight into how we might develop a way to train the brain to process sounds correctly.” Two months later, the Dana Foundation awarded them a three-year grant of $2.3 million.

The UCSF and Rutgers teams set to work, trying to nail down whether a phonological processing deficit truly underlies dyslexia and whether auditory plasticity might provide the basis for fixing it. They started with the hypothesis that children with specific language impairment construct their auditory cortex from faulty inputs. The kids take in speech sounds in chunks of one-third to one-fifth of a second—a period so long that it’s the length of syllables, not phonemes—with the result that they do not make sharp distinctions between syllables. It’s much like trying to see the weapons carried by troops when your spy camera can’t resolve anything smaller than a tank. So it is with this abnormal “signal chunking”: the brains of these children literally do not hear short phonemes. Ba, for instance, starts with a b and segues explosively into aaaah in a mere 40 milliseconds. For brains unable to process transitions shorter than 200 milliseconds, that’s a problem. The transition from mmm to all in mall, in contrast, takes about 300 milliseconds. Children with specific language impairment can hear mall perfectly well, but ba is often confused with da because all they actually hear is the vowel sound. There are undoubtedly multiple causes of this processing abnormality, including developmental delays, but middle ear infections that muffle sounds are a prime suspect. These deficits in acoustic signal reception seem to emerge in the first year of life and have profound consequences. By age two or three, children with these deficits lag behind their peers in language production and understanding. Later, they often fail to connect the letters of written speech with the sounds that go with those letters. When ba sounds like da, it’s tough to learn to read phonetically.
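To make the timing argument concrete, here is a toy numerical sketch (not from the book; the sampling interval and envelope values are invented for illustration) of how an analysis window that is too long smears a 40-millisecond consonant burst into the vowel that follows it:

```python
import numpy as np

# Toy model: "ba" and "da" as amplitude envelopes sampled every 10 ms.
# The two syllables differ only in the first 40 ms (the consonant burst);
# the 260 ms vowel that follows is identical.
MS_PER_SAMPLE = 10
ba = np.array([0.9, 0.7, 0.5, 0.3] + [0.6] * 26)  # 300 ms total
da = np.array([0.2, 0.4, 0.6, 0.8] + [0.6] * 26)

def chunk(signal, window_ms):
    """Average the signal over non-overlapping windows of window_ms."""
    step = window_ms // MS_PER_SAMPLE
    usable = len(signal) // step * step
    return signal[:usable].reshape(-1, step).mean(axis=1)

# A 20 ms window keeps the consonant bursts distinct:
print(chunk(ba, 20)[:2], chunk(da, 20)[:2])   # [0.8 0.4] vs. [0.3 0.7]

# A 200 ms window averages the burst into the vowel, and the two syllables
# come out nearly identical, the "signal chunking" problem in miniature.
print(chunk(ba, 200), chunk(da, 200))         # roughly [0.6] vs. [0.58]
```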

If language deficits are the result of abnormal learning by the auditory cortex, then the next question was obvious: can the deficits be remedied by learning, too? To find out, Rutgers recruited almost a dozen kids with SLI and set up experimental protocols; UCSF developed the acoustic input, in the form of stretched-out speech, that they hoped would rewire the children’s auditory cortex. But from the beginning Mike Merzenich was concerned. The auditory map forms early in life, so that by the time children are two they have heard spoken something like 10 million to 20 million words—words that, if the hypothesis about phonemic processing deficits was correct, sounded wrong. He knew that cortical representations are maintained through experience, and experience was what these kids had every time they misheard speech. “How are we going to undo that?” he worried. And worse, although the kids would hear modified speech in the lab, they would be hearing, and mishearing, the regular speech of their family and friends the rest of the time. That, Merzenich fretted, would reinforce all of the faulty phonemic mapping that was causing these kids’ problems. Short of isolating the children, there was no way around it: the researchers would simply have to take their best shot at rebuilding a correct phonemic representation in the children’s brains, competing input be damned.

As luck would have it, Xiaoqin Wang had joined Merzenich’s lab in the early 1990s after finishing his Ph.D. at Johns Hopkins, where he had studied the auditory system. Although reading a book on the brain had lured him into neuroscience, Wang’s first love had been information processing: he had earned a master’s degree in computer science and electrical engineering. That experience had given him just the signal-processing knowledge that Paula Tallal and Merzenich needed to produce modified speech tapes that would, they hoped, repair the faulty phonemic representations in the brains of SLI children. Wang was reluctant to enlist in the project, because he was so busy with the experiments on cortical remapping of monkeys’ hand representations. “But Mike is someone you just can’t say no to,” he recalls. “So we took this idea of Tallal’s that if you slow down rapid phonemes the kids will hear them. What I managed to do was slow down speech without changing its pitch or other characteristics. It still sounded like spoken English, but the rapid phonemes were drawn out.” The software stretched out the time between b and aaah, for example, and also changed which syllables were emphasized. To people with normal auditory processing, the sound was like an underwater shout. But to children with SLI, the scientists hoped, it would sound like baa—a sound they had never before heard clearly. When Tallal listened to what Wang had come up with, she was so concerned that the kids would be bored out of their minds, listening to endless repetitions of words and phonemes, that she dashed out to pick up a supply of Cheetos. She figured her team would really have to bribe—er, motivate—the kids to stick with the program.
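The book does not describe Wang’s algorithm in detail, but the core operation, slowing speech down while leaving its pitch intact, is a standard time-scale modification today. A minimal sketch, assuming the librosa and soundfile Python packages and a hypothetical input file (this illustrates the general technique, not Wang’s actual mid-1990s software, which also re-emphasized selected syllables):

```python
import librosa
import soundfile as sf

# Load a speech recording at its native sample rate (the file name is hypothetical).
y, sr = librosa.load("speech_sample.wav", sr=None)

# Stretch the audio to half speed using librosa's phase-vocoder time stretch.
# A rate below 1.0 slows the speech down while leaving pitch essentially
# unchanged, so rapid phonemes are drawn out rather than deepened.
y_slow = librosa.effects.time_stretch(y, rate=0.5)

sf.write("speech_sample_slow.wav", y_slow, sr)
```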

And so began Camp Rutgers, in the summer of 1994. It was a small study with a grand goal: to see whether chronic exposure to acoustically modified phonemes would alter the cortical representation of language of an SLI child and help him overcome his impairment. The scientists’ audacious hope was that they could retrain neurons in the auditory cortex to recognize lightning-fast phonemes. The school-age kids would show up every weekday morning at eight and stay until eleven. While their parents watched behind a one-way mirror, the children donned headphones. Using tapes of speech processed with Wang’s software, they were coached in listening, grammar, and following directions (a novelty for some, since they’d never understood many instructions in the first place). For example, “Point to the boy who’s chasing the girl who’s wearing red,” intoned the program over and over, the better to create the cortical representations of phonemes. To break up the monotony, the scientists offered the kids snacks and puppets, frequent breaks—and in one case, even handstand demonstrations. Steve Miller recalls, “All we did for three hours every day was listen. We couldn’t even talk to the kids: they got enough normal [misheard] speech outside the lab. It was so boring that Paula had to give us pep talks and tell us to stop whining. She would give us a thumbs-up for a good job—and we’d give her a different finger back.” In addition to the three hours listening to modified speech in the lab, every day at home the children played computer games that used processed speech.

As the children progressed, the program moved them from ultra-drawn-out phonemes through progressively less drawn-out ones, until the modified speech was almost identical to normal speech. The results startled even the scientists. After only a month, all the children had advanced two years in language comprehension. For the first time in their lives, they understood speech as well as other kids their age.
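The book gives no numbers for how quickly the stretch was reduced, but the “progressively less drawn-out” schedule amounts to a simple adaptive rule. A hypothetical sketch, reusing the playback-rate convention from the earlier example (0.5 means half speed, 1.0 means normal speech):

```python
def next_stretch_rate(current_rate, was_correct, step=0.05,
                      min_rate=0.5, max_rate=1.0):
    """Hypothetical schedule: after a correct response, move the playback
    rate a small step toward normal speech (1.0); after a miss, fall back
    toward heavily stretched speech (0.5)."""
    if was_correct:
        return min(max_rate, current_rate + step)
    return max(min_rate, current_rate - step)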

“So we had these great results from a small group of kids,” Steve Miller says. “But when Paula went to a conference in Hawaii, people jumped all over her, screaming that we couldn’t make these results public. They pointed out that we had no controls: how did we know that the language improvement didn’t reflect simply the one-on-one attention the kids got, rather than something specific to the modified speech?” Merzenich was hugely offended. He was itching to get the results to people who would benefit from them. But he agreed to keep quiet. “Sure, we had these great results with seven kids,” says Bill Jenkins. “But we knew no one would believe it. We knew we had to go back,” to get better data on more children.
