Three interactive resources—correlated with the text

Exploring Biological Psychology CD-ROM
This CD-ROM, packaged with each new text, draws you into a multimedia world that reinforces text concepts interactively while making learning more fun. You'll find animations, videos of top researchers, drag-and-drop puzzles, and experiential Try It Yourself activities—many written specifically for the CD-ROM. A special highlight is a virtual reality activity featuring a three-dimensional brain that you can rotate and dissect! There are also quizzes and critical thinking exercises, and your instructor may ask you to print out or e-mail your responses.

Animations, Videos, and Activities

Chapter 1: The Major Issues
Binocular Rivalry (Try It Yourself) New! • Evolutionary Studies (video) New! • Offspring of Parents Homozygous and Heterozygous for Brown Eyes (animation) • RNA, DNA, and Protein (animation) • Selection and Random Drift (Try It Yourself) New!

Chapter 2: Nerve Cells and Nerve Impulses
The Parts of a Neuron (animation) • Virtual Reality Neuron (virtual reality) • Neuron Puzzle (drag and drop) • Resting Potential (animation) • Action Potential (AP) (animation) • Action Potential: Na+ Ions (animation) • Neuron Membrane at Rest (figure animation) New! • Propagation of the Action Potential (figure animation) New!

Chapter 3: Synapses
Postsynaptic Potentials (animation) • Release of Neurotransmitter (animation) • Cholinergic (animation) • Release of ACh (animation) • AChE Inactivates ACh (animation) • AChE Inhibitors (animation) • Opiate Narcotics

Chapter 4: Anatomy of the Nervous System
Virtual Reality Head Planes (virtual reality) • Planes Puzzle (drag and drop) • 3D Virtual Brain (virtual reality) • Left Hemisphere Function #1 (rollover with text pop-ups) • Cortex Puzzle (drag and drop) • Sagittal Section: Right Hemisphere #1 (rollover with text pop-ups) • Sagittal Section: Right Hemisphere #2 (rollover with text pop-ups) • Sagittal Section: Right Hemisphere #3 (rollover with text pop-ups) • Brain Puzzle (drag and drop) • The Motor Cortex (animation) • The Sensory Cortex (animation) • Illustration of Binding (Try It Yourself) • Possible Failure of Binding (Try It Yourself) New! • Research with Brain Scans (video) New!

Chapter 5: Development and Plasticity of the Brain
Sperry Experiment (figure animation) • Phantom Limb (figure animation)

Chapter 6: Vision
Anatomy of the Eye • The Retina (animation) • Virtual Reality Eye (virtual reality) • Blind Spot (Try It Yourself) • Color Blindness in Visual Periphery (Try It Yourself) • Brightness Contrast (Try It Yourself) • Motion Aftereffect (Try It Yourself) • Visuo-Motor Control (Try It Yourself) New!

Chapter 7: The Other Sensory Systems
Hearing (learning module) • The Hearing Puzzle (puzzle) • Somesthetic Experiment (drag and drop)

Chapter 8: Movement
The Withdrawal Reflex (animation) • The Crossed Extensor Reflex (animation) • Major Motor Areas (figure animation)

Chapter 9: Wakefulness and Sleep
Sleep Rhythms (learning module) New! • Sleep Cycle (video) New!

Chapter 10: Internal Regulation
Pathways from the Lateral Hypothalamus (figure animation) • Hypothalamic Control of Feeding (figure animation) New! • Anorexia Patient: Susan (video)

Chapter 11: Reproductive Behaviors
Menstruation Cycle (animation) • Erectile Dysfunction (video)

Chapter 12: Emotional Behaviors
Facial Analysis (video) New! • Amygdala and Fear Conditioning (figure animation) • GABA Receptors (figure animation) New! • CNS Depressants (animation) • Health and Stress (video) • Cells of the Immune System (figure animation) New!

Chapter 13: The Biology of Learning and Memory
Classical Conditioning (video) • Amnesia and Different Types of Memory (video) New! • Amnesic Patient (video) • Implicit Memories (Try It Yourself) • Alzheimer's Patient (video) • Long-Term Potentiation (Try It Yourself) • Neural Networks and Memory (video)

Chapter 14: Cognitive Functions
Hemisphere Control (Try It Yourself) • Lateralization and Language (animation) • Split Brain (learning module) New! • McGurk Effect (Try It Yourself) New! • Wernicke-Geschwind Model (learning module) New! • Attention Deficit Disorder (video) • Change Blindness (Try It Yourself) New! • Stop Signal (Try It Yourself) New!

Chapter 15: Psychological Disorders
Understanding Addiction (video) • CNS Stimulants (animation) • Major Depressive Disorder—Barbara 1 (video) • Major Depressive Disorder—Barbara 2 (video) • Antidepressant Drugs (animation) • Bipolar Disorder—Mary 1 (video) • Bipolar Disorder—Mary 2 (video) • Bipolar Disorder—Mary 3 (video) • Bipolar Disorder—Etta 1 (video) • Bipolar Disorder—Etta 2 (video)

Book Companion Website
www.thomsonedu.com/psychology/kalat
Multimedia support for each chapter! Here's another way to make learning an interactive experience. Get prepared for class exams with chapter-by-chapter online quizzes, a final test that's great for practice, web links, flashcards, a pronunciation glossary, Try It Yourself experiential exercises (different from the ones in the text), and more. Visit today!

ThomsonNOW™
Just what you need to do NOW! Save time, focus your study, and use the choices and tools you need to get the grade you want! ThomsonNOW™ for Kalat's Biological Psychology, Ninth Edition, is your source for results NOW. It also includes an integrated eBook version of this text. Students who use ThomsonNOW find that they study more efficiently, because they spend their time on what they still need to master rather than on information they have already learned. And they find studying more fun because they aren't just reading—they're interacting with everything from videos and websites to text pages and animations.

Personalized study plans and live online tutoring
After reading a text chapter, go to www.thomsonedu.com/thomsonnow and take the online pre-test. ThomsonNOW automatically grades the pre-test and then provides a personalized study plan that directs you to the electronic text pages and online resources you need to review. If you need more help, you can click on the vMentor™ icon and get live, online assistance from a psychology tutor! A post-test measures your mastery.

Convenient access to CD-ROM content
If you wish, you can access the Exploring Biological Psychology CD-ROM's media content through ThomsonNOW—it's like having two study companions in one!

Due to contractual reasons certain ancillaries are available only in higher education or U.S. domestic markets. Minimum purchases may apply to receive the ancillaries at no charge. For more information, please contact your local Thomson sales representative.

Turn to the back of this book for defining comments from noted biological psychologists.
Biological Psychology
Ninth Edition

James W. Kalat
North Carolina State University

Australia • Brazil • Canada • Mexico • Singapore • Spain • United Kingdom • United States
Publisher: Vicki Knight
Development Editor: Kirk Bomont
Managing Assistant Editor: Jennifer Alexander
Editorial Assistant: Sheila Walsh
Managing Technology Project Manager: Darin Derstine
Vice President, Director of Marketing: Caroline Croley
Marketing Assistant: Natasha Coats
Senior Marketing Communications Manager: Kelley McAllister
Content Project Manager: Karol Jurado
Creative Director: Rob Hugel
Senior Art Director: Vernon Boes
Senior Print Buyer: Karen Hunt
Senior Permissions Editor: Joohee Lee
Production Service: Nancy Shammas, New Leaf Publishing Services
Text Designer: Tani Hasegawa
Art Editor: Lisa Torri
Photo Researcher: Eric Schrader
Copy Editor: Frank Hubert
Illustrator: Precision Graphics
Cover Designer: Denise Davidson
Cover Image: Runaway Technology
Cover Printer: Transcontinental Printing/Interglobe
Compositor: Thompson Type
Printer: Transcontinental Printing/Interglobe
© 2007 Thomson Wadsworth, a part of The Thomson Corporation. Thomson, the Star logo, and Wadsworth are trademarks used herein under license.
Thomson Higher Education 10 Davis Drive Belmont, CA 94002-3098 USA
ALL RIGHTS RESERVED. No part of this work covered by the copyright hereon may be reproduced or used in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, web distribution, information storage and retrieval systems, or in any other manner—without the written permission of the publisher.

Printed in Canada
1 2 3 4 5 6 7   10 09 08 07 06

ExamView® and ExamView Pro® are registered trademarks of FSCreations, Inc. Windows is a registered trademark of the Microsoft Corporation used herein under license. Macintosh and Power Macintosh are registered trademarks of Apple Computer, Inc. Used herein under license.

© 2007 Thomson Learning, Inc. All Rights Reserved. Thomson Learning WebTutor™ is a trademark of Thomson Learning, Inc.

Library of Congress Control Number: 2006924488
Student Edition: ISBN 0-495-09079-4
For more information about our products, contact us at: Thomson Learning Academic Resource Center, 1-800-423-0563. For permission to use material from this text or product, submit a request online at http://www.thomsonrights.com. Any additional questions about permissions can be submitted by e-mail.
About the Author
James W. Kalat (rhymes with ballot) is Professor of Psychology at North Carolina State University, where he teaches courses in introduction to psychology and biological psychology. Born in 1946, he received an AB degree summa cum laude from Duke University in 1968 and a PhD in psychology from the University of Pennsylvania in 1971. He is also the author of Introduction to Psychology, Seventh Edition (Belmont, CA: Wadsworth, 2005) and co-author with Michelle Shiota of Emotion (Belmont, CA: Wadsworth, 2006). In addition to textbooks, he has written journal articles on taste-aversion learning, the teaching of psychology, and other topics. A remarried widower, he has three children, two stepchildren, and two grandchildren.
To My Family
Brief Contents

1 The Major Issues
2 Nerve Cells and Nerve Impulses
3 Synapses
4 Anatomy of the Nervous System
5 Development and Plasticity of the Brain
6 Vision
7 The Other Sensory Systems
8 Movement
9 Wakefulness and Sleep
10 Internal Regulation
11 Reproductive Behaviors
12 Emotional Behaviors
13 The Biology of Learning and Memory
14 Cognitive Functions
15 Psychological Disorders
A Brief, Basic Chemistry
B Society for Neuroscience Policies on the Use of Animals and Human Subjects in Neuroscience Research
Contents

1  The Major Issues
   Module 1.1  The Mind–Brain Relationship
      Biological Explanations of Behavior
      The Brain and Conscious Experience
      Research Approaches
      Career Opportunities
      In Closing: Your Brain and Your Experience
      Summary • Answers to Stop & Check Questions • Thought Questions • Author's Answer About Machine Consciousness
   Module 1.2  The Genetics of Behavior
      Mendelian Genetics: Chromosomes and Crossing Over • Sex-Linked and Sex-Limited Genes • Sources of Variation
      Heredity and Environment: Possible Complications • Environmental Modification • How Genes Affect Behavior
      The Evolution of Behavior: Common Misunderstandings About Evolution • Evolutionary Psychology
      In Closing: Genes and Behavior
      Summary • Answers to Stop & Check Questions • Thought Questions
   Module 1.3  The Use of Animals in Research
      Reasons for Animal Research
      The Ethical Debate
      In Closing: Humans and Animals
      Summary • Answers to Stop & Check Questions
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

2  Nerve Cells and Nerve Impulses
   Module 2.1  The Cells of the Nervous System
      Anatomy of Neurons and Glia: EXTENSIONS AND APPLICATIONS Santiago Ramón y Cajal, a Pioneer of Neuroscience • The Structures of an Animal Cell • The Structure of a Neuron • Variations Among Neurons • Glia
      The Blood-Brain Barrier: Why We Need a Blood-Brain Barrier • How the Blood-Brain Barrier Works
      The Nourishment of Vertebrate Neurons
      In Closing: Neurons
      Summary • Answers to Stop & Check Questions
   Module 2.2  The Nerve Impulse
      The Resting Potential of the Neuron: Forces Acting on Sodium and Potassium Ions • Why a Resting Potential?
      The Action Potential: The Molecular Basis of the Action Potential • The All-or-None Law • The Refractory Period
      Propagation of the Action Potential
      The Myelin Sheath and Saltatory Conduction
      Local Neurons: Graded Potentials • EXTENSIONS AND APPLICATIONS Small Neurons and Big Misconceptions
      In Closing: Neural Messages
      Summary • Answers to Stop & Check Questions • Thought Questions
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

3  Synapses
   Module 3.1  The Concept of the Synapse
      The Properties of Synapses: Speed of a Reflex and Delayed Transmission at the Synapse • Temporal Summation • Spatial Summation • Inhibitory Synapses
      Relationship Among EPSP, IPSP, and Action Potential
      In Closing: The Neuron as Decision Maker
      Summary • Answers to Stop & Check Questions • Thought Questions
   Module 3.2  Chemical Events at the Synapse
      The Discovery of Chemical Transmission at Synapses
      The Sequence of Chemical Events at a Synapse: Types of Neurotransmitters • Synthesis of Transmitters • Transport of Transmitters • Release and Diffusion of Transmitters • Activation of Receptors of the Postsynaptic Cell • Inactivation and Reuptake of Neurotransmitters • Negative Feedback from the Postsynaptic Cell
      Synapses and Personality
      In Closing: Neurotransmitters and Behavior
      Summary • Answers to Stop & Check Questions • Thought Questions
   Module 3.3  Drugs and Synapses
      Drug Mechanisms
      Common Drugs and Their Synaptic Effects: Stimulant Drugs • Nicotine • Opiates • Marijuana • Hallucinogenic Drugs
      In Closing: Drugs and Behavior
      Summary • Answers to Stop & Check Questions • Thought Question
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

4  Anatomy of the Nervous System
   Module 4.1  Structure of the Vertebrate Nervous System
      Terminology That Describes the Nervous System
      The Spinal Cord
      The Autonomic Nervous System: EXTENSIONS AND APPLICATIONS Goose Bumps
      The Hindbrain
      The Midbrain
      The Forebrain: Thalamus • Hypothalamus • Pituitary Gland • Basal Ganglia • Basal Forebrain • Hippocampus
      The Ventricles
      In Closing: Learning Neuroanatomy
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 4.2  The Cerebral Cortex
      Organization of the Cerebral Cortex
      The Occipital Lobe
      The Parietal Lobe
      The Temporal Lobe
      The Frontal Lobe: EXTENSIONS AND APPLICATIONS The Rise and Fall of Prefrontal Lobotomies • Modern View of the Prefrontal Cortex
      How Do the Parts Work Together?
      In Closing: Functions of the Cerebral Cortex
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 4.3  Research Methods
      Correlating Brain Anatomy with Behavior
      Recording Brain Activity
      Effects of Brain Damage
      Effects of Brain Stimulation
      Brain and Intelligence: Comparisons Across Species • Comparisons Across Humans
      In Closing: Research Methods and Their Limits
      Summary • Answers to Stop & Check Questions • Thought Question
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

5  Development and Plasticity of the Brain
   Module 5.1  Development of the Brain
      Growth and Differentiation of the Vertebrate Brain: Growth and Development of Neurons • New Neurons Later in Life
      Pathfinding by Axons: Chemical Pathfinding by Axons • Competition Among Axons as a General Principle
      Determinants of Neuronal Survival
      The Vulnerable Developing Brain
      Fine-Tuning by Experience: Experience and Dendritic Branching • Effects of Special Experiences
      In Closing: Brain Development
      Summary • Answers to Stop & Check Questions • Thought Questions
   Module 5.2  Plasticity After Brain Damage
      Brain Damage and Short-Term Recovery: EXTENSIONS AND APPLICATIONS How Woodpeckers Avoid Concussions • Reducing the Harm from a Stroke
      Later Mechanisms of Recovery: Diaschisis • The Regrowth of Axons • Sprouting • Denervation Supersensitivity • Reorganized Sensory Representations and the Phantom Limb • METHODS 5.1 Histochemistry • Learned Adjustments in Behavior
      In Closing: Brain Damage and Recovery
      Summary • Answers to Stop & Check Questions • Thought Questions
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

6  Vision
   Module 6.1  Visual Coding and the Retinal Receptors
      General Principles of Perception: From Neuronal Activity to Perception • Law of Specific Nerve Energies
      The Eye and Its Connections to the Brain: The Route Within the Retina • Fovea and Periphery of the Retina
      Visual Receptors: Rods and Cones
      Color Vision: The Trichromatic (Young-Helmholtz) Theory • The Opponent-Process Theory • The Retinex Theory • Color Vision Deficiency • EXTENSIONS AND APPLICATIONS People with Four Cone Types
      In Closing: Visual Receptors
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 6.2  The Neural Basis of Visual Perception
      An Overview of the Mammalian Visual System
      Processing in the Retina
      Pathways to the Lateral Geniculate and Beyond
      Pattern Recognition in the Cerebral Cortex: Pathways in the Visual Cortex • The Shape Pathway • METHODS 6.1 Microelectrode Recordings • The Columnar Organization of the Visual Cortex • Are Visual Cortex Cells Feature Detectors? • Shape Analysis Beyond Area V1
      Disorders of Object Recognition
      The Color, Motion, and Depth Pathways: Structures Important for Motion Perception • EXTENSIONS AND APPLICATIONS Suppressed Vision During Eye Movements • Motion Blindness
      Visual Attention
      In Closing: From Single Cells to Vision
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 6.3  Development of Vision
      Infant Vision: Attention to Faces and Face Recognition • Visual Attention and Motor Control
      Early Experience and Visual Development: Early Lack of Stimulation of One Eye • Early Lack of Stimulation of Both Eyes • Uncorrelated Stimulation in the Two Eyes • Restoration of Response After Early Deprivation of Vision • Early Exposure to a Limited Array of Patterns • People with Vision Restored After Early Deprivation
      In Closing: The Nature and Nurture of Vision
      Summary • Answers to Stop & Check Questions • Thought Questions
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

7  The Other Sensory Systems
   Module 7.1  Audition
      Sound and the Ear: Physical and Psychological Dimensions of Sound • Structures of the Ear
      Pitch Perception: Frequency Theory and Place Theory
      The Auditory Cortex
      Hearing Loss
      Sound Localization
      In Closing: Functions of Hearing
      Summary • Answers to Stop & Check Questions • Thought Questions
   Module 7.2  The Mechanical Senses
      Vestibular Sensation
      Somatosensation: Somatosensory Receptors • EXTENSIONS AND APPLICATIONS Tickle • Input to the Spinal Cord and the Brain
      Pain: Pain Stimuli and the Pain Pathways • Ways of Relieving Pain • Sensitization of Pain
      Itch
      In Closing: The Mechanical Senses
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 7.3  The Chemical Senses
      General Issues About Chemical Coding
      Taste: Taste Receptors • How Many Kinds of Taste Receptors? • EXTENSIONS AND APPLICATIONS Chemicals That Alter the Taste Buds • Mechanisms of Taste Receptors • Taste Coding in the Brain • Individual Differences in Taste
      Olfaction: Behavioral Methods of Identifying Olfactory Receptors • Biochemical Identification of Receptor Types • Implications for Coding • Messages to the Brain • Individual Differences
      Vomeronasal Sensation and Pheromones
      Synesthesia
      In Closing: Different Senses as Different Ways of Knowing the World
      Summary • Answers to Stop & Check Questions • Thought Questions
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

8  Movement
   Module 8.1  The Control of Movement
      Muscles and Their Movements: Fast and Slow Muscles • Muscle Control by Proprioceptors
      Units of Movement: Voluntary and Involuntary Movements • EXTENSIONS AND APPLICATIONS Infant Reflexes • Movements with Different Sensitivity to Feedback • Sequences of Behaviors
      In Closing: Categories of Movement
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 8.2  Brain Mechanisms of Movement
      The Cerebral Cortex: Connections from the Brain to the Spinal Cord • Areas Near the Primary Motor Cortex • Conscious Decisions and Movements
      The Cerebellum: Evidence of a Broad Role • Cellular Organization
      The Basal Ganglia
      Brain Areas and Motor Learning
      In Closing: Movement Control and Cognition
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 8.3  Disorders of Movement
      Parkinson's Disease: Possible Causes • L-Dopa Treatment • Therapies Other Than L-Dopa
      Huntington's Disease: Heredity and Presymptomatic Testing
      In Closing: Heredity and Environment in Movement Disorders
      Summary • Answers to Stop & Check Questions • Thought Questions
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

9  Wakefulness and Sleep
   Module 9.1  Rhythms of Waking and Sleeping
      Endogenous Cycles: Duration of the Human Circadian Rhythm
      Mechanisms of the Biological Clock: The Suprachiasmatic Nucleus (SCN) • The Biochemistry of the Circadian Rhythm • Melatonin
      Setting and Resetting the Biological Clock: Jet Lag • Shift Work • How Light Resets the SCN
      In Closing: Sleep–Wake Cycles
      Summary • Answers to Stop & Check Questions • Thought Questions
   Module 9.2  Stages of Sleep and Brain Mechanisms
      The Stages of Sleep
      Paradoxical or REM Sleep
      Brain Mechanisms of Wakefulness and Arousal: Brain Structures of Arousal and Attention • Getting to Sleep
      Brain Function in REM Sleep
      Sleep Disorders: Sleep Apnea • Narcolepsy • Periodic Limb Movement Disorder • REM Behavior Disorder • Night Terrors, Sleep Talking, and Sleepwalking
      In Closing: Stages of Sleep
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 9.3  Why Sleep? Why REM? Why Dreams?
      Functions of Sleep: Sleep and Energy Conservation • EXTENSIONS AND APPLICATIONS Hibernation • Restorative Functions of Sleep • Sleep and Memory
      Functions of REM Sleep: Individual and Species Differences • Effects of REM Sleep Deprivation • Hypotheses
      Biological Perspectives on Dreaming: The Activation-Synthesis Hypothesis • The Clinico-Anatomical Hypothesis
      In Closing: Our Limited Self-Understanding
      Summary • Answers to Stop & Check Questions • Thought Question
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

10  Internal Regulation
   Module 10.1  Temperature Regulation
      Homeostasis and Allostasis
      Controlling Body Temperature: EXTENSIONS AND APPLICATIONS Surviving in Extreme Cold • The Advantages of Constant High Body Temperature • Brain Mechanisms • Fever
      In Closing: Combining Physiological and Behavioral Mechanisms
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 10.2  Thirst
      Mechanisms of Water Regulation
      Osmotic Thirst
      Hypovolemic Thirst and Sodium-Specific Hunger
      In Closing: The Psychology and Biology of Thirst
      Summary • Answers to Stop & Check Questions • Thought Questions
   Module 10.3  Hunger
      How the Digestive System Influences Food Selection: Enzymes and Consumption of Dairy Products • Other Influences on Food Selection
      Short- and Long-Term Regulation of Feeding: Oral Factors • The Stomach and Intestines • Glucose, Insulin, and Glucagon • Leptin
      Brain Mechanisms: The Arcuate Nucleus and Paraventricular Hypothalamus • The Lateral Hypothalamus • Medial Areas of the Hypothalamus
      Eating Disorders: Genetics and Body Weight • Weight-Loss Techniques • Anorexia Nervosa • Bulimia Nervosa
      In Closing: The Multiple Controls of Hunger
      Summary • Answers to Stop & Check Questions • Thought Question
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

11  Reproductive Behaviors
   Module 11.1  Sex and Hormones
      Organizing Effects of Sex Hormones: Sex Differences in the Gonads • Sex Differences in the Hypothalamus • Sex Differences in the Cerebral Cortex and Cognition
      Activating Effects of Sex Hormones: Rodents • Humans • EXTENSIONS AND APPLICATIONS Premenstrual Syndrome
      Parental Behavior
      In Closing: Reproductive Behaviors and Motivations
      Summary • Answers to Stop & Check Questions • Thought Questions
   Module 11.2  Variations in Sexual Behavior
      Evolutionary Interpretations of Mating Behavior: Interest in Multiple Mates • What Men and Women Seek in Their Mates • Differences in Jealousy • Evolved or Learned? • Conclusions
      Gender Identity and Gender-Differentiated Behaviors: Intersexes • Interests and Preferences of CAH Girls • Testicular Feminization • Issues of Gender Assignment and Rearing • Discrepancies of Sexual Appearance
      Possible Biological Bases of Sexual Orientation: Genetics • Hormones • Prenatal Events • Brain Anatomy
      In Closing: We Are Not All the Same
      Summary • Answers to Stop & Check Questions • Thought Questions
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

12  Emotional Behaviors
   Module 12.1  What Is Emotion?
      Emotions, Autonomic Response, and the James-Lange Theory: Is Physiological Arousal Necessary for Emotions? • Is Physiological Arousal Sufficient for Emotions?
      Brain Areas Associated with Emotion: Attempts to Localize Specific Emotions • Contributions of the Left and Right Hemispheres
      The Functions of Emotions
      In Closing: Emotions and the Nervous System
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 12.2  Attack and Escape Behaviors
      Attack Behaviors: Heredity and Environment in Violence • Hormones • Brain Abnormalities and Violence • Serotonin Synapses and Aggressive Behavior
      Escape, Fear, and Anxiety: Fear, Anxiety, and the Amygdala • Studies of Rodents • Studies of Monkeys • Activation of the Human Amygdala • Damage to the Human Amygdala
      Anxiety-Reducing Drugs: METHODS 12.1 Microdialysis • EXTENSIONS AND APPLICATIONS Alcohol as an Anxiety Reducer
      In Closing: Doing Something About Emotions
      Summary • Answers to Stop & Check Questions • Thought Questions
   Module 12.3  Stress and Health
      Concepts of Stress
      Stress and the Hypothalamus-Pituitary-Adrenal Cortex Axis: The Immune System • Effects of Stress on the Immune System
      Posttraumatic Stress Disorder
      In Closing: Emotions and Body Reactions
      Summary • Answers to Stop & Check Questions • Thought Question
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

13  The Biology of Learning and Memory
   Module 13.1  Learning, Memory, Amnesia, and Brain Functioning
      Localized Representations of Memory: Lashley's Search for the Engram • The Modern Search for the Engram
      Types of Memory: Short- and Long-Term Memory • Working Memory
      The Hippocampus and Amnesia: Amnesia After Hippocampal Damage • Individual Differences in Hippocampus and Memory • Theories of the Function of the Hippocampus • The Hippocampus and Consolidation
      Other Types of Brain Damage and Amnesia: Korsakoff's Syndrome and Other Prefrontal Damage • Alzheimer's Disease • What Patients with Amnesia Teach Us
      In Closing: Different Types of Memory
      Summary • Answers to Stop & Check Questions • Thought Questions
   Module 13.2  Storing Information in the Nervous System
      EXTENSIONS AND APPLICATIONS Blind Alleys and Abandoned Mines
      Learning and the Hebbian Synapse
      Single-Cell Mechanisms of Invertebrate Behavior Change: Aplysia as an Experimental Animal • Habituation in Aplysia • Sensitization in Aplysia
      Long-Term Potentiation in Mammals: Biochemical Mechanisms • LTP and Behavior
      In Closing: The Physiology of Memory
      Summary • Answers to Stop & Check Questions • Thought Question
   Chapter Ending: Key Terms and Activities (Terms • Suggestion for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

14  Cognitive Functions
   Module 14.1  Lateralization of Function
      Handedness and Its Genetics
      The Left and Right Hemispheres
      Visual and Auditory Connections to the Hemispheres
      Cutting the Corpus Callosum: METHODS 14.1 Testing Hemispheric Dominance for Speech • Split Hemispheres: Competition and Cooperation • The Right Hemisphere • Hemispheric Specializations in Intact Brains
      Development of Lateralization and Handedness: Anatomical Differences Between the Hemispheres • Maturation of the Corpus Callosum • Development Without a Corpus Callosum • Hemispheres, Handedness, and Language Dominance • Recovery of Speech After Brain Damage
      Avoiding Overstatements
      In Closing: One Brain, Two Hemispheres
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 14.2  Evolution and Physiology of Language
      Nonhuman Precursors of Language: Common Chimpanzees • Bonobos • Nonprimates
      How Did Humans Evolve Language? Language as a Product of Overall Intelligence • Language as a Special Module • Does Language Learning Have a Critical Period?
      Brain Damage and Language: Broca's Aphasia (Nonfluent Aphasia) • Wernicke's Aphasia (Fluent Aphasia)
      Dyslexia
      In Closing: Language and the Brain
      Summary • Answers to Stop & Check Questions • Thought Questions
   Module 14.3  Attention
      Alterations in Brain Responses
      Neglect
      Attention-Deficit Hyperactivity Disorder: Measurements of ADHD Behavior • Possible Causes and Brain Differences • Treatments
      In Closing: Attending to Attention
      Summary • Answers to Stop & Check Questions • Thought Question
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

15  Psychological Disorders
   Module 15.1  Substance Abuse and Addictions
      Synapses, Reinforcement, and Addiction: Reinforcement and the Nucleus Accumbens • Addiction as Increased "Wanting" • Sensitization of the Nucleus Accumbens
      Alcohol and Alcoholism: Genetics • Risk Factors
      Medications to Combat Substance Abuse: Antabuse • Methadone
      In Closing: Addictions
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 15.2  Mood Disorders
      Major Depressive Disorder: Genetics and Life Events • Hormones • Abnormalities of Hemispheric Dominance • Viruses • Antidepressant Drugs • EXTENSIONS AND APPLICATIONS Accidental Discoveries of Psychiatric Drugs • Other Therapies
      Bipolar Disorder: Genetics • Treatments
      Seasonal Affective Disorder (SAD)
      In Closing: The Biology of Mood Swings
      Summary • Answers to Stop & Check Questions • Thought Question
   Module 15.3  Schizophrenia
      Characteristics: Behavioral Symptoms • EXTENSIONS AND APPLICATIONS Differential Diagnosis of Schizophrenia • Demographic Data
      Genetics: Twin Studies • Adopted Children Who Develop Schizophrenia • Efforts to Locate a Gene
      The Neurodevelopmental Hypothesis: Prenatal and Neonatal Environment • Mild Brain Abnormalities • METHODS 15.1 The Wisconsin Card Sorting Task • Early Development and Later Psychopathology
      Neurotransmitters and Drugs: Antipsychotic Drugs and Dopamine • Role of Glutamate • New Drugs
      In Closing: The Fascination of Schizophrenia
      Summary • Answers to Stop & Check Questions • Thought Questions
   Chapter Ending: Key Terms and Activities (Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW)

A  Brief, Basic Chemistry
B  Society for Neuroscience Policies on the Use of Animals and Human Subjects in Neuroscience Research

References
Name Index
Subject Index/Glossary
Preface
In the first edition of this text, published in 1981, I remarked, "I almost wish I could get parts of this text . . . printed in disappearing ink, programmed to fade within ten years of publication, so that I will not be embarrassed by statements that will look primitive from some future perspective." I would say the same thing today, except that I would like for the ink to fade in much less than ten years. Biological psychology progresses rapidly, and many statements become out-of-date quickly. The most challenging aspect of writing a text is selecting what to include and what to omit.

My primary goal in writing this text through each edition has been to show the importance of neuroscience, genetics, and evolution for psychology and not just for biology. I have focused on the biological mechanisms of such topics as language, learning, sexual behavior, anxiety, aggression, attention, abnormal behavior, and the mind–body problem. I hope that by the end of the book readers will clearly see what the study of the brain has to do with "real psychology" and that they will be interested in learning more.

Each chapter is divided into modules; each module begins with its own introduction and finishes with its own summary and questions. This organization makes it easy for instructors to assign part of a chapter per day instead of a whole chapter per week. Parts of chapters can also be covered in a different order. Indeed, whole chapters can be rearranged in a different order. I know one instructor who likes to start with Chapter 14.

I assume that the reader has a basic background in psychology and biology and understands such basic terms as classical conditioning, reinforcement, vertebrate, mammal, gene, chromosome, cell, and mitochondrion. Naturally, the stronger the background is, the better. I also assume a high-school chemistry course. Those with a weak background in chemistry or a fading memory of it may consult Appendix A.
Changes in This Edition

The changes in this text are my attempt to keep pace with the rapid progress in biological psychology. This text includes more than 550 new references from 2002 through 2006 and countless major and minor changes, including new or improved illustrations, a redesigned layout, and new Try It Yourself activities, both within the text and online. Here are some highlights:

General
At relevant points throughout the text, icons and references have been added to direct the reader's attention to the Try It Yourself activities available online and on the CD that accompanies the text. Five new or revised Try It Yourself activities are available on the CD and online. These demonstrations show some processes that a static display in a book cannot. When I read about a perceptual phenomenon or an experimental result, I often wonder what it would be like to experience the effect described. If I can experience it myself, I understand it better. I assume other people feel the same way.
The module on attention was moved from Chapter 7 (senses) to Chapter 14 (cognitive functions). Most of the discussion of drugs and their mechanisms was moved from Chapter 15 (psychological disorders) to Chapter 3 (synapses). Many of the research methods previously discussed in other chapters have been consolidated into the module on research methods in Chapter 4 (anatomy).

Chapter 1
• Added research studies examining the brain mechanisms related to consciousness.

Chapter 2
• New information indicates that in some cases axons convey information by their rhythm of firing as well as their overall frequency of firing.

Chapter 3
• Most of the material on hormones in general was moved from Chapter 11 to this chapter.
• Updated and expanded discussion of drug mechanisms.
• Added discussion of mechanisms for the postsynaptic cell to provide negative feedback to the presynaptic cell.

Chapter 4
• The module on research methods was expanded, modified, and moved to the end of the chapter.
• New explanation of binding with an improved Try It Yourself activity.
• New section discussing the research on the relationship between brain size and intelligence. Discussion of species differences in brain anatomy moved here from Chapter 5.

Chapter 5
• Revised order of topics in both modules.
• Experiment on reorganization of the infant ferret cortex revised and moved here from Chapter 6.
• New discussion of brain changes that result from lifelong blindness.

Chapter 6
• New examples of species differences in vision.
• Updated discussion of blindsight, face recognition, motion blindness, and visual attention.
• Added several new studies of the development of vision, including people who had vision restored in adulthood after having had little or none since early childhood.

Chapter 7
• Much expanded discussion of the auditory cortex, including parallels between the auditory and visual systems.
• Neuropsychological studies of a patient who cannot integrate vestibular sensation with other senses and therefore has "out of body" experiences.
• Neuropsychological studies of a patient who has no conscious touch sensation but nevertheless feels pleasure when touched.
• Reorganized discussion of pain.
• New research added showing that chronic pain depends on a mechanism related to learning.
• New section added on synesthesia, the tendency of certain people to experience one sense in response to stimulation of a different sense.

Chapter 8
• Added "mirror neurons" in the motor cortex that respond both to one's own movements and the sight of other people doing the same movements.
• New section on the relationship between conscious decisions and movements.

Chapter 9
• New material added on the differences between morning people and evening people.
• New research included on the role of orexin in maintaining wakefulness.
• GABA release during sleep does not decrease neuronal activity, but decreases the spread of excitation at synapses. Neuronal activity continues, although much of it is not conscious.
• Reorganized section on theories of the need for sleep.
• New examples of sleep specializations in other species: dolphins, migratory birds, European swifts (who sleep while flying).
• Added information about sleep in astronauts, submarine sailors, and people working in Antarctica.

Chapter 10
• Several new examples of seemingly odd animal behaviors that make sense in terms of temperature regulation.
• A completely rewritten section on brain mechanisms of feeding.
• Discussion of evidence suggesting that consuming high-fructose corn syrup increases the risk of obesity.

Chapter 11
• Revised discussion of hormonal effects on intellectual performance.
• New study included that shows that one gene controlling vasopressin can alter social behaviors, causing male meadow voles to establish pair bonds with females and help them rear babies—a behavior never previously seen in males of this species.
• Much revised discussion of gender identity and gender-differentiated behaviors in people with congenital adrenal hyperplasia.
• Several updates about homosexuality, including: the probability of homosexuality is increased among boys with older brothers; and men with a homosexual orientation have female relatives who have a greater than average number of children—a possible explanation for maintenance of a gene promoting homosexuality.

Chapter 12
• Substantial updating and revision throughout this chapter.
• Clarification of the James-Lange theory and evidence relevant to it.
• Monkeys with low serotonin turnover become aggressive and are likely to die young, but if they survive, they are likely to achieve dominant status.
• The human amygdala responds most strongly to emotional stimuli that are sufficiently ambiguous to require processing.
• People with amygdala damage fail to identify fear in photographs because they focus their vision almost entirely on the nose and mouth.
• Genetic variance in the amygdala probably contributes to variance in predisposition to anxiety disorders.
• Stress module: Deleted the discussion of psychosomatic illness and expanded discussion of stress and the immune system.
• New evidence indicates that people with a smaller than average hippocampus are predisposed to PTSD.

Chapter 13
• New studies on patient H.M.: If tested carefully, he shows slight evidence of new declarative memories since his operation, although no new episodic memories.
• Brief new discussion of individual differences in the hippocampus and their relationship to differences in memory.
• Reorganized discussion of consolidation of memory.
• Updates added on Alzheimer's disease, including some new prospects for treatment.

Chapter 14
• New section on the genetics of handedness.
• Revised module on attention.

Chapter 15
• The first module now deals with substance abuse and addiction, but not the mechanisms of drugs in general. That section is now in Chapter 3.
• Greatly revised discussion of addiction, with more explanation of the distinction between wanting and liking.
• Evidence now says depression relates more to lack of happiness than to increased sadness.
• New evidence relates depression to an interaction between a gene and a series of stressful experiences.
• New evidence on the genetics of schizophrenia.
• New evidence suggests a parasitic infection in childhood can predispose someone to schizophrenia later.
• Reorganized discussion of antipsychotic drugs and their relationship to neurotransmitters.
Supplements

The CD-ROM that accompanies this text includes animations, film clips, Try It Yourself activities, quizzes, and other supplements to the text. Darin Derstine took responsibility for coordinating the CD, working with Rob Stufflebeam, University of New Orleans, and me on the new online Try It Yourself activities.

Those who adopt the book may also obtain from the publisher a copy of the Instructor's Manual, written by Cynthia Crawford, California State University at San Bernardino. The manual contains chapter outlines, class demonstrations and projects, a list of video resources, additional websites, the InfoTrac Virtual Reader, and the author's answers to the Thought Questions.

A separate print Testbank lists multiple-choice and true–false items written and assembled by Jeffrey Stowell, Eastern Illinois University. Note that the test bank includes special files of questions for a midterm and a comprehensive final exam. The test items are also available electronically on ExamView.

The Study Guide, written by Elaine M. Hull of Florida State University, may be purchased by students. Also available is the Multimedia Manager Instructor's Resource CD-ROM, written by Chris Hayashi, Southwestern College. I am grateful for the excellent work of Darin Derstine, Cynthia Crawford, Jeffrey Stowell, Elaine Hull, and Chris Hayashi.

In addition, it is possible to use technology in a variety of ways in your course with the following new products:

JoinIn™ on TurningPoint®
Exclusive from Thomson for colleges and universities . . . turn your lecture into an interactive experience for your students, using "clickers."

WebTutor™ Advantage
Save time managing your course, posting materials, incorporating multimedia, and tracking progress with this engaging, text-specific e-learning tool. Visit http://webtutor.thomsonlearning.com.

ThomsonNOW™
A powerful, assignable, personalized online learning companion that assesses individual study needs and builds focused Personalized Learning Plans that reinforce key concepts with interactive animations, text art, and more.
Acknowledgments

Let me tell you something about researchers in this field: As a rule, they are amazingly cooperative with textbook authors. Many of my colleagues sent me comments, ideas, articles, and photos. I thank especially the following:

Greg Allen, University of Texas Southwestern Medical Center
Ralph Adolphs, University of Iowa
Danny Benbassat, Ohio Northern University
Stephen L. Black, Bishop's University
Martin Elton, University of Amsterdam
Jane Flinn, George Mason University
Ronnie Halperin, SUNY-Purchase
Julio Ramirez, Davidson College
Sarah L. Pallas, Georgia State University
Alex Pouget, University of Rochester
Robert Provine, University of Maryland, Baltimore County
Roberto Refinetti, University of South Carolina

I have received an enormous number of letters and e-mail messages from students. Many included helpful suggestions; some managed to catch errors or inconsistencies that everyone else had overlooked. I thank especially the following:

Jacqueline Counotte, Leiden University, Netherlands
Terry Fidler, University of Victoria, British Columbia
Paul Kim, N. C. State University
Florian van Leeuwen, University of Groningen, Netherlands
Elizabeth Rose Murphy, North Carolina State University
Steve Williams, Massey University, New Zealand

I appreciate the helpful comments and suggestions provided by the following reviewers who commented on the 8th edition and provided suggestions for the 9th edition, and/or who reviewed the revised manuscript for the 9th edition:

Joseph Porter, Virginia Commonwealth University
Marjorie Battaglia, George Mason University
Anne Marie Brady, St. Mary's College of Maryland
Linda James, Georgian Court University
Mary Clare Kante, University of Illinois at Chicago Circle
Frank Scalzo, Bard College
Nancy Woolf, University of California, Los Angeles
Joseph Dien, University of Kansas
Derek Hamilton, University of New Mexico
Alexander Kusnecov, Rutgers University
Ronald Baenninger, College of St. Benedict/St. John's University
Christine Wagner, SUNY, Albany
Amira Rezec, Saddleback College
Brian Kelley, Bridgewater College
Lisa Baker, Western Michigan University
Steven Brown, Rockhurst University
Chris Bloom, University of Southern Indiana
Anthony Risser, University of Houston
Douglas Grimsley, University of North Carolina, Charlotte
Yuan B. Peng, University of Texas at Arlington
Carlota Ocampo, Trinity University
Ron Salazar, San Juan College
In preparing this text I have been most fortunate to work with Vicki Knight, a wise, patient, and very supportive acquisitions editor/publisher. She was especially helpful in setting priorities and planning the major thrust of this text. Kirk Bomont, my developmental editor, reads manuscripts with extraordinary care, noticing discrepancies, unclear points, and ideas that need more explanation. His work helped me enormously in the preparation of this edition. Karol Jurado, Content Project Manager, did a stellar job in coordinating the production process and working closely with all of the players, including Nancy Shammas at New Leaf Publishing Services, who provided the production service for the book and undertook the management of all of the talented people who contributed to the production of this book—a major task for a book with such a large art and photo program. Lisa Torri, the art editor, brought considerable artistic abilities that helped to compensate for my complete lack. And once again, Precision Graphics did an outstanding job with modifications on the art and new renderings. Joohee Lee handled all of the permissions, no small task for a book like this. Eric Schrader was the photo researcher; I hope you enjoy the new photos in this text as much as I do. Jennifer Wilkinson oversaw the development of supplements, such as the Instructor's Manual and test item file. I thank Vernon Boes, who managed the interior design and the cover, Tani Hasegawa for the outstanding changes to the interior design, Frank Hubert for the copyediting, Linda Dane for the proofreading, and Do Mi Stauber for the indexes. All of these people have been splendid colleagues.

I thank my wife, Jo Ellen, for keeping my spirits high, and my department head, David Martin, for his support and encouragement. I especially thank my son Sam for many discussions and many insightful ideas. Sam Kalat, coming from a background of biochemistry and computer science, has more original and insightful ideas about brain functioning than anyone else I know.

I welcome correspondence from both students and faculty. Write: James W. Kalat, Department of Psychology, Box 7650, North Carolina State University, Raleigh, NC 27695–7801, USA. E-mail: james_kalat@ncsu.edu

James W. Kalat
TO THE OWNER OF THIS BOOK:

I hope that you have found Biological Psychology, 9th Edition, useful. So that this book can be improved in a future edition, would you take the time to complete this sheet and return it? Thank you.

School and address: __________________________________________
Department: __________________________________________
Instructor's name: __________________________________________

1. What I like most about this book is: __________________________________________
2. What I like least about this book is: __________________________________________
3. My general reaction to this book is: __________________________________________
4. The name of the course in which I used this book is: __________________________________________
5. Were all of the chapters of the book assigned for you to read? __________ If not, which ones weren't? __________
6. In the space below, or on a separate sheet of paper, please write specific suggestions for improving this book and anything else you'd care to share about your experience in using this book.
DO NOT STAPLE. TAPE HERE.
DO NOT STAPLE. TAPE HERE. FOLD HERE
NO POSTAGE NECESSARY IF MAILED IN THE UNITED STATES
BUSINESS REPLY MAIL
FIRST-CLASS MAIL    PERMIT NO. 102    MONTEREY CA
POSTAGE WILL BE PAID BY ADDRESSEE
Attn: Vicki Knight, Psychology
Wadsworth/Thomson Learning 60 Garden Ct Ste 205 Monterey CA 93940-9967
FOLD HERE
OPTIONAL: Your name: __________________________________________________ Date: __________________________ May we quote you, either in promotion for Biological Psychology, 9th Edition, or in future publishing ventures? Yes: __________ No: __________ Sincerely yours, James W. Kalat
Biological Psychology
1
The Major Issues
Chapter Outline

Module 1.1  The Mind–Brain Relationship
   Biological Explanations of Behavior
   The Brain and Conscious Experience
   Research Approaches
   Career Opportunities
   In Closing: Your Brain and Your Experience
   Summary • Answers to Stop & Check Questions • Thought Questions • Author's Answer About Machine Consciousness
Module 1.2  The Genetics of Behavior
   Mendelian Genetics
   Heredity and Environment
   The Evolution of Behavior
   In Closing: Genes and Behavior
   Summary • Answers to Stop & Check Questions • Thought Questions
Module 1.3  The Use of Animals in Research
   Reasons for Animal Research
   The Ethical Debate
   In Closing: Humans and Animals
   Summary • Answers to Stop & Check Questions
Terms • Suggestions for Further Reading • Websites to Explore • Exploring Biological Psychology CD • ThomsonNOW

Main Ideas

1. Biological explanations of behavior fall into several categories, including physiology, development, evolution, and function.
2. Nearly all current philosophers and neuroscientists reject the idea that the mind exists independently of the physical brain. Still, the question remains as to how and why brain activity is connected to consciousness.
3. The expression of a given gene depends on the environment and on interactions with other genes.
4. Research with nonhuman animals can produce important information, but it sometimes inflicts distress or pain on the animals. Whether to proceed with a given experiment can be a difficult ethical issue.

It is often said that Man is unique among animals. It is worth looking at this term "unique" before we discuss our subject proper. The word may in this context have two slightly different meanings. It may mean: Man is strikingly different—he is not identical with any animal. This is of course true. It is true also of all other animals: Each species, even each individual is unique in this sense. But the term is also often used in a more absolute sense: Man is so different, so "essentially different" (whatever that means) that the gap between him and animals cannot possibly be bridged—he is something altogether new. Used in this absolute sense the term is scientifically meaningless. Its use also reveals and may reinforce conceit, and it leads to complacency and defeatism because it assumes that it will be futile even to search for animal roots. It is prejudging the issue.
Niko Tinbergen (1973, p. 161)

Opposite: It is tempting to try to "get inside the mind" of people and other animals, to imagine what they are thinking or feeling. In contrast, biological psychologists try to explain behavior in terms of its physiology, development, evolution, and function. Source: George D. Lepp/CORBIS
Biological psychologists study the "animal roots" of behavior, relating actions and experiences to genetics and physiology. In this chapter, we consider three major issues and themes: the relationship between mind and brain, the roles of nature and nurture, and the ethics of research. We also briefly consider prospects for further study.
Module 1.1
The Mind–Brain Relationship
B
iological psychology is the study of the physiological, evolutionary, and developmental mechanisms of behavior and experience. It is approximately synonymous with the terms biopsychology, psychobiology, physiological psychology, and behavioral neuroscience. The term biological psychology emphasizes that the goal is to relate the biology to issues of psychology. Neuroscience as a field certainly includes much that is relevant to behavior, but it also includes more detail about anatomy and chemistry. Much of biological psychology is devoted to studying brain functioning. Figure 1.1 offers a view of the human brain from the top (what anatomists call a dorsal view) and from the bottom (a ventral view). The labels point to a few important areas that will become more familiar as you proceed through this text. An inspection of brain areas reveals distinct subareas. At the microscopic level, we find two kinds of cells: the neurons (Figure 1.2) and the glia. Neurons, which convey messages to one another and to muscles and glands, vary enormously in size, shape, and functions. The glia, generally smaller than neurons, have many functions
but do not convey information over great distances. The activities of neurons and glia somehow produce an enormous wealth of behavior and experience. This book is about researchers' attempts to elaborate on that word "somehow."

Biological psychology is the most interesting topic in the world. No doubt every professor or textbook author feels that way about his or her field. But the others are wrong. Biological psychology really is the most interesting topic. When I make this statement to a group of students, I always get a laugh. But when I say it to a group of biological psychologists or neuroscientists, they nod their heads in agreement, and I do in fact mean it seriously. I do not mean that memorizing the names and functions of brain parts and chemicals is unusually interesting. I mean that biological psychology addresses some fascinating issues that should excite anyone who is curious about nature. Actually, I shall back off a bit and say that biological psychology is about tied with cosmology as the most interesting topic. Cosmologists ask why the universe exists at all: Why is there something instead of nothing? And if there is something, why is it this particular kind of something? Biological psychologists ask: Given the existence of this universe composed of matter and energy, why is there consciousness? Is it a necessary function of the brain or an accident? How does the brain produce consciousness and why? Researchers also ask more specific questions such as: What genes, prenatal environment, or other factors predispose some people to psychological disorders? Is there any hope for recovery after brain damage? And what enables humans to learn language so easily?

Figure 1.1 A dorsal view (from above) and a ventral view (from below) of the human brain. The brain has an enormous number of divisions and subareas; the labels point to a few of the main ones on the surface of the brain. (Photos: Dr. Dana Copeland)

Figure 1.2 Neurons, greatly magnified (© Dan McCoy/Rainbow)
© Dorr/Premium Stock/PictureQuest
The brain is composed of individual cells called neurons and glia.
Researchers continue to debate exactly what good yawning does. Yawning is a behavior that even people do without knowing its purpose.
Unlike all other birds, doves and pigeons can drink with their heads down. (Others fill their mouths and then raise their heads.) A physiological explanation would describe these birds' unusual pattern of nerves and throat muscles. An evolutionary explanation states that all doves and pigeons share this behavioral capacity because they inherited their genes from a common ancestor. (© Steve Maslowski/Photo Researchers)

Biological Explanations of Behavior

Commonsense explanations of behavior often refer to intentional goals such as, "He did this because he was trying to . . ." or "She did that because she wanted to . . . ." But frequently, we have no reason to assume any intentions. A 4-month-old bird migrating south for the first time presumably does not know why; the next spring, when she lays an egg, sits on it, and defends it from predators, again she probably doesn't know why. Even humans don't always know the reasons for their own behaviors. (Yawning and laughter are two examples. You do them but can you explain what good they accomplish?)

In contrast to commonsense explanations, biological explanations of behavior fall into four categories: physiological, ontogenetic, evolutionary, and functional (Tinbergen, 1951). A physiological explanation relates a behavior to the activity of the brain and other organs. It deals with the machinery of the body—for example, the chemical reactions that enable hormones to influence brain activity and the routes by which brain activity ultimately controls muscle contractions.

The term ontogenetic comes from Greek roots meaning "to be" and "origin" (or genesis). Thus, an ontogenetic explanation describes the development of a structure or a behavior. It traces the influences of genes, nutrition, experiences, and their interactions in molding behavior. For example, the ability to inhibit an impulse develops gradually from infancy through the teenage years, reflecting gradual maturation of the frontal parts of the brain.

An evolutionary explanation reconstructs the evolutionary history of a structure or behavior. For example, frightened people sometimes get "goose bumps"—erections of the hairs, especially on their arms and shoulders. Goose bumps are useless to humans because our shoulder and arm hairs are so short. In most other mammals, however, hair erection makes a frightened animal look larger and more intimidating (Figure 1.3). Thus, an evolutionary explanation of human goose bumps is that the behavior evolved in our remote ancestors and we inherited the mechanism.

A functional explanation describes why a structure or behavior evolved as it did. Within a small, isolated population, a gene can spread by accident through a process called genetic drift. (For example, a dominant male with many offspring spreads all his genes, including neutral and harmful ones.) However, the larger the population, the less powerful is genetic drift. Thus, a gene that becomes common in a large population
presumably has provided some advantage—at least in the past, though not necessarily in today's environment. A functional explanation identifies that advantage. For example, many species have an appearance that matches their background (Figure 1.4). A functional explanation is that camouflaged appearance makes the animal inconspicuous to predators. Some species use their behavior as part of the camouflage. For example, zone-tailed hawks, which live in Mexico and parts of the southwest United States, fly among vultures and hold their wings in the same posture as vultures. Small mammals and birds run for cover when they see a hawk, but they learn to ignore vultures, which are no threat unless an animal is already dying. Because the zone-tailed hawks resemble vultures in both appearance and flight behavior, their prey disregard them, enabling the hawks to pick up easy meals (W. S. Clark, 2004).

Functional explanations of human behavior are often controversial because many behaviors alleged to be part of our evolutionary heritage could have been learned instead. Also, we know little about the lives of early humans. We shall examine one example of such controversies in Chapter 11.

To contrast the four types of biological explanations, consider how they all apply to one example, birdsong (Catchpole & Slater, 1995):
• Physiological explanation: A particular area of a songbird brain grows under the influence of testosterone; hence, it is larger in breeding males than in females or immature birds. That brain area enables a mature male to sing.
• Ontogenetic explanation: In many species, a young male bird learns its song by listening to adult males. Development of the song requires both the genes that prepare him to learn the song and the opportunity to hear the appropriate song during a sensitive period early in life.
• Evolutionary explanation: In certain cases, one species' song closely resembles that of another species. For example, dunlins and Baird's sandpipers, two shorebird species, give their calls in distinct pulses, unlike other shorebirds. This similarity suggests that the two evolved from a single ancestor.
• Functional explanation: In most bird species, only the male sings, and he sings only during the reproductive season and only in his territory. The functions of the song are to attract females and warn away other males. As a rule, a bird sings loudly enough to be heard only in the territory he can defend. In short, birds have evolved tendencies to sing in ways that improve their chances for mating.

We improve our understanding of behavior when we can combine as many of these approaches as possible. That is, ideally, we should understand the body mechanisms that produce the behavior, how it develops within the individual, how it evolved, and what function it serves.

Figure 1.3 A frightened cat with erect hairs. When a frightened mammal erects its hairs, it looks larger and more intimidating. (Consider, for example, the "Halloween cat.") Frightened humans sometimes also erect their body hairs, forming goose bumps. An evolutionary explanation for goose bumps is that we inherited the tendency from ancestors who had enough hair for the behavior to be useful.

STOP & CHECK
1. How does an evolutionary explanation differ from a functional explanation?
Check your answer on page 10.

The Brain and Conscious Experience

Explaining birdsong in terms of hormones, brain activity, and evolutionary selection probably does not trouble you. But how would you feel about a physical explanation of your own actions and experiences? Suppose you say, "I became frightened because I saw a man with a gun," and a neuroscientist says, "You became frightened because of increased electrochemical activity in the central amygdala of your brain." Is one explanation right and the other wrong? Or if they are both right, what is the connection between them?

Biological explanations of behavior raise the mind–body or mind–brain problem: What is the relationship between the mind and the brain? The most widespread view among nonscientists is, no doubt, dualism, the belief that mind and body are different kinds of substance—mental substance and physical substance—that exist independently. The French philosopher René Descartes defended dualism but recognized the vexing issue of how a mind that is not made of material could influence a physical brain. He proposed that mind and brain interact at a single point in space, which he suggested was the pineal gland, the smallest unpaired structure he could find in the brain (Figure 1.5). Although we credit Descartes with the first explicit defense of dualism, he hardly originated the idea. Mental experience seems so different from the physical actions of the brain that most people take it for granted that mind and brain are different. However, nearly all current philosophers and neuroscientists reject dualism. The decisive objection is that dualism conflicts with a consistently documented observation in physics, known as the law of the conservation of matter and energy: So far as we can tell, the total amount of matter and energy in the universe is fixed. Matter can transform into energy or energy into matter, but neither one appears out of nothing or disappears into nothing. Because any movement of matter requires energy, a mind that is not composed of matter or energy would seem unable to make anything happen, even a muscle movement.

Figure 1.5 René Descartes's conception of brain and mind. Descartes understood how light from an object reached the retinas at the back of the eyes. From there, he assumed the information was all channeled back to the pineal gland, a small unpaired organ in the brain. (Source: From Descartes' Treatise on Man)

The alternative to dualism is monism, the belief that the universe consists of only one kind of substance. Various forms of monism are possible, grouped into the following categories:
• materialism: the view that everything that exists is material, or physical. According to one version of
this view ("eliminative materialism"), mental events don't exist at all, and the common folk psychology based on minds and mental activity is fundamentally mistaken. However, most of us find it difficult to believe that our mind is a figment of our imagination! A more plausible version is that we will eventually find a way to explain all psychological experiences in purely physical terms.
• mentalism: the view that only the mind really exists and that the physical world could not exist unless some mind were aware of it. It is not easy to test this idea—go ahead and try!—but few philosophers or scientists take it seriously.
• identity position: the view that mental processes are the same thing as certain kinds of brain processes but are described in different terms. In other words, the universe has only one kind of substance, but it includes both material and mental aspects. By analogy, one could describe the Mona Lisa as an extraordinary painting of a woman with a subtle smile, or one could list the exact color and brightness of each point on the painting. Although the two descriptions appear very different, they refer to the same object. According to the identity position, every mental experience is a brain activity, even though descriptions of thoughts sound very different from descriptions of brain activities. For example, the fright you feel when someone threatens you is the same thing as a certain pattern of activity in your brain. Note how the definition of the identity position is worded. It does not say that the mind is the brain. Mind is brain activity. In the same sense, fire is not really a "thing." Fire is what is happening to something. Similarly, mental activity is what is happening in the brain.

Can we be sure that monism is correct? No. However, we adopt it as the most reasonable working hypothesis. That is, we conduct research on the assumption of monism and see how far we can go. As you will find throughout this text, experiences and brain activities appear inseparable. Stimulation of any brain area provokes an experience, and any experience evokes brain activity. You can still use terms like mind or mental activity if you make it clear that you regard these terms as describing an aspect of brain activity. However, if you lapse into using mind to mean a ghostlike something that is neither matter nor energy, don't underestimate the scientific and philosophical arguments that can be marshaled against you (Dennett, 1991). (Does a belief in monism mean that we are lowering our evaluation of minds? Maybe not. Maybe we are elevating our concept of the material world.)

Even if we accept the monist position, however, we have done little more than restate the mind–brain problem. The questions remain: Why is consciousness a property of brain activity? Is it important or just an accident, like the noises a machine makes? What kind
of brain activity produces consciousness? How does it produce consciousness? And what is consciousness, anyway? (You may have noted the lack of a definition. A firm, clear definition of consciousness is elusive. The same is true for many other terms that we feel comfortable using. For example, you know what time means, but can you define it?) The function (if any) of consciousness is far from obvious. Several psychologists have argued that many nonhuman species also have consciousness because their behavior is so complex and adaptive that we cannot explain it without assuming consciousness (e.g., Griffin, 2001). Others have argued that even if other animals are conscious, their consciousness explains nothing. Consciousness may not be a useful scientific concept (Wynne, 2004). Indeed, because we cannot observe consciousness, none of us knows for sure even that other people are conscious, much less other species. According to the position known as solipsism (SOL-ip-sizm, based on the Latin words solus and ipse, meaning “alone” and “self”), I alone exist, or I alone am conscious. Other people are either like robots or like the characters in a dream. (Solipsists don’t form organizations because each is convinced that all other solipsists are wrong!) Although few people take solipsism seriously, it is hard to imagine evidence to refute it. The difficulty of knowing whether other people (or animals) have conscious experiences is known as the problem of other minds. David Chalmers (1995) has proposed that in discussions of consciousness we distinguish between what he calls the easy problems and the hard problem. The easy problems pertain to many phenomena that we call consciousness, such as the difference between wakefulness and sleep and the mechanisms that enable us to focus our attention. These issues pose all the usual difficulties of any scientific question but no philosophical problems. In contrast, the hard problem concerns why and how any kind of brain activity is associated with consciousness. As Chalmers (1995) put it, “Why doesn’t all this information-processing go on ‘in the dark,’ free of any inner feel?” (p. 203). That is, why does brain activity feel like anything at all? Many scientists (Crick & Koch, 2004) and philosophers (Chalmers, 2004) agree that we have no way to answer that question, at least at present. We don’t even have a clear hypothesis to test. The best we can do is determine what brain activity is necessary or sufficient for consciousness. After we do so, maybe we will see a way to explain why that brain activity is associated with consciousness, or maybe we won’t.1
1Note the phrasing “is associated with consciousness,” instead of “leads to consciousness” or “causes consciousness.” According to the identity position, brain activity does not cause or lead to consciousness any more than consciousness leads to brain activity. Each is the same as the other.
Why are most of us not solipsists? That is, why do you (I assume) believe that other people have minds? We reason by analogy: “Other people look and act much like me, so they probably have internal experiences much like mine.” How far do we extend this analogy? Chimpanzees look and act somewhat like humans. Most of us, but not all, are willing to assume that chimpanzees are conscious. If chimpanzees are conscious, how about dogs? Rats? Fish? Insects? Trees? Rocks? Most people draw the line at some point in this sequence, but not all at the same point. A similar problem arises in human development: At what point between the fertilized egg and early childhood does someone become conscious? At what point in dying does someone finally lose consciousness? And how could we possibly know? Speculating on these issues leads most people to conclude that consciousness cannot be a yes-or-no question. We can draw no sharp dividing line between those having consciousness and those lacking it. Consciousness must have evolved gradually and presumably develops gradually within an individual (Edelman, 2001). What about computers and robots? Every year, they get more sophisticated and complicated. What if someone builds a robot that can walk, talk, carry on an intelligent conversation, laugh at jokes, and so forth? At what point, if any, would we decide that the robot is conscious? You might respond, “Never. A robot is just a machine that is programmed to do what it does.” True, but the human brain is also a machine. (A machine is anything that converts one kind of energy into another.) And we, too, are programmed—by our genes and our past experiences. (We did not create ourselves.) Perhaps no robot ever can be conscious, if consciousness is a property of carbon chemistry (Searle, 1992). Can you imagine any conceivable evidence that would persuade you that a robot is conscious? If you are curious about my answer, check page 11. But think about your own answer first.
STOP & CHECK
2. What are the three major versions of monism?
3. What is meant by the "hard problem"?
Check your answers on page 10.
Research Approaches

Even if the "hard problem" is unanswerable at present, it might be possible to determine which kinds of
brain activity are associated with consciousness (Crick & Koch, 2004). For the most part, researchers have assumed that even though you might be conscious of something and unable to report it in words (e.g., as infants are), if you can describe something you saw or heard, then you must have been conscious of it. Based on that operational definition of consciousness,2 it is possible to do research on the brain activities related to consciousness. Let’s consider two examples. One clever study used this approach: Suppose we could present a visual stimulus that people consciously perceived on some occasions but not others. We could then determine which brain activities differed between the occasions with and without consciousness. The researchers flashed a word on a screen for 29 milliseconds (ms). In some cases, it was preceded and followed by a blank screen:
GROVE
In these cases, people identified the word almost 90% of the time. In other cases, however, the researchers flashed a word for the same 29 ms but preceded and followed it with a masking pattern:
SALTY
Under these conditions, people almost never identify the word and usually say they didn’t see any word at all. Although the physical stimulus was the same in both cases—a word flashed for 29 ms—it reached consciousness in the first case but not the second. Using a brain scan technique that we shall examine in Chapter 4, the researchers found that the conscious stimulus activated the same brain areas as the unconscious stimulus, but more strongly. Also, the conscious stimuli activated a broader range of areas, presumably because strong activation in the initial areas sent excitation to other areas (Dehaene et al., 2001). These data imply that consciousness of a stimulus depends on the amount of brain activity. At any moment, a variety of stimuli act on your brain; in effect, they compete for control (Dehaene & Changeux, 2004). Right now, for example, you have the visual sensations from this page, as well as auditory, touch, and other sensations. You cannot be simultaneously conscious of all of them. You might, however, direct
2 An operational definition tells how to measure something or how to determine whether it is present or absent.
your attention to one stimulus or another. For example, right now what is your conscious experience of your left foot? Until you read that question, you probably had no awareness of that foot, but now you do. Because you directed your attention to it, activity has increased in the brain area that receives sensation from the left foot (Lambie & Marcel, 2002). Becoming conscious of something means letting its information take over more of your brain’s activity.
STOP & CHECK
4. In the experiment by Dehaene et al., how were the conscious and unconscious stimuli similar? How were they different?
5. In this experiment, how did the brain's responses differ to the conscious and unconscious stimuli?
Check your answers on page 10.
Here is a second kind of research. Look at Figure 1.6, but hold it so close to your eyes that your nose touches the page, right between the two circles. Better yet, look at the two parts through a pair of tubes, such as the tubes inside rolls of paper towels or toilet paper. You will see red and black vertical lines with your left eye and green and black horizontal lines with your right eye. (Close one eye and then the other to make sure you see completely different patterns with the two eyes.) Seeing something is closely related to seeing where it is, and the red vertical lines cannot be in the same place as the green horizontal lines. Because your brain cannot perceive both patterns in the same location, your perception alternates. For a while, you
see the red and black lines, and then gradually the green and black invade your consciousness. Then your perception shifts back to the red and black. Sometimes you will see red lines in part of the visual field and green lines in the other. These shifts, known as binocular rivalry, are slow and gradual, sweeping from one side to another. The stimulus seen by each eye evokes a particular pattern of brain response, which researchers can measure with the brain scanning devices described in Chapter 4. As that first perception fades and the stimulus seen by the other eye replaces it, the first pattern of brain activity fades also, and a different pattern of activity replaces it. Each shift in perception is accompanied by a shift in the pattern of activity over a large portion of the brain (Cosmelli et al., 2004; Lee, Blake, & Heeger, 2005). (A detail of procedure: One way to mark a pattern of brain activity is to use a stimulus that oscillates. For example, someone might watch a stationary pattern with one eye and something flashing with the other. When the person perceives the flashing stimulus, brain activity has a rhythm that matches the rate of flash.) By monitoring brain activity, a researcher can literally "read your mind" in this limited way—knowing which of two views you perceive at a given moment. What this result says about consciousness is that not every physical stimulus reaches consciousness. To become conscious, it has to control the activity over a significant area of the brain.

The overall point is that research on the biological basis of consciousness may be possible after all. Technological advances enable us to do research that would have been impossible in the past; future methods may facilitate still more possibilities.
Figure 1.6 Binocular rivalry. If possible, look at the two parts through tubes, such as those from the inside of rolls of toilet paper or paper towels. Otherwise, touch your nose to the paper between the two parts so that your left eye sees one pattern while your right eye sees the other. The two views will compete for your consciousness, and your perception will alternate between them.

Career Opportunities
If you want to consider a career related to biological psychology, you have a range of options. The relevant careers fall into two categories—research and therapy. Table 1.1 describes some of the major fields. A research position ordinarily requires a PhD in psychology, biology, neuroscience, or other related field. People with a master’s or bachelor’s degree might work in a research laboratory but would not direct it. Many people with a PhD hold college or university positions in which they perform some combination of teaching and research. Depending on the institution and the individual, the balance can range from almost all teaching to almost all research. Other individuals have pure research positions in laboratories sponsored by the government, drug companies, or other industries. Fields of therapy include clinical psychology, counseling psychology, school psychology, several specializations of medicine, and allied medical practice such
Table 1.1 Fields of Specialization

Research Fields (research positions ordinarily require a PhD; researchers are employed by universities, hospitals, pharmaceutical firms, and research institutes)
Neuroscientist: Studies the anatomy, biochemistry, or physiology of the nervous system. (This broad term includes any of the next five, as well as other specialties not listed.)
Behavioral neuroscientist (almost synonyms: psychobiologist, biopsychologist, or physiological psychologist): Investigates how functioning of the brain and other organs influences behavior.
Cognitive neuroscientist: Uses brain research, such as scans of brain anatomy or activity, to analyze and explore people's knowledge, thinking, and problem solving.
Neuropsychologist: Conducts behavioral tests to determine the abilities and disabilities of people with various kinds of brain damage and changes in their condition over time. Most neuropsychologists have a mixture of psychological and medical training; they work in hospitals and clinics.
Psychophysiologist: Measures heart rate, breathing rate, brain waves, and other body processes and how they vary from one person to another or one situation to another.
Neurochemist: Investigates the chemical reactions in the brain.
Comparative psychologist (almost synonyms: ethologist, animal behaviorist): Compares the behaviors of different species and tries to relate them to their habitats and ways of life.
Evolutionary psychologist (almost synonym: sociobiologist): Relates behaviors, especially social behaviors, including those of humans, to the functions they have served and, therefore, the presumed selective pressures that caused them to evolve.

Practitioner Fields of Psychology (in most cases, their work is not directly related to neuroscience; however, practitioners often need to understand it enough to communicate with a client's physician)
Clinical psychologist: Requires PhD or PsyD. Employed by hospital, clinic, private practice, or college. Helps people with emotional problems.
Counseling psychologist: Requires PhD or PsyD. Employed by hospital, clinic, private practice, or college. Helps people make educational, vocational, and other decisions.
School psychologist: Requires master's degree or PhD. Most are employed by a school system. Identifies educational needs of schoolchildren, devises a plan to meet the needs, and then helps teachers implement it.

Medical Fields (practicing medicine requires an MD plus about 4 years of additional study and practice in a specialization; physicians are employed by hospitals, clinics, and medical schools and in private practice; some conduct research in addition to seeing patients)
Neurologist: Treats people with brain damage or diseases of the brain.
Neurosurgeon: Performs brain surgery.
Psychiatrist: Helps people with emotional distress or troublesome behaviors, sometimes using drugs or other medical procedures.

Allied Medical Field (these fields ordinarily require a master's degree or more; practitioners are employed by hospitals, clinics, private practice, and medical schools)
Physical therapist: Provides exercise and other treatments to help people with muscle or nerve problems, pain, or anything else that impairs movement.
Occupational therapist: Helps people improve their ability to perform functions of daily life, for example, after a stroke.
Social worker: Helps people deal with personal and family problems. The activities of a clinical social worker overlap those of a clinical psychologist.
as physical therapy. These various fields of practice range from neurologists (who deal exclusively with brain disorders) to social workers and clinical psychologists (who need to distinguish between adjustment problems and possible signs of brain disorder).
Anyone who pursues a career in research needs to stay up to date on new developments by attending conventions, consulting with colleagues, and reading the primary research journals, such as Journal of Neuroscience, Neurology, Behavioral Neuroscience, Brain
Research, Nature Neuroscience, and Archives of General Psychiatry. However, what if you are entering a field on the outskirts of neuroscience, such as clinical psychology, school psychology, social work, or physical therapy? In that case, you probably don't want to wade through technical journal articles, but you do want to stay current on major developments, at least enough to converse intelligently with medical colleagues. I recommend the journal Cerebrum, published by the Dana Press, 745 Fifth Avenue, Suite 700, New York, NY 10151. Their website is http://www.dana.org. Cerebrum provides well-written, thought-provoking articles related to neuroscience or biological psychology, accessible to nonspecialists. In many ways, it is like Scientific American but limited to the topic of brain and behavior.
Module 1.1
In Closing: Your Brain and Your Experience

In many ways, I have been "cheating" in this module, like giving you dessert first and saving your vegetables for later. The mind–brain issue is an exciting and challenging question, but we cannot go far with it until we back up and discuss the elements of how the nervous system works. The goals in this module have been to preview the kinds of questions researchers hope to answer and to motivate the disciplined study you will need in the next few chapters. Biological psychologists are ambitious, hoping to explain as much as possible of psychology in terms of brain processes, genes, and the like. The guiding assumption is that the pattern of activity that occurs in your brain when you see a rabbit is your perception of a rabbit; the pattern that occurs when you feel fear is your fear. This is not to say that "your brain physiology controls you" any more than one should say that "you control your brain." Rather, your brain is you! The rest of this book explores how far we can go with this guiding assumption.
Summary
1. Biological psychologists try to answer four types of questions about any given behavior: How does it relate to the physiology of the brain and other organs? How does it develop within the individual? How did the capacity for the behavior evolve? And why did the capacity for this behavior evolve? (That is, what function does it serve?) (p. 3)
2. Biological explanations of behavior do not necessarily assume that the individual understands the purpose or function of the behavior. (p. 3)
3. Philosophers and scientists continue to address the mind–brain or mind–body relationship. Dualism, the popular view that the mind exists separately from the brain, is opposed by the principle that only matter and energy can affect other matter and energy. (p. 5)
4. Nearly all philosophers and scientists who have addressed the mind–brain problem favor some version of monism, the belief that the universe consists of only one kind of substance. (p. 6)
5. No one has found a way to answer the "hard question" of why brain activity is related to mental experience at all. However, new research techniques facilitate studies on what types of brain activity are necessary for consciousness. For example, a stimulus that becomes conscious activates the relevant brain areas more strongly than a similar stimulus that does not reach consciousness. (p. 6)
Answers to STOP & CHECK Questions
1. An evolutionary explanation states what evolved from what. For example, humans evolved from earlier primates and therefore have certain features that we inherited from those ancestors, even if the features are not useful to us today. A functional explanation states why something was advantageous and therefore evolutionarily selected. (p. 5)
2. The three major versions of monism are materialism (everything can be explained in physical terms), mentalism (only minds exist), and identity (the mind and the brain are the same thing). (p. 7)
3. The "hard problem" is why minds exist at all in a physical world, why there is such a thing as consciousness, and how it relates to brain activity. (p. 7)
4. The conscious and unconscious stimuli were physically the same (a word flashed on the screen for 29 ms). The difference was that a stimulus did not become conscious if it was preceded and followed by an interfering pattern. (p. 8)
5. If a stimulus became conscious, it activated the same brain areas as an unconscious stimulus, but more strongly. (p. 8)
Thought Questions
1. What would you say or do to try to convince a solipsist that you are conscious?
2. Now suppose a robot just said and did the same things you did in question 1. Will you be convinced that it is conscious?
(Thought Questions are intended to spark thought and discussion. The text does not directly answer any of them, although it may imply or suggest an answer in some cases. In other cases, there may be several possible answers.)

Author's Answer About Machine Consciousness (p. 7)
Here is a possibility similar to a proposal by J. R. Searle (1992): Suppose someone suffers damage to part of the visual cortex of the brain and becomes blind to part of the visual field. Now, engineers design artificial brain circuits to replace the damaged cells. Impulses from the eyes are routed to this device, which processes the
information and sends electrical impulses to healthy portions of the brain that ordinarily get input from the damaged brain area. After this device is installed, the person sees the field that used to be blind, remarking, “Ah! Now I can see it again! I see shapes, colors, movement—the whole thing, just as I used to!” Evidently, the machine has enabled conscious perception of vision. Then, the person suffers still more brain damage, and engineers replace even more of the visual cortex with artificial circuits. Once again, the person assures us that everything looks the same as before. Next, engineers install a machine to replace a damaged auditory cortex, and the person reports normal hearing. One by one, additional brain areas are damaged and replaced by machines; in each case, the behavior returns to normal and the person reports having normal experiences, just as before the damage. Piece by piece, the entire brain is replaced. At that point, I would say that the machine itself is conscious. Note that all this discussion assumes that these artificial brain circuits and transplants are possible. No one knows whether they will be. The point is merely to show what kind of evidence might persuade us that a machine is conscious.
Module 1.2
The Genetics of Behavior
Everything you do depends on both your genes and your environment. Without your genes or without an adequate environment, you would not exist. So far, no problem. The controversies arise when we discuss how strongly genes and environment affect various differences among people. For example, do differences in human intelligence depend mostly on genetic differences, mostly on environmental influences, or on both about equally? Similar issues arise for sexual orientation, alcoholism, psychological disorders, weight gain, and so forth. This module certainly does not resolve the controversies, but it should help you understand them as they arise later in this text or in other texts. We begin with a review of elementary genetics. Readers already familiar with the concepts may skim over the first three pages.

Mendelian Genetics

Prior to the work of Gregor Mendel, a late-19th-century monk, scientists thought that inheritance was a blending process in which the properties of the sperm and the egg simply mixed, much as one might mix two colors of paint. Mendel demonstrated that inheritance occurs through genes, units of heredity that maintain their structural identity from one generation to another. As a rule, genes come in pairs because they are aligned along chromosomes (strands of genes), which also come in pairs. (As an exception to this rule, a male has unpaired X and Y chromosomes, with different genes.) A gene is a portion of a chromosome, which is composed of the double-stranded molecule deoxyribonucleic acid (DNA). A strand of DNA serves as a template (model) for the synthesis of ribonucleic acid (RNA) molecules. RNA is a single-strand chemical; one type of RNA molecule serves as a template for the synthesis of protein molecules. Figure 1.7 summarizes the main steps in translating information from DNA through RNA into proteins, which then determine the development of the organism. Some proteins form part of the structure of the body; others serve as enzymes, biological catalysts that regulate chemical reactions in the body.

Figure 1.7 How DNA controls development of the organism. The sequence of bases along a strand of DNA determines the order of bases along a strand of RNA; RNA in turn controls the sequence of amino acids in a protein molecule.

Anyone with an identical pair of genes on the two chromosomes is homozygous for that gene. An individual with an unmatched pair of genes is heterozygous
for that gene. For example, you might have a gene for blue eyes on one chromosome and a gene for brown eyes on the other. Certain genes are dominant or recessive. A dominant gene shows a strong effect in either the homozygous or heterozygous condition; a recessive gene shows its effects only in the homozygous condition. For example, someone with a gene for brown eyes (dominant) and one for blue eyes (recessive) has brown eyes but is a “carrier” for the blue-eye gene and can transmit it to a child. For a behavioral example, the gene for ability to taste moderate concentrations of phenylthiocarbamide (PTC) is dominant; the gene for low sensitivity is recessive. Only someone with two recessive genes has trouble tasting it (Wooding et al., 2004). Figure 1.8 illustrates the possible results of a mating between people who are both heterozygous for the PTCtasting gene. Because each of them has one high-taste sensitivity (T) gene,4 each can taste PTC. However, each parent transmits either a taster gene (T) or a nontaster gene (t) to a given child. Therefore, a child in this family has a 25% chance of being a homozygous (TT) taster, a 50% chance of being a heterozygous (Tt) taster, and a 25% chance of being a homozygous (tt) nontaster.
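The 25%, 50%, and 25% figures come from counting the four equally likely combinations of one allele from each parent. For readers who like to verify that kind of arithmetic by computer, here is a minimal Python sketch (not part of the text; the function name is made up for this illustration, and T and t follow the dominant/recessive lettering convention described above):

```python
from itertools import product
from collections import Counter

def cross(parent1, parent2):
    """Enumerate the equally likely offspring genotypes of a one-gene cross.

    Each parent contributes one of its two alleles, so the four egg-sperm
    combinations are equally likely.
    """
    offspring = Counter()
    for allele1, allele2 in product(parent1, parent2):
        genotype = "".join(sorted(allele1 + allele2))  # 'Tt' and 'tT' count as the same genotype
        offspring[genotype] += 1
    total = sum(offspring.values())
    return {g: n / total for g, n in offspring.items()}

# Both parents heterozygous tasters (Tt), as in Figure 1.8.
probabilities = cross("Tt", "Tt")
print(probabilities)        # {'TT': 0.25, 'Tt': 0.5, 'tt': 0.25}

# T (taster) is dominant, so any genotype containing 'T' can taste PTC.
taster_probability = sum(p for g, p in probabilities.items() if "T" in g)
print(taster_probability)   # 0.75
```

The same function can be reused for any single-gene cross, for example cross("Tt", "tt"), which gives half tasters and half nontasters.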
Chromosomes and Crossing Over

Each chromosome participates in reproduction independently of the others, and each species has a certain number of chromosomes—for example, 23 pairs in humans, 4 pairs in fruit flies. If you have a BbCc genotype, and the B and C genes are on different chromosomes, your contribution of a B or b gene is independent of whether you contribute C or c. But suppose B and C are on the same chromosome. If one chromosome has the BC combination and the other has bc, then if you contribute a B, you probably also contribute C. The exception comes about as a result of crossing over: A pair of chromosomes may break apart during reproduction and reconnect such that part of one chromosome attaches to the other part of the second chromosome. If one chromosome has the BC combination and the other chromosome has the bc combination, crossing over between the B locus (location) and the C locus leaves new chromosomes with the combinations Bc and bC. The closer the B locus is to the C locus, the less often crossing over occurs between them.
Sex-Linked and Sex-Limited Genes

The genes located on the sex chromosomes are known as sex-linked genes. All other chromosomes are autosomal chromosomes, and their genes are known as autosomal genes.

4Among geneticists, it is customary to use a capital letter to indicate the dominant gene and a lowercase letter to indicate the recessive gene.
Figure 1.8 Four equally likely outcomes of a mating between parents who are heterozygous for a given gene (Tt). A child in this family has a 25% chance of being homozygous for the dominant gene (TT), a 25% chance of being homozygous for the recessive gene (tt), and a 50% chance of being heterozygous (Tt).
In mammals, the two sex chromosomes are designated X and Y: A female mammal has two X chromosomes; a male has an X and a Y. (Unlike the arbitrary symbols B and C that I introduced to illustrate gene pairs, X and Y are standard symbols used by all geneticists.) During reproduction, the female necessarily contributes an X chromosome, and the male contributes either an X or a Y. If he contributes an X, the offspring is female; if he contributes a Y, the offspring is male. The Y chromosome is small. The human Y chromosome has genes for only 27 proteins, far fewer than other chromosomes. The X chromosome, by contrast, has genes for about 1,500 proteins (Arnold, 2004). Thus, when biologists speak of sex-linked genes, they usually mean X-linked genes.

An example of a human sex-linked gene is the recessive gene for red-green color vision deficiency. Any man with this gene on his X chromosome has red-green color deficiency because he has no other X chromosome. A woman, however, is color deficient only if she has that recessive gene on both of her X chromosomes. So, for example, if 8% of human X chromosomes contain the gene for color vision deficiency, then 8% of all men will be color-deficient, but fewer than 1% of women will be (.08 × .08 = .0064).
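That .08 × .08 calculation generalizes to any X-linked recessive trait: males express it at the allele frequency itself, females at roughly the square of that frequency (assuming the allele lands independently on each X chromosome). A minimal sketch of the arithmetic; the helper function is hypothetical, and the 8% value is just the example figure from the paragraph above:

```python
def x_linked_recessive_rates(allele_frequency):
    """Expected rates of an X-linked recessive trait, assuming the allele
    occurs independently on each X chromosome."""
    male_rate = allele_frequency           # males have only one X chromosome
    female_rate = allele_frequency ** 2    # females need the allele on both Xs
    return male_rate, female_rate

males, females = x_linked_recessive_rates(0.08)
print(f"men: {males:.1%}, women: {females:.2%}")   # men: 8.0%, women: 0.64%
```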
Distinct from sex-linked genes are the sex-limited genes, which are present in both sexes but have effects mainly or exclusively for one sex. For instance, genes control the amount of chest hair in men, breast size in women, the amount of crowing in roosters, and the rate of egg production in hens. Both sexes have those genes, but sex hormones activate them, so their effects depend on male or female hormones.
STOP & CHECK
1. Suppose you can taste PTC. If your mother can also taste it, what (if anything) can you predict about your father's ability to taste it? If your mother cannot taste it, what (if anything) can you predict about your father's ability to taste it?
2. How does a sex-linked gene differ from a sex-limited gene?
Check your answers on page 21.
Sources of Variation

If reproduction always produced offspring that were exact copies of the parents, evolution would not occur. One source of variation is recombination, a new combination of genes, some from one parent and some from the other, that yields characteristics not found in either parent. For example, a mother with curly blonde hair and a father with straight black hair could have a child with curly black hair or straight blonde hair. A more powerful source of variation is a mutation, or change in a single gene. For instance, a gene for brown eyes might mutate into a gene for blue eyes. Mutation of a given gene is a rare, random event, independent of the needs of the organism. A mutation is analogous to having an untrained person add, remove, or distort something on the blueprints for your new house. A mutation leading to an altered protein is almost always disadvantageous. A mutation that modifies the amount or timing of protein production is closer to neutral and sometimes advantageous. Many of the differences among individuals and even among species depend on quantitative variations in the expression of genes.
Heredity and Environment

Unlike PTC sensitivity and color vision deficiency, most variations in behavior depend on the combined influence of many genes and environmental influences.
You may occasionally hear someone ask about a behavior, “Which is more important, heredity or environment?” That question as stated is meaningless. Every behavior requires both heredity and environment. However, we can rephrase it meaningfully: Do the observed differences among individuals depend more on differences in heredity or differences in environment? For example, if you sing better than I do, the reason could be that you have different genes, that you had better training, or both. To determine the contributions of heredity and environment, researchers rely mainly on two kinds of evidence. First, they compare monozygotic (“from one egg,” i.e., identical) twins and dizygotic (“from two eggs,” i.e., fraternal) twins. A stronger resemblance between monozygotic than dizygotic twins suggests a genetic contribution. Second, researchers examine adopted children. Any tendency for adopted children to resemble their biological parents suggests a hereditary influence. If the variations in some characteristic depend largely on hereditary influences, the characteristic has high heritability. Based on these kinds of evidence, researchers have found evidence for a significant heritability of almost every behavior they have tested (Bouchard & McGue, 2003). A few examples are loneliness (McGuire & Clifford, 2000), neuroticism (Lake, Eaves, Maes, Heath, & Martin, 2000), television watching (Plomin, Corley, DeFries, & Fulker, 1990), and social attitudes (S. F. Posner, Baker, Heath, & Martin, 1996). About the only behavior anyone has tested that has not shown a significant heritability is religious affiliation—such as Jewish, Protestant, Catholic, or Buddhist (Eaves, Martin, & Heath, 1990).
Possible Complications

Humans are difficult research animals. Investigators cannot control people's heredity or environment, and even their best methods of estimating hereditary influences are subject to error (Bouchard & McGue, 2003; Rutter, Pickles, Murray, & Eaves, 2001). For example, it is sometimes difficult to distinguish between hereditary and prenatal influences. Consider the studies showing that biological children of parents with low IQs, criminal records, or mental illness are likely to have similar problems themselves, even if adopted by excellent parents. The parents with low IQs, criminal records, or mental illness gave the children their genes, but they also gave them their prenatal environment. In many cases, those mothers had poor diets and poor medical care during pregnancy. Many of them smoked cigarettes, drank alcohol, and used other drugs that affect a fetus's brain development. Therefore, what looks like a genetic effect could reflect influences of the prenatal environment.
Another complication: Certain environmental factors can inactivate a gene by attaching a methyl group (CH3) to it. In some cases, an early experience such as malnutrition or severe stress inactivates a gene, and then the individual passes on the inactivated gene to the next generation. Experiments have occasionally shown behavioral changes in rats based on experiences that happened to their mothers or grandmothers (Harper, 2005). Such results blur the distinction between hereditary and environmental.

Genes can also influence your behavior indirectly by changing your environment. For example, suppose your genes lead you to frequent temper tantrums. Other people—including your parents—will react harshly, giving you still further reason to feel hostile. Dickens and Flynn (2001) call this tendency a multiplier effect: If genetic or prenatal influences produce even a small increase in some activity, the early tendency will change the environment in a way that magnifies that tendency (genes or prenatal influences produce an increase of some tendency, which produces an environment that facilitates the tendency still further).
For a sports example, imagine a child born with genes promoting greater than average height, running speed, and coordination. The child shows early success at basketball, so parents and friends encourage the child to play basketball more and more. The increased practice improves skill, the skill leads to more success, and the success leads to more practice and coaching. What started as a small advantage becomes larger and larger. The same process could apply to schoolwork or any other endeavor. The outcome started with a genetic basis, but environmental reactions magnified it.
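One way to get a feel for the multiplier effect is a toy feedback loop in which skill attracts practice and practice raises skill. The Python sketch below is only an illustration of the idea, not a model from Dickens and Flynn (2001); the function name, growth rate, and starting values are arbitrary.

```python
def multiplier_effect(initial_skill, rounds=10, practice_gain=0.3):
    """Toy model: current skill attracts practice, and practice raises skill further."""
    skill = initial_skill
    for _ in range(rounds):
        practice = practice_gain * skill   # the environment responds to current skill
        skill += practice                  # practice feeds back into skill
    return skill

# Two children who start only slightly apart end up far apart in absolute terms.
print(multiplier_effect(1.00))   # about 13.8
print(multiplier_effect(1.10))   # about 15.2
```

The initial gap of 0.10 grows to more than 1 over ten rounds of feedback, which is the sense in which a small genetic or prenatal difference can be magnified by the environment it evokes.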
Environmental Modification

Even a trait with a strong hereditary influence can be modified by environmental interventions. For example, different genetic strains of mice behave differently in the elevated plus maze (Figure 1.9). Some stay almost entirely in the walled arms, like the mouse shown in the figure; others (less nervous?) venture onto the open arms. But even when different laboratories use the same genetic strains and nearly the same procedures, strains that are adventuresome in one laboratory are less active in another (Crabbe, Wahlsten, & Dudek, 1999). Evidently, the effects of the genes depend on subtle differences in procedure, such as how the investigators handle the mice or maybe even the investigators' odors. (Most behaviors do not show this much variability; the elevated plus maze appears to be an extreme example.)
For a human example, phenylketonuria (FEE-nilKEET-uhn-YOOR-ee-uh), or PKU, is a genetic inability to metabolize the amino acid phenylalanine. If PKU is not treated, the phenylalanine accumulates to toxic levels, impairing brain development and leaving children mentally retarded, restless, and irritable. Approximately 1% of Europeans carry a recessive gene for PKU; fewer Asians and almost no Africans have the gene (T. Wang et al., 1989). Although PKU is a hereditary condition, environmental interventions can modify it. Physicians in many countries routinely measure the level of phenylalanine or its metabolites in babies’ blood or urine. If a baby has high levels, indicating PKU, physicians advise the parents to put the baby on a strict low-phenylalanine diet to minimize brain damage (Waisbren, Brown, de Sonneville, & Levy, 1994). Our ability to prevent PKU provides particularly strong evidence that heritable does not mean unmodifiable. A couple of notes about PKU: The required diet is difficult. People have to avoid meats, eggs, dairy products, grains, and especially aspartame (NutraSweet), which is 50% phenylalanine. Instead, they eat an expensive formula containing all the other amino acids. Physicians long believed that children with PKU could quit the diet after a few years. Later experience has shown that high phenylalanine levels damage teenage and adult brains, too. A woman with PKU should be especially careful during pregnancy and when nursing. Even a genetically normal baby cannot handle the enormous amounts of phenylalanine that an affected mother might pass through the placenta.
STOP & CHECK
3. Adopted children whose biological parents were alcoholics have an increased probability of becoming alcoholics themselves. One possible explanation is heredity. What is another?
4. What example illustrates the point that even if some characteristic is highly heritable, a change in the environment can alter it?
Check your answers on page 21.
How Genes Affect Behavior

A biologist who speaks of a "gene for brown eyes" does not mean that the gene directly produces brown eyes. Rather, the gene produces a protein that makes the eyes brown, assuming normal health and nutrition. If we speak of a "gene for alcoholism," we should not imagine that the gene itself causes alcoholism. Rather, it produces a protein that under certain circumstances increases the probability of alcoholism. It is important to specify these circumstances as well as we can. Exactly how a gene increases the probability of a given behavior is a complex issue. In later chapters, we encounter a few examples of genes that control brain chemicals. However, genes also can affect behavior indirectly—for example, by changing the way other people treat you (Kendler, 2001). Suppose you have genes causing you to be unusually attractive. As a result, strangers smile at you, many people invite you to parties, and so forth. Their reactions to your appearance may change your personality, and if so, the genes produced their behavioral effects by altering your environment! Consequently, we should not be amazed by reports that almost every human behavior has some heritability. A gene that affects almost anything in your body will indirectly affect your choice of activities and the way other people respond.
The Evolution of Behavior

Every gene is subject to evolution by natural selection. Evolution is a change over generations in the frequencies of various genes in a population. Note that, by this definition, evolution includes any change in gene frequencies, regardless of whether it is helpful or harmful to the species in the long run. We must distinguish two questions about evolution: How did some species evolve, and how do species evolve? To ask how a species did evolve is to ask what evolved from what, basing our answers on inferences from fossils and comparisons of living species.
For example, biologists find that humans are more similar to chimpanzees than to other species, and they infer a common ancestor. Similarly, humans and chimpanzees resemble monkeys in certain ways and presumably shared an ancestor with monkeys in the remoter past. Using similar reasoning, evolutionary biologists have constructed an "evolutionary tree" that shows the relationships among various species (Figure 1.10). As new evidence becomes available, biologists change their opinions of how closely any two species are related; thus, all evolutionary trees are tentative.

Nevertheless, the question of how species do evolve is a question of how the process works, and that process is, in its basic outlines, a logical necessity. That is, given what we know about reproduction, evolution must occur. The reasoning goes as follows:
• Offspring generally resemble their parents for genetic reasons.
• Mutations and recombinations of genes occasionally introduce new heritable variations, which help or harm an individual's chance of surviving and reproducing.
• Some individuals reproduce more abundantly than others.
• Certain individuals successfully reproduce more than others do, thus passing on their genes to the next generation.
The percentages of various genes in the next generation reflect the kinds of individuals who reproduced in the previous generation. That is, any gene that is consistently associated with reproductive success will become more prevalent in later generations. You can witness and explore this principle with the Online Try It Yourself activity "Genetic Generations."

Because plant and animal breeders have long known this principle, they choose individuals with a desired trait and make them the parents of the next generation. This process is called artificial selection, and over many generations, breeders have produced exceptional race horses, hundreds of kinds of dogs, chickens that lay huge numbers of eggs, and so forth. Charles Darwin's (1859) insight was that nature also selects. If certain individuals are more successful than others in finding food, escaping enemies, attracting mates, or protecting their offspring, then their genes will become more prevalent in later generations.
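That reasoning can also be demonstrated numerically, in the spirit of the "Genetic Generations" activity mentioned above (the sketch below is not that activity, and it simplifies to a single allele tracked one generation at a time). Give carriers of an allele even a small reproductive advantage, sample each new generation from the last, and the allele's frequency climbs.

```python
import random

def simulate_selection(generations=100, population=1000,
                       start_freq=0.10, relative_fitness=1.05):
    """Track the frequency of an allele whose carriers reproduce slightly
    more successfully (relative_fitness > 1) than non-carriers."""
    freq = start_freq
    history = [freq]
    for _ in range(generations):
        # Expected frequency after selection: weight carriers by their fitness.
        weighted = freq * relative_fitness
        expected = weighted / (weighted + (1 - freq))
        # The next generation is a finite random sample, so drift is included too.
        carriers = sum(random.random() < expected for _ in range(population))
        freq = carriers / population
        history.append(freq)
    return history

random.seed(1)
trajectory = simulate_selection()
print(f"start: {trajectory[0]:.2f}, after 100 generations: {trajectory[-1]:.2f}")
```

Setting relative_fitness to 1.0 turns off selection and leaves only the random sampling from one generation to the next, which is the genetic drift described earlier; with a small population, that drift alone can carry an allele to high frequency by accident.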
Figure 1.10 Evolutionary trees (a) Evolutionary relationships among mammals, birds, and several kinds of reptiles. (b) Evolutionary relationships among various species of mammals.
Common Misunderstandings About Evolution
Let us clarify the principles of evolution by addressing a few misconceptions.
• Does the use or disuse of some structure or behavior cause an evolutionary increase or decrease in that feature? You may have heard people say something like, "Because we hardly ever use our little toes, they will get smaller and smaller in each succeeding generation." This idea is a carryover of the biologist Jean Lamarck's theory of evolution through the inheritance of acquired characteristics, known as Lamarckian evolution. According to this idea, if giraffes stretch their necks as far out as possible, their offspring will be born with longer necks. Similarly, if you exercise your arm muscles, your children will be born with bigger arm muscles, and if you fail to use your little toes, your children's little toes will be smaller than yours. However, biologists have found no mechanism for Lamarckian evolution to occur and no evidence that it does. Using or failing to use some body structure does not change the genes. (It is possible that people's little toes might shrink in future evolution if people with even smaller little toes have some advantage over other people. But we would have to wait for a mutation that decreases little toe size—without causing some other problem—and then we would have to wait for people with this mutation to outreproduce people with other genes.)
• Have humans stopped evolving? Because modern medicine can keep almost anyone alive, and because welfare programs in prosperous countries provide the necessities of life for almost everyone, some people assert that humans are no longer subject to the principle of "survival of the fittest." Therefore, the argument goes, human evolution has slowed or stopped. The flaw in this argument is that the key to evolution is not survival but reproduction. For you to spread your genes, of course you have to survive long enough to reproduce, but what counts is how many healthy children (and nieces and nephews) you have. Thus, keeping everyone alive doesn't stop human evolution. If some people have more children than others do, their genes will spread in the population.
• Does "evolution" mean "improvement"? It depends on what you mean by "improvement." By definition, evolution improves the average fitness of the population, which is operationally defined as the number of copies of one's genes that endure in later generations. For example, if you have more children than average, you are by definition evolutionarily fit, regardless of whether you are successful in any other sense. You also increase your fitness by supporting your brother, sister, nieces and nephews, or anyone else with the same genes you have. Any gene that spreads is by definition fit. However, genes that increase fitness at one time and place might be disadvantageous after a change in the environment. For example, the colorful tail feathers of the male peacock enable it to attract females but might become disadvantageous in the presence of a new predator that responds to bright colors. In other words, the genes of the current generation evolved because they were fit for previous generations; they may or may not be adaptive in the future.
Sometimes a sexual display, such as a peacock's spread of its tail feathers, leads to great reproductive success and therefore to the spread of the associated genes. In a slightly changed environment, this gene could become maladaptive. For example, if an aggressive predator with good color vision enters the range of the peacock, the bird's slow movement and colorful feathers could seal its doom.
It is possible to slow the rate of evolution but not just by keeping everyone alive. China has enacted a policy that attempts to limit each family to one child. Successful enforcement of this policy would certainly limit the possibility of genetic changes between generations.
• Does evolution act to benefit the individual or the species? Neither: It benefits the genes! In a sense, you don't use your genes to reproduce yourself; rather, your genes use you to reproduce themselves (Dawkins, 1989). A gene spreads through a population if and only if the individuals bearing that gene reproduce more than other individuals do. For example, imagine a gene that causes you to risk your life to protect your children. That gene will spread through the population, even though it endangers you personally, provided that it enables you to leave behind more surviving children than you would have otherwise. A gene that causes you to attack other members of your species to benefit your children could also spread, even though it harms the species in general, presuming that the behavior really does benefit your children, and that others of your species do not attack you or your children in retaliation.
STOP & CHECK
5. Many people believe the human appendix is useless. Should we therefore expect that it will grow smaller from one generation to the next? Check your answer on page 21.
Evolutionary Psychology
Evolutionary psychology, or sociobiology, deals with how behaviors have evolved, especially social behaviors. The emphasis is on evolutionary and functional explanations, as defined earlier—the presumed behaviors of our ancestors and why natural selection might have favored certain behavioral tendencies. The assumption is that any behavior that is characteristic of a species must have arisen through natural selection and must have provided some advantage. Although exceptions to this assumption are possible, it is at least a helpful guide to research. Consider a few examples:
• Some animal species have better color vision than others, and some have better peripheral vision. Presumably, the species with better vision need it for their way of life (see Chapter 7).
• We have brain mechanisms that cause us to sleep for a few hours each day and to cycle through several different stages of sleep. Presumably, we would not have such mechanisms unless sleep provided benefits (see Chapter 9).
• Mammals and birds devote more energy to maintaining body temperature than to all other activities combined. We would not have evolved such an expensive mechanism unless it gave us major advantages (see Chapter 11).
• Bears eat all the food they can find; small birds eat only enough to satisfy their immediate needs. Humans generally take a middle path. The different eating habits presumably relate to different needs by different species (see Chapter 11).
On the other hand, some characteristics of a species have a more debatable relationship to natural selection. Consider two examples:
• People grow old and die, with an average survival time of about 70 or 80 years under favorable circumstances. Do we deteriorate because we have genes that cause us to die and get out of the way so that we don't compete against our own children and grandchildren? Or are aging and death inevitable? Different people do age at different rates, largely for genetic reasons (Puca et al., 2001), so it is not ridiculous to hypothesize that our tendency to age and die is controlled by selective pressures of evolution. But the conclusion is hardly obvious either.
• More men than women enjoy the prospect of casual sex with multiple partners. Theorists have related this tendency to the fact that a man can have many children by impregnating many women, whereas a woman cannot multiply her children by having more sexual partners (Buss, 1994). So, can we conclude that men and women are prewired to have different sexual behaviors? As we shall explore in Chapter 11, the answer is debatable.
To further illustrate evolutionary psychology, let's consider the theoretically interesting example of altruistic behavior, an action that benefits someone other than the actor. Any gene spreads within a population if individuals with that gene reproduce more than those without it. However, a gene that encourages altruistic behavior would help other individuals to survive and perhaps spread their genes. How could a gene for altruism spread, if at all? We should begin with the question of how common altruism is. It certainly occurs in humans: We contribute to charities; we try to help people in distress; a student may explain something to a classmate who is competing for a good grade in a course. Among nonhumans, we observe abundant examples of parents devoting much effort and even risking their lives to protect their young, but altruism toward nonrelatives is rare. Even apparent altruism often has a selfish motive. For example, when a crow finds food on the ground, it caws loudly, attracting other crows that will share the food. Altruism? Not really. A bird on the ground is vulnerable to attack by cats and other enemies, and when it lowers its head to eat, it cannot see the dangers. Having other crows around means more eyes to watch for dangers. Similarly, consider meerkats (a kind of mongoose). Periodically, one or another member of any meerkat colony stands, and if it sees danger, emits an alarm call that warns the others (Figure 1.11). Its alarm call helps the others (probably including its relatives), but the one who sees the danger first and emits the alarm call is the one most likely to escape (Clutton-Brock et al., 1999). Even in humans, we have no evidence that altruism is under genetic control. Still, for the sake of illustration, suppose some gene increases altruistic behavior.
Figure 1.11 Sentinel behavior: altruistic or not? As in many other prey species, meerkats sometimes show sentinel behavior in watching for danger and warning the others. However, the meerkat that emits the alarm is the one most likely to escape the danger.
Is there any way it could spread within a population? One common reply is that most altruistic behaviors cost very little. True, but being almost harmless is not good enough; a gene spreads only if the individuals with it reproduce more than those without it. Another common reply is that the altruistic behavior benefits the species. True again, but the rebuttal is the same. A gene that benefits the species but fails to help the individual dies out with that individual. A suggestion that sounds good at first is group selection. According to this idea, altruistic groups survive better than less cooperative ones (D. S. Wilson & Sober, 1994). However, what will happen when a mutation favoring uncooperative behavior occurs within a cooperative group? If the uncooperative individual has a reproductive advantage within its group, its genes will spread. At best, group selection would produce an unstable outcome.
A better explanation is reciprocal altruism, the idea that animals help those who help them in return. Clearly, two individuals who cooperate with each other will prosper; however, reciprocal altruism requires that individuals recognize one another and learn to help only those who return the favors. Otherwise, it is easy for an uncooperative individual to accept favors, prosper greatly, and never repay the favors. In other words, reciprocal altruism requires good sensory organs and a well-developed brain. (Perhaps we now see why altruism is more common in humans than in other species.)
Another explanation is kin selection, selection for a gene because it benefits the individual's relatives. For example, a gene could spread if it caused you to risk your life to protect your children, who share many of your genes, including perhaps the altruism genes. Natural selection can favor altruism toward less close relatives—such as cousins, nephews, or nieces—if the benefit to them outweighs the cost to you (Dawkins, 1989; Hamilton, 1964; Trivers, 1985). In both humans and nonhumans, cooperative or altruistic behavior is more common toward relatives than toward unrelated individuals (Bowles & Posel, 2005; Krakauer, 2005).
At its best, evolutionary psychology leads to research that helps us understand a behavior. For example, someone notices that males of one species help with infant care and males of another species do not. The search for a functional explanation can direct researchers to explore the species' different habitats and ways of life until we understand why they behave differently. However, this approach is criticized, often with justification, when its practitioners assume that every behavior must be adaptive and then propose an explanation without testing it (Schlinger, 1996).
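The kin-selection account above says that altruism toward relatives can be favored when the benefit to them outweighs the cost to the altruist. A standard way to state that condition, though it is not spelled out in the text, is Hamilton's rule: altruism can spread when r x b > c, where r is the relatedness between actor and recipient, b is the benefit to the recipient, and c is the cost to the actor. The sketch below only illustrates the inequality; the cost and benefit numbers are hypothetical.

# Illustrative sketch of Hamilton's rule (r * b > c) for kin selection.
# The payoff numbers are hypothetical; the relatedness values are the
# standard coefficients for diploid species.

def altruism_favored(relatedness, benefit, cost):
    """True if the kin-selection condition predicts the act can be favored."""
    return relatedness * benefit > cost

relatives = {
    "own child": 0.5,
    "full sibling": 0.5,
    "niece or nephew": 0.25,
    "first cousin": 0.125,
}

cost_to_actor = 1.0         # hypothetical fitness cost of the altruistic act
benefit_to_recipient = 3.0  # hypothetical fitness benefit to the relative

for kin, r in relatives.items():
    print(kin, altruism_favored(r, benefit_to_recipient, cost_to_actor))

# With these particular numbers, helping a child or sibling is favored
# (0.5 * 3 > 1), but helping a niece, nephew, or cousin is not, which matches
# the text's point that altruism toward less close relatives is favored only
# when the benefit is large enough relative to the cost.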
STOP & CHECK
6. What are two plausible ways for possible altruistic genes to spread in a population? Check your answer on page 21.
Module 1.2 In Closing: Genes and Behavior
In the control of behavior, genes are neither all-important nor irrelevant. Certain behaviors have a high heritability, such as the ability to taste PTC. Many other behaviors are influenced by genes but also subject to strong influence by experience. Our genes and our evolution make it possible for humans to be what we are today, but they also give us the flexibility to change our behavior as circumstances warrant. Understanding the genetics of human behavior is particularly important, but also particularly difficult. Separating the roles of heredity and environment is always difficult, but especially so with humans, because
researchers have such limited control over environmental influences. Inferring human evolution is also difficult, partly because we do not know enough about the lives of our ancient ancestors. Finally, we should remember that the way things are is not necessarily the same as the way they should be. For example, even if our genes predispose us to behave in a particular way, we can still decide to try to overcome those predispositions if they do not suit the needs of modern life.
Summary
1. Genes are chemicals that maintain their integrity from one generation to the next and influence the development of the individual. A dominant gene affects development regardless of whether a person has pairs of that gene or only a single copy per cell. A recessive gene affects development only in the absence of the dominant gene. (p. 12)
2. Some behavioral differences demonstrate simple effects of dominant and recessive genes. More often, however, behavioral variations reflect the combined influences of many genes and many environmental factors. Heritability is an estimate of the amount of variation that is due to genetic variation as opposed to environmental variation. (p. 14)
3. Researchers estimate heritability of a human condition by comparing monozygotic and dizygotic twins and by comparing adopted children to their biological and adoptive parents. (p. 14)
4. The results sometimes overestimate human heritability. First, most adoption studies do not distinguish between the effects of genes and those of prenatal environment. Second, after genes produce an early increase in some behavioral tendency, that behavior may lead to a change in the environment that magnifies the tendency. (p. 14)
5. The fact that some behavior shows high heritability for a given population does not deny the possibility that a change in the environment might significantly alter the behavioral outcome. (p. 15)
6. Genes influence behavior directly by altering brain chemicals and indirectly by affecting other aspects of the body and therefore the way other people react to us. (p. 16)
7. The process of evolution through natural selection is a logical necessity because mutations sometimes occur in genes, and individuals with certain sets of genes reproduce more successfully than others do. (p. 16)
8. Evolution spreads the genes of the individuals who have reproduced the most. Therefore, if some characteristic is widespread within a population, it is reasonable to look for ways in which that characteristic is or has been adaptive. However, we cannot take it for granted that all common behaviors are adaptive; we need to do the research to test this hypothesis. (p. 19)
Answers to STOP & CHECK Questions
1. If your mother can taste PTC, we can make no predictions about your father. You may have inherited a gene from your mother that enables you to taste PTC, and because the gene is dominant, you need only one copy of the gene to taste PTC. However, if your mother cannot taste PTC, you must have inherited your ability to taste it from your father, so he must be a taster. (p. 14)
2. A sex-linked gene is on a sex chromosome (almost always the X chromosome). A sex-limited gene is on one of the other chromosomes, but it is activated by sex hormones and therefore makes its effects evident only in one sex or the other. (p. 14)
3. If the mother drank much alcohol during pregnancy, the prenatal environment may have predisposed the child to later alcoholism. (p. 16)
4. Keeping a child with the PKU gene on a strict low-phenylalanine diet prevents the mental retardation that the gene ordinarily causes. The general point is that sometimes a highly heritable condition can be modified environmentally. (p. 16)
5. No. Failure to use or need a structure does not make it become smaller in the next generation. The appendix will shrink only if people with a gene for a smaller appendix reproduce more successfully than other people do. (p. 19)
6. Altruistic genes could spread because they facilitate care for one's kin or because they facilitate exchanges of favors with others (reciprocal altruism). (p. 20)
Thought Questions
1. What human behaviors are you sure have a heritability of 0?
2. Genetic differences probably account for part of the difference between people who age slowly and gracefully and others who grow old more rapidly and die younger. Given that the genes controlling old age have their onset long after people have stopped having children, how could evolution have any effect on such genes?
Module 1.3
The Use of Animals in Research
Certain ethical disputes resist agreement. One is abortion; another is the death penalty; still another is the use of animals in research. In each case, well-meaning people on each side of the issue insist that their position is proper and ethical. The dispute is not a matter of the good guys against the bad guys; it is between two views of what is good. The animal welfare controversy is critical for biological psychology. As you will see throughout this book, research done on laboratory animals is responsible for a great deal of what we know about the brain and behavior. That research ranges from mere observation through painless experiments to studies that do inflict stress and pain. How shall we deal with the fact that on the one hand we want more knowledge and on the other hand we wish to minimize animal distress?
Reasons for Animal Research
Given that most biological psychologists and neuroscientists are primarily interested in the human brain and human behavior, why do they study nonhuman animals? Here are four reasons.
Animals are used in many kinds of research studies, some dealing with behavior and others with the functions of the nervous system.
1. The underlying mechanisms of behavior are similar across species and sometimes easier to study in a nonhuman species. If you wanted to understand a complex machine, you might begin by examining a simpler machine. We also learn about brain–behavior relationships by starting with simpler cases. The brains and behavior of nonhuman vertebrates resemble those of humans in their chemistry and anatomy (Figure 1.12). Even invertebrate nerves follow the same basic principles as our own. Much research has been conducted on squid nerves, which are thicker than human nerves and therefore easier to study.
2. We are interested in animals for their own sake. Humans are naturally curious. We want to understand why the Druids built Stonehenge, where the moon came from, how the rings of Saturn formed, and why certain animals act the way they do. Some of this research might produce practical applications, but even if it doesn't, we would like to understand the universe just for the sake of understanding.
3. What we learn about animals sheds light on human evolution. What is our place in nature? How did we come to be the way we are? One way of approaching such questions is by examining other species to see how we are the same and how we are different.
4. Certain experiments cannot use humans because of legal or ethical restrictions. For example, investigators insert electrodes into the brain cells of rats and other animals to determine the relationship between brain activity and behavior. Such experiments answer questions that investigators cannot address in any other way. They also raise an ethical issue: If the research is unacceptable with humans, shouldn't we also object to it with nonhumans?
Figure 1.12 Brains of several species The general plan and organization of the brain are similar for all mammals, even though the size varies from species to species.
The Ethical Debate
In some cases, researchers simply observe animals in nature as a function of different times of day, different seasons of the year, changes in diet, and so forth. These procedures do not even inconvenience the animals and raise no ethical problems. In other experiments, however, including many discussed in this book, animals have been subjected to brain damage, electrode implantation, injections of drugs or hormones, and so forth. Many people regard such experimentation as cruelty to animals and have reacted with tactics ranging from peaceful demonstrations to vandalizing laboratories and threatening researchers with death (Schiermeier, 1998).
The issues are difficult. On the one hand, many laboratory animals do undergo painful or debilitating procedures that are admittedly not for their own benefit. Anyone with a conscience (including scientists) is bothered by this fact. On the other hand, experimentation with animals has been critical to the medical research that led to methods for the prevention or treatment of polio, diabetes, measles, smallpox, massive burns, heart disease, and other serious conditions. Most Nobel prizes in physiology or medicine have been awarded for research conducted on nonhuman animals. The hope of finding methods to treat or prevent AIDS and various brain diseases (e.g., Alzheimer's disease) depends largely on animal research. For many questions in biological psychology, our choice is to conduct research on animals or to make much slower progress or, for certain kinds of questions, no progress at all (Figure 1.13).
Opposition to animal research ranges considerably in degree. "Minimalists" tolerate animal research under certain conditions. That is, they accept some kinds of research but wish to prohibit others depending on the probable value of the research, the amount of distress to the animal, and the type of animal. (Most people have fewer qualms about hurting an insect, say, than a dolphin.) They favor firm regulations on research. The "abolitionists" take a more extreme position and see no room for compromise. Abolitionists maintain that all animals have the same rights as humans. They regard killing an animal as murder, regardless of whether the intention is to eat it, use its fur, or gain scientific knowledge. Keeping an animal (presumably even a pet) in a cage is, in their view, slavery. Because animals cannot give informed consent to research, abolitionists insist it is wrong to use them in any way, regardless of the circumstances. According to one opponent of animal research, "We have no moral option but to bring this research to a halt. Completely. . . . We will not be satisfied until every cage is empty" (Regan, 1986, pp. 39–40). Advocates of this position sometimes claim that most animal research is painful and that it never leads to important results. However, for a true abolitionist, neither of those points really matters. Their moral imperative is that people have no right to use animals, even if the research is useful and even if it is painless.
Some abolitionists have opposed environmental protection groups as well. For example, red foxes, which humans introduced into California, so effectively rob bird nests that they have severely endangered California's least terns and clapper rails. To protect the endangered birds, the U.S. Fish and Wildlife Service began trapping and killing red foxes in the areas where endangered birds breed. Their efforts were thwarted by a ballot initiative organized by animal rights activists, who argued that killing any animal is immoral even when the motive is to protect another species from extinction (Williams, 1999). Similar objections were raised when conservationists proposed to kill the pigs (again a human-introduced species) that were destroying the habitat of native Hawaiian wildlife.
At times in the animal rights dispute, people on both sides have taken shrill "us versus them" positions. Some defenders of animal research have claimed that such research is almost always useful and seldom painful, and some opponents have argued that the research is usually painful and never useful. In fact, the truth is messier (D. Blum, 1994): Much research is both useful and painful. Those of us who value both knowledge and animal life look for compromises instead of either–or solutions. Nearly all animal researchers sympathize with the desire to minimize painful research. That is, just about everyone draws a line somewhere and says, "I will not do this experiment. The knowledge I might gain is not worth that much distress to the animals." To be sure, different researchers draw that line at different places. An organization of European researchers offered a series of proposals, which you can read at this website (see also van Zutphen, 2001):
http://www.esf.org/ftp/pdf/SciencePolicy/ESPB9.pdf
Here are a few highlights:
• Laboratory animals have both an instrumental value (as a means to an end) and an intrinsic value (for their own sake), which must be respected.
• While accepting the need for animal research, the European Science Foundation endorses the principles of reduction (using fewer animals), replacement (using other methods not requiring animals, when possible), and refinement (using less painful procedures).
• Research to improve animal welfare should be encouraged.
• Before any research starts, someone other than the researchers themselves should evaluate the research plan to consider likely benefits and suffering.
• Investigators should assume that a procedure that is painful to humans is also painful to animals, unless they have evidence to the contrary.
• Investigators should be trained in animal care, including ethics and alternative research methods.
• Journals should include in their publication policy a statement about the ethical use of animals.
Is this sort of compromise satisfactory? It is to researchers and minimalists, but true abolitionists have no interest in compromise. If you believe that keeping any animal in any cage is the moral equivalent of slavery, you won’t endorse doing it in moderation. The disagreement between abolitionists and animal researchers is a dispute between two ethical positions: “Never knowingly harm an innocent” and “Sometimes a little harm leads to a greater good.” On the one hand, permitting research has the undeniable consequence of inflicting pain or distress. On the other hand, banning the use of animals for human purposes means a great setback in medical research as well as the end of animal-to-human transplants (e.g., using pig heart valves to help people with heart diseases). For this reason, many victims of serious diseases have organized to oppose animal rights groups (Feeney, 1987). The principles of moderation and compromise are now the legal standard. In the United States, every college or other research institution that receives federal funds is required to have an Institutional Animal Care and Use Committee, composed of veterinarians, community representatives, and scientists, that evaluates proposed experiments, decides whether they are acceptable, and specifies procedures designed to minimize pain and discomfort. Similar regulations and committees govern research on human subjects. In addition, all research laboratories must abide by national laws requiring certain standards of cleanliness and animal care. Similar laws apply in other countries, and scientific journals require a statement that researchers followed all the laws and regulations in their research. Professional organizations such as the Society for Neuroscience publish guidelines for the use of animals in research (see Appendix B). The following website describes U.S. regulations and advice on animal care: http://oacu.od.nih.gov/index.htm
STOP & CHECK
1. Describe reasons biological psychologists conduct much of their research on nonhuman animals. 2. How does the “minimalist” position differ from the “abolitionist” position? Check your answers on this page.
Module 1.3 In Closing: Humans and Animals
We began this chapter with a quote from the Nobel Prize–winning biologist Niko Tinbergen. Tinbergen argued that no fundamental gulf separates humans from other animal species. Because we are similar in many ways to other species, we can learn much about ourselves from animal studies. Also because of that similarity, we identify with animals, and we wish not to hurt them. Neuroscience researchers who decide to conduct animal research do not, as a rule, take this decision lightly. They want to minimize harm to animals, but they also want to increase knowledge. They believe it is better to inflict limited distress under controlled conditions than to permit ignorance and disease to inflict much greater distress. In some cases, however, it is a difficult decision.
Summary
1. Researchers study animals because the mechanisms are sometimes easier to study in nonhumans, because they are interested in animal behavior for its own sake, because they want to understand the evolution of behavior, and because certain kinds of experiments are difficult or impossible with humans. (p. 22)
2. The ethics of using animals in research is controversial. Some research does inflict stress or pain on animals; however, many research questions can be investigated only through animal research. (p. 23)
3. Animal research today is conducted under legal and ethical controls that attempt to minimize animal distress. (p. 24)
Answers to STOP & CHECK Questions
1. Sometimes the mechanisms of behavior are easier to study in a nonhuman species. We are curious about animals for their own sake. We study animals to understand human evolution. Certain procedures are illegal or unethical with humans. (p. 25)
2. A "minimalist" wishes to limit animal research to studies with little discomfort and much potential value. An "abolitionist" wishes to eliminate all animal research, regardless of how the animals are treated or how much value the research might produce. (p. 25)
Chapter Ending
Key Terms and Activities
Terms
altruistic behavior (p. 19)
evolutionary psychology (p. 19)
monozygotic twins (p. 14)
artificial selection (p. 16)
fitness (p. 18)
multiplier effect (p. 15)
autosomal gene (p. 13)
functional explanation (p. 4)
mutation (p. 14)
binocular rivalry (p. 8)
gene (p. 12)
ontogenetic explanation (p. 4)
biological psychology (p. 2)
hard problem (p. 6)
phenylketonuria (PKU) (p. 15)
chromosome (p. 12)
heritability (p. 14)
physiological explanation (p. 3)
crossing over (p. 13)
heterozygous (p. 12)
problem of other minds (p. 6)
deoxyribonucleic acid (DNA) (p. 12)
homozygous (p. 12)
recessive (p. 13)
identity position (p. 6)
reciprocal altruism (p. 20)
dizygotic twins (p. 14)
kin selection (p. 20)
recombination (p. 14)
dominant (p. 13)
Lamarckian evolution (p. 17)
ribonucleic acid (RNA) (p. 12)
dualism (p. 5)
materialism (p. 5)
sex-limited gene (p. 14)
easy problems (p. 6)
mentalism (p. 6)
sex-linked gene (p. 13)
enzyme (p. 12)
mind–body or mind–brain problem (p. 5)
solipsism (p. 6)
evolution (p. 16)
evolutionary explanation (p. 4)
monism (p. 5)
X chromosome (p. 13)
Y chromosome (p. 13)
Suggestions for Further Reading
Gazzaniga, M. S. (1998). The mind's past. Berkeley: University of California Press. A noted neuroscientist's attempt to explain the physical origins of consciousness. This book includes a number of fascinating examples.
Koch, C. (2004). The quest for consciousness. Englewood, CO: Roberts. A scientist's attempt to make sense of the mind–brain relationship.
Sunstein, C. R., & Nussbaum, M. C. (Eds.). (2004). Animal rights: Current debates and new directions. New York: Oxford University Press. A series of essays arguing both sides of the debate about animal rights and welfare.
Websites to Explore
(Websites arise and disappear without warning. The suggestions listed in this book were available at the time the book went to press; I cannot guarantee how long they will last.)
You can go to the Biological Psychology Study Center at this address: http://psychology.wadsworth.com/book/kalatbiopsych9e/
It would help to set a bookmark for this site because it will be helpful for each chapter. In addition to sample quiz items, a dictionary of terms, and other information, it includes links to many other websites. One way to reach any of these sites is to go to the Biological Psychology Study Center, click the appropriate chapter, and then find the appropriate links to additional sites. You can also check for suggested articles available on InfoTrac College Edition. The sites for this chapter are:
National Society for Phenylketonuria Home Page http://www.nspku.org
Statement on Use of Animals in Research http://www.esf.org/ftp/pdf/SciencePolicy/ESPB9.pdf
U.S. government statement on animal care and use http://oacu.od.nih.gov/index.htm
http://www.thomsonedu.com Go to this site for the link to ThomsonNOW, your one-stop study shop. Take a Pre-Test for this chapter, and ThomsonNOW will generate a Personalized Study Plan based on your test results. The Study Plan will identify the topics you need to review and direct you to online resources to help you master these topics. You can then take a Post-Test to help you determine the concepts you have mastered and what you still need to work on.
Here are three sites that you may find helpful at many points throughout the text: Dana Foundation for brain information http://www.dana.org
Biomedical terms. If you read journal articles about biological psychology, you will encounter many terms, some of which are not defined in this text or in the online Biological Psychology dictionary for this text. To look up these additional terms, try this site: http://medical.webends.com
Founders of Neurology (biographies of major researchers) http://www.uic.edu/depts/mcne/founders
Exploring Biological Psychology CD
Binocular Rivalry (Try It Yourself)
Genetics and Evolution (Try It Yourself)
Evolutionary Studies (video)
Offspring of Parents Homozygous and Heterozygous for Brown Eyes (animation)
RNA, DNA, and Protein (animation)
Selection and Random Drift (Try It Yourself)
Critical Thinking (essay questions)
Chapter Quiz (multiple-choice questions)
2
Nerve Cells and Nerve Impulses
Chapter Outline
Module 2.1 The Cells of the Nervous System
Anatomy of Neurons and Glia
The Blood-Brain Barrier
The Nourishment of Vertebrate Neurons
In Closing: Neurons
Summary
Answers to Stop & Check Questions
Module 2.2 The Nerve Impulse
The Resting Potential of the Neuron
The Action Potential
Propagation of the Action Potential
The Myelin Sheath and Saltatory Conduction
Local Neurons
In Closing: Neural Messages
Summary
Answers to Stop & Check Questions
Thought Questions
Terms
Suggestions for Further Reading
Websites to Explore
Exploring Biological Psychology CD
ThomsonNOW
Main Ideas
1. The nervous system is composed of two kinds of cells: neurons and glia. Only the neurons transmit impulses from one location to another.
2. The larger neurons have branches, known as axons and dendrites, which can change their branching pattern as a function of experience, age, and chemical influences.
3. Many molecules in the bloodstream that can enter other body organs cannot enter the brain.
4. The action potential, an all-or-none change in the electrical potential across the membrane of a neuron, is caused by the sudden flow of sodium ions into the neuron and is followed by a flow of potassium ions out of the neuron.
5. Local neurons are small and do not have axons or action potentials. Instead, they convey information to nearby neurons by graded potentials.
A nervous system, composed of many individual cells, is in some regards like a society of people who work together and communicate with one another or even like elements that form a chemical compound. In each case, the combination has properties that are unlike those of its individual components. We begin our study of the nervous system by examining single cells; later, we examine how cells act together. Advice: Parts of this chapter and the next assume that you understand basic chemical concepts such as positively charged ions. If you need to refresh your memory, read Appendix A.
Opposite: A neuron has a long, straight axon that branches at its end and many widely branching dendrites. Source: 3D4Medical.com/Getty Images
Module 2.1
The Cells of the Nervous System
Before you could build a house, you would first assemble bricks or other construction materials. Similarly, before we can address the great philosophical questions such as the mind–brain relationship or the great practical questions of abnormal behavior, we have to start with the building blocks of the nervous system—the cells.
Figure 2.1 Estimated numbers of neurons in humans. Cerebral cortex and associated areas: 12 to 15 billion neurons; cerebellum: 70 billion neurons; spinal cord: 1 billion neurons. Because of the small size of many neurons and the variation in cell density from one spot to another, obtaining an accurate count is difficult. (Source: R. W. Williams & Herrup, 1988)
Anatomy of Neurons and Glia
The nervous system consists of two kinds of cells: neurons and glia. Neurons receive information and transmit it to other cells. Glia provide a number of functions that are difficult to summarize, and we shall defer that discussion until later in the chapter. According to one estimate, the adult human brain contains approximately 100 billion neurons (R. W. Williams & Herrup, 1988) (Figure 2.1). An accurate count would be more difficult than it is worth, and the actual number varies from person to person. The idea that the brain is composed of individual cells is now so well established that we take it for granted. However, the idea was in doubt as recently as the early 1900s. Until then, the best microscopic views revealed little detail about the organization of the brain. Observers noted long, thin fibers between one neuron's cell body and another, but they could not see whether each fiber merged into the next cell or stopped before it (Albright, Jessell, Kandel, & Posner, 2001). Then, in the late 1800s, Santiago Ramón y Cajal used newly developed staining techniques to show that a small gap separates the tips of one neuron's fibers from the surface of the next neuron. The brain, like the rest of the body, consists of individual cells.
EXTENSIONS AND APPLICATIONS
Santiago Ramón y Cajal, a Pioneer of Neuroscience
Two scientists are widely recognized as the main founders of neuroscience. One was Charles Sherrington,
whom we shall discuss in Chapter 3; the other was the Spanish investigator Santiago Ramón y Cajal (1852– 1934). (See photo and quote on the pages inside the back cover.) Cajal’s early career did not progress altogether smoothly. At one point, he was imprisoned in a solitary cell, limited to one meal a day, and taken out daily for public floggings—at the age of 10—for the crime of not paying attention during his Latin class (Cajal, 1937). (And you thought your teachers were strict!)
Cajal wanted to become an artist, but his father insisted that he study medicine as a safer way to make a living. He managed to combine the two fields, becoming an outstanding anatomical researcher and illustrator. His detailed drawings of the nervous system are still considered definitive today. Before the late 1800s, microscopy could reveal few details about the nervous system. Then the Italian investigator Camillo Golgi discovered a method of using silver salts to stain nerve cells. This method, which completely stained some cells without affecting others at all, enabled researchers to examine the structure of a single cell. Cajal used Golgi’s methods but applied them to infant brains, in which the cells are smaller and therefore easier to examine on a single slide. Cajal’s research demonstrated that nerve cells remain separate instead of merging into one another. Philosophically, we can see the appeal of the idea that neurons merge. We each experience our consciousness as undivided, not as the sum of separate parts, so it seems that all the cells in the brain should be joined together physically as one unit. How the individual cells combine their influences is a complicated and still somewhat mysterious process.
The Structures of an Animal Cell
Figure 2.2 illustrates a neuron from the cerebellum of a mouse (magnified enormously, of course). A neuron has much in common with any other cell in the body, although its shape is certainly distinctive. Let us begin with the properties that all animal cells have in common. The edge of a cell is a membrane (often called a plasma membrane), a structure that separates the inside of the cell from the outside environment. It is composed of two layers of fat molecules that are free to flow around one another, as illustrated in Figure 2.3. Most chemicals cannot cross the membrane. A few charged ions, such as sodium, potassium, calcium, and chloride, cross through specialized openings in the membrane called protein channels. Small uncharged chemicals, such as water, oxygen, carbon dioxide, and urea can diffuse across the membrane. Except for mammalian red blood cells, all animal cells have a nucleus, the structure that contains the chromosomes. A mitochondrion (pl.: mitochondria) is the structure that performs metabolic activities, providing the energy that the cell requires for all its other activities. Mitochondria require fuel and oxygen to function. Ribosomes are the sites at which the cell synthesizes new protein molecules. Proteins provide building materials for the cell and facilitate various chemical reactions. Some ribosomes float freely within the cell; others are attached to the endoplasmic reticulum, a network of thin tubes that transport newly synthesized proteins to other locations.
Figure 2.2 An electron micrograph of parts of a neuron from the cerebellum of a mouse The nucleus, membrane, and other structures are characteristic of most animal cells. The plasma membrane is the border of the neuron. Magnification approximately x 20,000. (Source: Micrograph courtesy of Dennis M. D. Landis)
Figure 2.3 The membrane of a neuron Embedded in the membrane are protein channels that permit certain ions to cross through the membrane at a controlled rate.
The Structure of a Neuron
Figure 2.4 Neurons, stained to appear dark Note the small fuzzy-looking spines on the dendrites.
Figure 2.5 The components of a vertebrate motor neuron The cell body of a motor neuron is located in the spinal cord. The various parts are not drawn to scale; in particular, a real axon is much longer in proportion to the soma.
A neuron (Figure 2.4) contains a nucleus, a membrane, mitochondria, ribosomes, and the other structures typical of animal cells. The distinctive feature of neurons is their shape. The larger neurons have these major components: dendrites, a soma (cell body), an axon, and presynaptic terminals. (The tiniest neurons lack axons and some lack well-defined dendrites.) Contrast the motor neuron in Figure 2.5 and the sensory neuron in Figure 2.6. A motor neuron has its soma in the spinal cord. It receives excitation from other neurons through its dendrites and conducts impulses along its axon to a muscle. A sensory neuron is specialized at one end to be highly sensitive to a particular type of stimulation, such as touch information from the skin. Different kinds of sensory neurons have different structures; the one shown in Figure 2.6 is a neuron conducting touch information from the skin to the spinal cord. Tiny branches lead directly from the receptors into the axon, and the cell’s soma is located on a little stalk off the main trunk. Dendrites are branching fibers that get narrower near their ends. (The term dendrite comes from a Greek root word meaning “tree”; a dendrite is shaped like a tree.) The dendrite’s surface is lined with specialized synaptic receptors, at which the dendrite receives information from other neurons. (Chapter 3 focuses on the synapses.) The greater the surface area of a dendrite, the more information it can receive. Some dendrites branch widely and therefore have a large surface area. Some also contain dendritic spines, the short outgrowths that increase the surface area available for
synapses (Figure 2.7). The shape of dendrites varies enormously from one neuron to another and can even vary from one time to another for a given neuron. The shape of the dendrite has much to do with how the dendrite combines different kinds of input (Häusser, Spruston, & Stuart, 2000).
Figure 2.6 A vertebrate sensory neuron Note that the soma is located on a stalk off the main trunk of the axon. (As in Figure 2.5, the various structures are not drawn to scale.)
Figure 2.7 Dendritic spines The dendrites of certain neurons are lined with spines, short outgrowths that receive specialized incoming information. That information apparently plays a key role in long-term changes in the neuron that mediate learning and memory. (Source: From K. M. Harris and J. K. Stevens, Society for Neuroscience, "Dendritic spines of CA1 pyramidal cells in the rat hippocampus: Serial electron microscopy with reference to their biophysical characteristics." Journal of Neuroscience, 9, 1989, 2982–2997. Copyright © 1989 Society for Neuroscience. Reprinted by permission.)
The cell body, or soma (Greek for "body"; pl.: somata), contains the nucleus, ribosomes, mitochondria, and other structures found in most cells. Much of the metabolic work of the neuron occurs here. Cell bodies of neurons range in diameter from 0.005 mm to 0.1 mm in mammals and up to a full millimeter in certain invertebrates. Like the dendrites, the cell body is covered with synapses on its surface in many neurons.
The axon is a thin fiber of constant diameter, in most cases longer than the dendrites. (The term axon comes from a Greek word meaning "axis.") The axon is the information sender of the neuron, conveying an impulse toward either other neurons or a gland or muscle. Many vertebrate axons are covered with an insulating material called a myelin sheath with interruptions known as nodes of Ranvier. Invertebrate axons do not have myelin sheaths. An axon has many branches, each of which swells at its tip, forming a presynaptic terminal, also known as an end bulb or bouton (French for "button"). This is the point from which the axon releases chemicals that cross through the junction between one neuron and the next. (Unfortunately, many structures in the nervous system have several names. As Candace Pert (1997, p. 64) has put it, "Scientists would rather use each other's toothbrushes than each other's terminology.") A neuron can have any number of dendrites, but no more than one axon, which may have branches. Axons can range to a meter or more in length, as in the case of axons from your spinal cord to your feet. In most cases, branches of the axon depart from its trunk far from the cell body, near the terminals.
Other terms associated with neurons are afferent, efferent, and intrinsic. An afferent axon brings information into a structure; an efferent axon carries information away from a structure. Every sensory neuron is an afferent to the rest of the nervous system; every motor neuron is an efferent from the nervous system. Within the nervous system, a given neuron is an efferent from the standpoint of one structure and an afferent from the standpoint of another. (You can remember that efferent starts with e as in exit; afferent starts with a as in admission.) For example, an axon that is efferent from the thalamus may be afferent to the cerebral cortex (Figure 2.8). If a cell's dendrites and axon are entirely contained within a single structure, the cell is an interneuron or intrinsic neuron of that structure. For example, an intrinsic neuron of the thalamus has all its dendrites or axons within the thalamus; it communicates only with other cells of the thalamus.
Figure 2.8 Cell structures and axons It all depends on the point of view. An axon from A to B is an efferent axon from A and an afferent axon to B, just as a train from Washington to New York is exiting Washington and approaching New York.
Variations Among Neurons
Neurons vary enormously in size, shape, and function. The shape of a given neuron determines its connections with other neurons and thereby determines its contribution to the nervous system. The wider the branching, the more connections with other neurons. The function of a neuron is closely related to its shape (Figure 2.9). For example, the dendrites of the Purkinje cell of the cerebellum (Figure 2.9a) branch extremely widely within a single plane; this cell is capable of integrating an enormous amount of incoming information. The neurons in Figures 2.9c and 2.9e also have widely branching dendrites that receive and integrate information from many sources. By contrast, certain cells in the retina (Figure 2.9d) have only short branches on their dendrites and therefore pool input from only a few sources.
Figure 2.9 The diverse shapes of neurons (a) Purkinje cell, a cell type found only in the cerebellum; (b) sensory neurons from skin to spinal cord; (c) pyramidal cell of the motor area of the cerebral cortex; (d) bipolar cell of retina of the eye; (e) Kenyon cell, from a honeybee. (Source: Part e, from R. G. Coss, Brain Research, October 1982. Reprinted by permission of R. G. Coss.)
Figure 2.10 Shapes of some glia cells Oligodendrocytes produce myelin sheaths that insulate certain vertebrate axons in the central nervous system; Schwann cells have a similar function in the periphery. The oligodendrocyte is shown here forming a segment of myelin sheath for two axons; in fact, each oligodendrocyte forms such segments for 30 to 50 axons. Astrocytes pass chemicals back and forth between neurons and blood and among neighboring neurons. Microglia proliferate in areas of brain damage and remove toxic materials. Radial glia (not shown here) guide the migration of neurons during embryological development. Glia have other functions as well.
Glia
Glia (or neuroglia), the other major cellular components of the nervous system, do not transmit information over long distances as neurons do, although they do exchange chemicals with adjacent neurons. In some cases, that exchange produces oscillations in the activity of those neurons (Nadkarni & Jung, 2003). The term glia, derived from a Greek word meaning "glue," reflects early investigators' idea that glia were like glue that held the neurons together (Somjen, 1988). Although that concept is obsolete, the term remains. Glia are smaller but also more numerous than neurons, so overall, they occupy about the same volume (Figure 2.10). Glia have many functions (Haydon, 2001). One type of glia, the star-shaped astrocytes, wrap around the presynaptic terminals of a group of functionally related axons, as shown in Figure 2.11. By taking up chemicals released by those axons and later releasing them back to the axons, an astrocyte helps synchro-
nize the activity of the axons, enabling them to send messages in waves (Angulo, Kozlov, Charpak, & Audinat, 2004; Antanitus, 1998). Astrocytes also remove waste material created when neurons die and help control the amount of blood flow to a given brain area (Mulligan & MacVicar, 2004). Microglia, very small cells, also remove waste material as well as viruses, fungi, and other microorganisms. In effect, they function like part of the immune system (Davalos et al., 2005). Oligodendrocytes (OL-igo-DEN-druh-sites) in the brain and spinal cord and Schwann cells in the periphery of the body are specialized types of glia that build the myelin sheaths that surround and insulate certain vertebrate axons. Radial glia, a type of astrocyte, guide the migration of neurons and the growth of their axons and dendrites during embryonic development. Schwann cells perform a related function after damage to axons in the periphery, guiding a regenerating axon to the appropriate target.
Figure 2.11 How an astrocyte synchronizes associated axons Branches of the astrocyte (in the center) surround the presynaptic terminals of related axons. If a few of them are active at once, the astrocyte absorbs some of the chemicals they release. It then temporarily inhibits all the axons to which it is connected. When the inhibition ceases, all of the axons are primed to respond again in synchrony. (Source: Based on Antanitus, 1998)
STOP & CHECK
1. Identify the four major structures that compose a neuron. 2. Which kind of glia cell wraps around the synaptic terminals of axons? Check your answers on page 38.
The Blood-Brain Barrier
Although the brain, like any other organ, needs to receive nutrients from the blood, many chemicals—ranging from toxins to medications—cannot cross from the blood to the brain (Hagenbuch, Gao, & Meier, 2002). The mechanism that keeps most chemicals out of the vertebrate brain is known as the blood-brain barrier. Before we examine how it works, let's consider why we need it.
Why We Need a Blood-Brain Barrier
From time to time, viruses and other harmful substances enter the body. When a virus enters a cell, mechanisms within the cell extrude a virus particle
through the membrane so that the immune system can find it. When the immune system cells attack the virus, they also kill the cell that contains it. In effect, a cell exposing a virus through its membrane says, “Look, immune system, I’m infected with this virus. Kill me and save the others.” This plan works fine if the virus-infected cell is, say, a skin cell or a blood cell, which the body replaces easily. However, with few exceptions, the vertebrate brain does not replace damaged neurons. To minimize the risk of irreparable brain damage, the body literally builds a wall along the sides of the brain’s blood vessels. This wall keeps out most viruses, bacteria, and harmful chemicals. “What happens if a virus does enter the brain?” you might ask. After all, certain viruses do break through the blood-brain barrier. The brain has ways to attack viruses or slow their reproduction (Binder & Griffin, 2001) but doesn’t kill them or the cells they inhabit. Consequently, a virus that enters your nervous system probably remains with you for life. For example, herpes viruses (responsible for chicken pox, shingles, and genital herpes) enter spinal cord cells. No matter how much the immune system attacks the herpes virus outside the nervous system, virus particles remain in the spinal cord and can emerge decades later to reinfect you. A structure called the area postrema, which is not protected by the blood-brain barrier, monitors blood chemicals that could not enter other brain areas. This structure is responsible for triggering nausea and vomiting—important responses to toxic chemicals. It is, of course, exposed to the risk of being damaged itself.
How the Blood-Brain Barrier Works
The blood-brain barrier (Figure 2.12) depends on the arrangement of endothelial cells that form the walls of the capillaries (Bundgaard, 1986; Rapoport & Robinson, 1986). Outside the brain, such cells are separated by small gaps, but in the brain, they are joined so tightly that virtually nothing passes between them. Chemicals therefore enter the brain only by crossing the membrane itself. Two categories of molecules cross the blood-brain barrier passively (without the expenditure of energy). First, small uncharged molecules, such as oxygen and carbon dioxide, cross freely. Water, a very important small molecule, crosses through special protein channels that regulate its flow (Amiry-Moghaddam & Ottersen, 2003). Second, molecules that dissolve in the fats of the membrane also cross passively. Examples include vitamins A and D, as well as various drugs that affect the brain, ranging from heroin and marijuana to antidepressant drugs. However, the blood-brain barrier excludes most viruses, bacteria, and toxins.
Figure 2.12 The blood-brain barrier. Most large molecules and electrically charged molecules cannot cross from the blood to the brain. A few small, uncharged molecules such as O2 and CO2 cross easily; so can certain fat-soluble molecules. Active transport systems pump glucose and amino acids across the membrane.

STOP & CHECK
3. What is one major advantage of having a blood-brain barrier?
4. What is a disadvantage of the blood-brain barrier?
5. Which chemicals cross the blood-brain barrier on their own?
6. Which chemicals cross the blood-brain barrier by active transport?
Check your answers on page 38.
“If the blood-brain barrier is such a good defense,” you might ask, “why don’t we have similar walls around our other organs?” The answer is that the barrier that keeps out harmful chemicals also keeps out many useful ones, including sources of nutrition. For organs that can afford to risk a viral infection, a tight barrier would be more costly than it is worth. Getting nutrition into the brain requires active transport, a protein-mediated process that expends energy to pump chemicals from the blood into the brain. Chemicals that are actively transported into the brain include glucose (the brain’s main fuel), amino acids (the building blocks of proteins), and certain vitamins and hormones (Brightman, 1997). The brain also has an active transport system for moving certain chemicals from the brain to the blood (King, Su, Chang, Zuckerman, & Pasternak, 2001).
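As a compact restatement of the rules in this section, the sketch below encodes the categories described above as a small decision function. It is only an illustration of the text’s rules; the function name and the short example lists are ours, not a standard pharmacological classification, and a real molecule’s fate depends on details of size, charge, lipid solubility, and transporters that a lookup table cannot capture.

```python
# Toy restatement of how this section says chemicals cross the blood-brain barrier.
# The example lists are drawn from the text and are illustrative, not exhaustive.

SMALL_UNCHARGED = {"oxygen", "carbon dioxide", "water"}          # cross passively
FAT_SOLUBLE = {"vitamin A", "vitamin D", "heroin", "marijuana"}  # dissolve in membrane fats
ACTIVELY_TRANSPORTED = {"glucose", "amino acids"}                # plus certain vitamins and hormones

def route_into_brain(chemical):
    """Return the route (if any) by which a chemical reaches the brain."""
    if chemical in SMALL_UNCHARGED:
        return "passive: small and uncharged"
    if chemical in FAT_SOLUBLE:
        return "passive: dissolves in the fats of the membrane"
    if chemical in ACTIVELY_TRANSPORTED:
        return "active transport: protein-mediated and energy-consuming"
    return "mostly excluded (large, charged, or not recognized by a transporter)"

for substance in ("oxygen", "glucose", "heroin", "virus particle"):
    print(substance, "->", route_into_brain(substance))
```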
The Nourishment of Vertebrate Neurons
Most cells use a variety of carbohydrates and fats for nutrition, but vertebrate neurons depend almost entirely on glucose, a simple sugar. (Cancer cells and the testis cells that make sperm also rely overwhelmingly on glucose.) The metabolic pathway that uses glucose requires oxygen; consequently, the neurons consume an enormous amount of oxygen compared with cells of other organs (Wong-Riley, 1989). Why do neurons depend so heavily on glucose? Although neurons have the enzymes necessary to metabolize fats and several sugars, glucose is practically the only nutrient that crosses the blood-brain barrier in adults. The exceptions to this rule are ketones (a kind of fat), but ketones are seldom available in large amounts (Duelli & Kuschinsky, 2001), and large amounts of ketones cause medical complications. Although neurons require glucose, a glucose shortage is rarely a problem. The liver can make glucose from many kinds of carbohydrates and amino acids, as well as from glycerol, a breakdown product from fats. An inability to use glucose can be a problem, however. Many chronic alcoholics have a diet deficient in vitamin B1, thiamine, a chemical that is necessary for the use of glucose. Prolonged thiamine deficiency can lead to death of neurons and a condition called Korsakoff’s syndrome, marked by severe memory impairments (Chapter 13).
Module 2.1
In Closing: Neurons
What does the study of individual neurons tell us about behavior? Perhaps the main lesson is that our experience and behavior do not follow from the properties of any one neuron. Just as a chemist must know about
atoms to make sense of compounds, a biological psychologist or neuroscientist must know about cells to understand the nervous system. However, the nervous system is more than the sum of the individual cells, just as water is more than the sum of oxygen and hydrogen. Our behavior emerges from the communication among neurons.
Summary
1. In the late 1800s, Santiago Ramón y Cajal used newly discovered staining techniques to establish that the nervous system is composed of separate cells, now known as neurons. (p. 30)
2. Neurons receive information and convey it to other cells. The nervous system also contains glia. (pp. 30, 35)
3. Neurons have four major parts: a cell body, dendrites, an axon, and presynaptic terminals. Their shapes vary greatly depending on their functions and their connections with other cells. (p. 32)
4. Glia do not convey information over great distances, but they aid the functioning of neurons in many ways. (p. 35)
5. Because of the blood-brain barrier, many molecules, especially large ones, cannot enter the brain. (p. 36)
6. Adult neurons rely heavily on glucose, the only nutrient that can cross the blood-brain barrier. They need thiamine (vitamin B1) to use glucose. (p. 37)

Answers to STOP & CHECK Questions
1. Dendrites, soma (cell body), axon, and presynaptic terminal (p. 36)
2. Astrocytes (p. 36)
3. The blood-brain barrier keeps out most viruses, bacteria, and other harmful substances. (p. 37)
4. The blood-brain barrier also keeps out most nutrients. (p. 37)
5. Small, uncharged molecules such as oxygen and carbon dioxide cross the blood-brain barrier passively. So do chemicals that dissolve in the fats of the membrane. (p. 37)
6. Glucose, amino acids, and some vitamins and hormones cross by active transport. (p. 37)
Module 2.2
The Nerve Impulse
Think about the axons that convey information from your feet’s touch receptors toward your spinal cord and brain. If the axons used electrical conduction, they could transfer information at a velocity approaching the speed of light. However, given that your body is made of carbon compounds and not copper wire, the strength of the impulse would decay greatly on the way to your spinal cord and brain. A touch on your shoulder would feel much stronger than a touch on your abdomen. Short people would feel their toes more strongly than tall people would.

The way your axons actually function avoids these problems. Instead of simply conducting an electrical impulse, the axon regenerates an impulse at each point. Imagine a long line of people holding hands. The first person squeezes the second person’s hand, who then squeezes the third person’s hand, and so forth. The impulse travels along the line without weakening because each person generates it anew.

Although the axon’s method of transmitting an impulse prevents a touch on your shoulder from feeling stronger than one on your toes, it introduces a different problem: Because axons transmit information at only moderate speeds (varying from less than 1 meter/second to about 100 m/s), a touch on your shoulder will reach your brain sooner than will a touch on your toes. If you get someone to touch you simultaneously on your shoulder and your toe, you probably will not notice that your brain received one stimulus before the other. In fact, if someone touches you on one hand and then the other, you won’t be sure which hand you felt first, unless the delay between touches exceeds 70 milliseconds (ms) (S. Yamamoto & Kitazawa, 2001). Your brain is not set up to register small differences in the time of arrival of touch messages. After all, why should it be? You almost never need to know whether a touch on one part of your body occurred slightly before or after a touch somewhere else.

In vision, however, your brain does need to know whether one stimulus began slightly before or after another one. If two adjacent spots on your retina—let’s call them A and B—send impulses at almost the same time, an extremely small difference in timing indicates whether a flash of light moved from A to B or
from B to A. To detect movement as accurately as possible, your visual system compensates for the fact that some parts of the retina are slightly closer to your brain than other parts are. Without some sort of compensation, simultaneous flashes arriving at two spots on your retina would reach your brain at different times, and you might perceive a flash of light moving from one spot to the other. What prevents that illusion is the fact that axons from more distant parts of your retina transmit impulses slightly faster than those closer to the brain (L. R. Stanford, 1987)! In short, the properties of impulse conduction in an axon are well adapted to the exact needs for information transfer in the nervous system. Let’s now examine the mechanics of impulse transmission.
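A rough calculation makes the timing argument concrete. In the sketch below, the only numbers taken from the text are the range of conduction velocities (less than 1 m/s up to about 100 m/s) and the roughly 70-ms limit for judging which of two touches came first; the path lengths from toe and shoulder to the brain are assumed round values chosen purely for illustration.

```python
# Arrival-time difference between a touch on the toe and a touch on the shoulder,
# if both signals travel at the same conduction velocity.
# Path lengths are assumed illustrative values; velocities come from the text.

def delay_ms(distance_m, velocity_m_per_s):
    """Travel time in milliseconds for an impulse covering the given distance."""
    return 1000.0 * distance_m / velocity_m_per_s

TOE_PATH_M, SHOULDER_PATH_M = 1.8, 0.3    # assumed distances to the brain
for v in (1.0, 10.0, 100.0):              # m/s, within the range the text gives
    gap = delay_ms(TOE_PATH_M, v) - delay_ms(SHOULDER_PATH_M, v)
    verdict = "larger than" if gap > 70 else "smaller than"
    print(f"at {v:5.1f} m/s the toe signal lags by {gap:7.1f} ms ({verdict} the ~70 ms limit)")
```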
The Resting Potential of the Neuron
The membrane of a neuron maintains an electrical gradient, a difference in electrical charge between the inside and outside of the cell. All parts of a neuron are covered by a membrane about 8 nanometers (nm) thick (just less than 0.00001 mm), composed of two layers (an inner layer and an outer layer) of phospholipid molecules (containing chains of fatty acids and a phosphate group). Embedded among the phospholipids are cylindrical protein molecules (see Figure 2.3, p. 32). The structure of the membrane provides it with a good combination of flexibility and firmness and retards the flow of chemicals between the inside and the outside of the cell. In the absence of any outside disturbance, the membrane maintains an electrical polarization, meaning a difference in electrical charge between two locations. Specifically, the neuron inside the membrane has a slightly negative electrical potential with respect to the outside. This difference in voltage in a resting neuron is called the resting potential. The resting potential is mainly the result of negatively charged proteins inside the cell. Researchers can measure the resting potential by inserting a very thin microelectrode into the cell body,
Figure 2.13 Methods for recording activity of a neuron (a) Diagram of the apparatus and a sample recording. (b) A microelectrode and stained neurons magnified hundreds of times by a light microscope.
as Figure 2.13 shows. The diameter of the electrode must be as small as possible so that it can enter the cell without causing damage. By far the most common electrode is a fine glass tube filled with a concentrated salt solution and tapering to a tip diameter of 0.0005 mm or less. This electrode, inserted into the neuron, is connected to recording equipment. A reference electrode placed somewhere outside the cell completes the circuit. Connecting the electrodes to a voltmeter, we find that the neuron’s interior has a negative potential relative to its exterior. The actual potential varies from one neuron to another; a typical level is –70 millivolts (mV), but it can be either higher or lower than that.
Forces Acting on Sodium and Potassium Ions
If charged ions could flow freely across the membrane, the membrane would depolarize at once. However, the membrane is selectively permeable—that is, some chemicals can pass through it more freely than others can. (This selectivity is analogous to the blood-brain barrier, but it is not the same thing.) Most large or electrically charged ions and molecules cannot cross the membrane at all. Oxygen, carbon dioxide, urea, and water cross freely through channels that are always open. A few biologically important ions, such as sodium, potassium, calcium, and chloride, cross through membrane channels (or gates) that are sometimes open and sometimes closed. When the membrane is at rest, the sodium channels are closed, preventing almost all sodium flow. These channels are shown in Figure 2.14. As we shall see in Chapter 3, certain kinds of stimulation can open the sodium channels. When the membrane is at rest, potassium channels are nearly but not entirely closed, so potassium flows slowly.

Figure 2.14 Ion channels in the membrane of a neuron. When a channel opens, it permits one kind of ion to cross the membrane. When it closes, it prevents passage of that ion.

Sodium ions are more than ten times more concentrated outside the membrane than inside because of the sodium-potassium pump, a protein complex that repeatedly transports three sodium ions out of the cell while drawing two potassium ions into it. The sodium-potassium pump is an active transport mechanism requiring energy. Various poisons can stop it, as can an interruption of blood flow. The sodium-potassium pump is effective only because of the selective permeability of the membrane, which prevents the sodium ions that were pumped out of the neuron from leaking right back in again. As it is, the sodium ions that are pumped out stay out. However, some of the potassium ions pumped into the neuron do leak out, carrying a positive charge with them. That leakage increases the electrical gradient across the membrane, as shown in Figure 2.15.

Figure 2.15 The sodium and potassium gradients for a resting membrane. Sodium ions are more concentrated outside the neuron; potassium ions are more concentrated inside. Protein and chloride ions (not shown) bear negative charges inside the cell. At rest, very few sodium ions cross the membrane except by the sodium-potassium pump. Potassium tends to flow into the cell because of an electrical gradient but tends to flow out because of the concentration gradient.

When the neuron is at rest, two forces act on sodium, both tending to push it into the cell. First, consider the electrical gradient. Sodium is positively charged and the inside of the cell is negatively charged. Opposite electrical charges attract, so the electrical gradient tends to pull sodium into the cell. Second, consider the concentration gradient, the difference in distribution of ions across the membrane. Sodium is more concentrated outside than inside, so just by the laws of probability, sodium is more likely to enter the cell than to leave it. (By analogy, imagine two rooms connected by a door. There are 100 cats in room A and only 10 in room B. Cats are more likely to move from A to B than from B to A. The same principle applies to the movement of sodium.) Given that both the electrical gradient and the concentration gradient tend to move sodium ions into the cell, sodium certainly would move rapidly if it had the chance. However, the sodium channels are closed when the membrane is at rest, so almost no sodium flows except for the sodium pushed out of the cell by the sodium-potassium pump.

Potassium, however, is subject to competing forces. Potassium is positively charged and the inside of the cell is negatively charged, so the electrical gradient tends to pull potassium in. However, potassium is more concentrated inside the cell than outside, so the concentration gradient tends to drive it out. If the potassium gates were wide open, potassium would flow mostly out of the cell but not rapidly. That is, for potassium, the electrical gradient and concentration gradient are almost in balance. (The sodium-potassium pump keeps pulling potassium in, so the two gradients cannot get completely in balance.) The cell has negative ions too, of course, especially chloride. However, chloride is not actively pumped in or out, and its channels are not voltage dependent, so chloride ions are not the key to the action potential.

Why a Resting Potential?
Presumably, evolution could have equipped us with neurons that were electrically neutral at rest. The resting potential must provide enough benefit to justify the energy cost of the sodium-potassium pump. The advantage is that the resting potential prepares the neuron to respond rapidly to a stimulus. As we shall see in the next section, excitation of the neuron opens channels that let sodium enter the cell explosively. Because the membrane did its work in advance by maintaining the concentration gradient for sodium, the cell is prepared to respond strongly and rapidly to a stimulus. The resting potential of a neuron can be compared to a poised bow and arrow: An archer who pulls the bow in advance and then waits is ready to fire as soon as the appropriate moment comes. Evolution has applied the same strategy to the neuron.
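The claim that potassium’s electrical gradient and concentration gradient are almost in balance can be made quantitative with the Nernst relation, a standard result from electrochemistry that this chapter does not introduce. In the sketch below, the relation itself is textbook physics, but the ion concentrations are assumed typical mammalian values, not numbers given in this text.

```python
import math

def equilibrium_potential_mV(conc_out_mM, conc_in_mM, temp_c=37.0, charge=1):
    """Membrane voltage at which an ion's electrical and concentration gradients
    exactly balance (the Nernst equilibrium potential), in millivolts."""
    R, F = 8.314, 96485.0                      # gas constant, Faraday constant
    T = temp_c + 273.15
    return 1000.0 * (R * T) / (charge * F) * math.log(conc_out_mM / conc_in_mM)

# Assumed typical mammalian concentrations (mM); not taken from this chapter.
print("K+  balances near %+.0f mV" % equilibrium_potential_mV(5.0, 140.0))    # about -89 mV
print("Na+ balances near %+.0f mV" % equilibrium_potential_mV(145.0, 15.0))   # about +61 mV
```

Near a typical resting potential of about –70 mV, potassium sits close to its balance point, so only a small net flow occurs, whereas sodium is far from its balance point, so both gradients push it inward whenever its channels open, exactly the situation described above.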
STOP & CHECK
1. When the membrane is at rest, are the sodium ions more concentrated inside the cell or outside? Where are the potassium ions more concentrated?
2. When the membrane is at rest, what tends to drive the potassium ions out of the cell? What tends to draw them into the cell?
Check your answers on page 48.

The Action Potential
The resting potential remains stable until the neuron is stimulated. Ordinarily, stimulation of the neuron takes place at synapses, which we consider in Chapter 3. In the laboratory, it is also possible to stimulate a neuron by inserting an electrode into it and applying current. We can measure a neuron’s potential with a microelectrode, as shown in Figure 2.13b. When an axon’s membrane is at rest, the recordings show a steady negative potential inside the axon. If we now use an additional electrode to apply a negative charge, we can further increase the negative charge inside the neuron. The change is called hyperpolarization, which means increased polarization. As soon as the artificial stimulation ceases, the charge returns to its original resting level. [The recording shows a brief dip below the resting potential, followed by a return to the resting level.]

Now, let us apply a current for a slight depolarization of the neuron—that is, reduction of its polarization toward zero. If we apply a small depolarizing current, the potential rises slightly and then returns to the resting level as soon as the stimulation ceases. With a slightly stronger depolarizing current, the potential rises slightly higher, but again, it returns to the resting level as soon as the stimulation ceases.

Now let us see what happens when we apply a still stronger current: Any stimulation beyond a certain level, called the threshold of excitation, produces a sudden, massive depolarization of the membrane. When the potential reaches the threshold, the membrane suddenly opens its sodium channels and permits a rapid, massive flow of ions across the membrane. The potential then shoots up far beyond the strength of the stimulus. [The recording shows a spike that rises to about +30 mV before returning toward the resting level.]

Any subthreshold stimulation produces a small response proportional to the amount of current. Any stimulation beyond the threshold, regardless of how far beyond, produces the same response, like the one just shown. That response, a rapid depolarization and slight reversal of the usual polarization, is referred to as an action potential. The peak of the action potential, shown as +30 mV in this illustration, varies from one axon to another, but it is nearly constant for a given axon.

STOP & CHECK
3. What is the difference between a hyperpolarization and a depolarization?
4. What is the relationship between the threshold and an action potential?
Check your answers on page 48.
The Molecular Basis of the Action Potential
Remember that both the electrical gradient and the concentration gradient tend to drive sodium ions into the neuron. If sodium ions could flow freely across the membrane, they would enter rapidly. Ordinarily, the membrane is almost impermeable to sodium, but during the action potential, its permeability increases sharply. The membrane proteins that control sodium entry are voltage-activated channels, membrane channels whose permeability depends on the voltage difference across the membrane. At the resting potential, the channels are closed. As the membrane becomes slightly depolarized, the sodium channels begin to open and sodium flows more freely. If the depolarization is less than the threshold, sodium crosses the membrane only slightly more than usual. When the potential across the membrane reaches threshold, the sodium channels open wide. Sodium ions rush into the neuron explosively until the electrical potential across the membrane passes beyond zero to a reversed polarity. [The recording shows the potential rising past 0 mV to a brief reversed polarity before falling again, all within about 1 ms.]

Figure 2.16 The movement of sodium and potassium ions during an action potential. Sodium ions cross during the peak of the action potential and potassium ions cross later in the opposite direction, returning the membrane to its original polarization. (The figure plots the resulting electrical potential, the rate of entry of sodium into the neuron, and the rate of exit of potassium from the neuron, over about 1 ms.)
Compared to the total number of sodium ions in and around the axon, only a tiny percentage cross the membrane during an action potential. Even at the peak of the action potential, sodium ions continue to be far more concentrated outside the neuron than inside. An action potential increases the sodium concentration inside a neuron by far less than 1%. Because of the persisting concentration gradient, sodium ions should still tend to diffuse into the cell. However, at the peak of the action potential, the sodium gates quickly close and resist reopening for about the next millisecond. After the peak of the action potential, what brings the membrane back to its original state of polarization? The answer is not the sodium-potassium pump, which is too slow for this purpose. After the action potential is underway, the potassium channels open. Potassium ions flow out of the axon simply because they are much more concentrated inside than outside and they are no longer held inside by a negative charge. As they flow
out of the axon, they carry with them a positive charge. Because the potassium channels open wider than usual and remain open after the sodium channels close, enough potassium ions leave to drive the membrane beyond the normal resting level to a temporary hyperpolarization. Figure 2.16 summarizes the movements of ions during an action potential. At the end of this process, the membrane has returned to its resting potential and everything is back to normal, except that the inside of the neuron has slightly more sodium ions and slightly fewer potassium ions than before. Eventually, the sodium-potassium pump restores the original distribution of ions, but that process takes time. In fact, after an unusually rapid series of action potentials, the pump cannot keep up with the action, and sodium may begin to accumulate
within the axon. Excessive buildup of sodium can be toxic to a cell. (Excessive stimulation occurs only under abnormal conditions, however, such as during a stroke or after the use of certain drugs. Don’t worry that thinking too hard will explode your brain cells!) For the neuron to function properly, sodium and potassium must flow across the membrane at just the right pace. Scorpion venom attacks the nervous system by keeping sodium channels open and closing potassium channels (Pappone & Cahalan, 1987; Strichartz, Rando, & Wang, 1987). As a result, the membrane goes into a prolonged depolarization and accumulates dangerously high amounts of sodium. Local anesthetic drugs, such as Novocain and Xylocaine, attach to the sodium channels of the membrane, preventing sodium ions from entering (Ragsdale, McPhee, Scheuer, & Catterall, 1994). In doing so, the drugs block action potentials. If anesthetics are applied to sensory nerves carrying pain messages, they prevent the messages from reaching the brain. To explore the action potential further and try some virtual experiments on the membrane, explore this website: http://www2.neuroscience.umn.edu/eanwebsite/metaneuron.htm
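The MetaNeuron program linked above simulates the membrane properly; the sketch below is a much cruder stand-in, meant only to mimic the qualitative sequence described in this module: a sub-threshold depolarization decays away, while any depolarization past threshold produces the same stereotyped spike followed by a brief hyperpolarization. The resting level and peak are the illustrative values used in this chapter, the threshold value and decay rule are assumptions made here for illustration, and nothing below is a real membrane model.

```python
# Crude qualitative sketch of threshold behavior; not a Hodgkin-Huxley model.
REST, THRESHOLD, PEAK, UNDERSHOOT = -70.0, -55.0, 30.0, -80.0   # mV; illustrative values

def respond(depolarization_mv):
    """Return a short voltage trajectory (mV) after a brief depolarizing stimulus."""
    v = REST + depolarization_mv
    if v < THRESHOLD:
        # Sub-threshold: a graded response that simply decays back toward rest.
        trace = []
        while abs(v - REST) > 0.5:
            trace.append(round(v, 1))
            v = REST + (v - REST) * 0.5          # arbitrary decay rate
        return trace + [REST]
    # Supra-threshold: the same all-or-none spike, regardless of stimulus size.
    return [THRESHOLD, PEAK, UNDERSHOOT, REST]

print(respond(5.0))    # small stimulus: decays without a spike
print(respond(20.0))   # crosses threshold: full spike to about +30 mV
print(respond(40.0))   # stronger stimulus: the identical spike (all-or-none)
```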
STOP & CHECK
5. During the rise of the action potential, do sodium ions move into the cell or out of it? Why?
6. As the membrane reaches the peak of the action potential, what ionic movement brings the potential down to the original resting potential?
Check your answers on page 48.
The All-or-None Law
Action potentials occur only in axons and cell bodies. When the voltage across an axon membrane reaches a certain level of depolarization (the threshold), voltage-activated sodium channels open wide to let sodium enter rapidly, and the incoming sodium depolarizes the membrane still further. Dendrites can be depolarized, but they don’t have voltage-activated sodium channels, so opening the channels a little, letting in a little sodium, doesn’t cause them to open even more and let in still more sodium. Thus, dendrites don’t produce action potentials. For a given neuron, all action potentials are approximately equal in amplitude (intensity) and velocity under normal circumstances. This is the all-or-none law: The amplitude and velocity of an action potential are independent of the intensity of the stimulus
that initiated it. By analogy, imagine flushing a toilet: You have to press with at least a certain strength (the threshold), but pressing even harder does not make the toilet flush any faster or more vigorously.

The all-or-none law puts some constraints on how an axon can send a message. To signal the difference between a weak stimulus and a strong stimulus, the axon can’t send bigger or faster action potentials. All it can change is the timing. By analogy, suppose you agree to exchange coded messages with someone in another building who can see your window, by occasionally flicking your lights on and off. The two of you might agree, for example, to indicate some kind of danger by the frequency of flashes. (The more flashes, the more danger.) You could also convey information by a rhythm. Flash-flash . . . long pause . . . flash-flash might mean something different from flash . . . pause . . . flash . . . pause . . . flash . . . pause . . . flash. The nervous system uses both of these kinds of codes. Researchers have long known that a greater frequency of action potentials per second indicates a “stronger” stimulus. In some cases, a different rhythm of response also carries information (Ikegaya et al., 2004; Oswald, Chacron, Doiron, Bastian, & Maler, 2004). For example, an axon might show one rhythm of responses for sweet tastes and a different rhythm for bitter tastes (Di Lorenzo, Hallock, & Kennedy, 2003).
The Refractory Period
While the electrical potential across the membrane is returning from its peak toward the resting point, it is still above the threshold. Why doesn’t the cell produce another action potential during this period? Immediately after an action potential, the cell is in a refractory period during which it resists the production of further action potentials. In the first part of this period, the absolute refractory period, the membrane cannot produce an action potential, regardless of the stimulation. During the second part, the relative refractory period, a stronger than usual stimulus is necessary to initiate an action potential. The refractory period is based on two mechanisms: The sodium channels are closed, and potassium is flowing out of the cell at a faster than usual rate. Most of the neurons that have been tested have an absolute refractory period of about 1 ms and a relative refractory period of another 2–4 ms. (To return to the toilet analogy, there is a short time right after you flush a toilet when you cannot make it flush again—an absolute refractory period. Then follows a period when it is possible but difficult to flush it again—a relative refractory period—before it returns to normal.)
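One consequence of the refractory period lends itself to a one-line calculation, and it is the arithmetic behind Stop & Check question 9 below: if an axon cannot begin a new action potential until its absolute refractory period has ended, the maximum firing rate is simply the reciprocal of that period. The helper below assumes that idealization and nothing more.

```python
# Maximum firing rate implied by an absolute refractory period (idealized).
def max_rate_per_second(absolute_refractory_ms):
    """A new action potential can begin only after the absolute refractory period ends."""
    return 1000.0 / absolute_refractory_ms

print(max_rate_per_second(1.0))   # 1000.0 -> about 1,000 action potentials per second
print(max_rate_per_second(5.0))   # 200.0  -> about 200 per second
```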
STOP & CHECK
7. State the all-or-none law.
8. Does the all-or-none law apply to dendrites?
9. Suppose researchers find that axon A can produce up to 1,000 action potentials per second (at least briefly, with maximum stimulation), but axon B can never produce more than 200 per second (regardless of the strength of the stimulus). What could we conclude about the refractory periods of the two axons?
10. Distinguish between the absolute refractory period and the relative refractory period.
Check your answers on page 48.
Propagation of the Action Potential
Up to this point, we have dealt with the action potential at one location on the axon. Now let us consider how it moves down the axon toward some other cell. Remember that it is important for axons to convey impulses without any loss of strength over distance. In a motor neuron, an action potential begins on the axon hillock,² a swelling where the axon exits the soma (see Figure 2.5, p. 32). Each point along the membrane regenerates the action potential in much the same way that it was generated initially. During the action potential, sodium ions enter a point on the axon. Temporarily, that location is positively charged in comparison with neighboring areas along the axon. The positive ions flow down the axon and across the membrane, as shown in Figure 2.17. Other things being equal, the greater the diameter of the axon, the faster the ions flow (because of decreased resistance). The positive charges now inside the membrane slightly depolarize the adjacent areas of the membrane, causing the next area to reach its threshold and regenerate the action potential. In this manner, the action potential travels like a wave along the axon.

²One exception is known to the rule that action potentials start at the axon hillock. One kind of neuron initiates its action potential at the first node of Ranvier (B. A. Clark, Monsivais, Branco, London, & Häusser, 2005). Almost any generalization about the nervous system has an exception, somewhere.

The term propagation of the action potential describes the transmission of an action potential down an axon. The propagation of an animal species is the production of offspring; in a sense, the action potential gives birth to a new action potential at each point along the axon. In this manner, the action potential can be just as strong at the end of the axon as it was at the beginning.

Figure 2.17 Current that enters an axon during the action potential flows down the axon, depolarizing adjacent areas of the membrane. The current flows more easily through thicker axons. Behind the area of sodium entry, potassium ions exit.

The action potential is much slower than electrical conduction because it requires the diffusion of sodium ions at successive points along the axon. Electrical conduction in a copper wire with free electrons travels at a rate approaching the speed of light, 300 million meters per second (m/s). In an axon, transmission relies on the flow of charged ions through a water medium. In thin axons, action potentials travel at a velocity of less than 1 m/s. Thicker axons and those covered with an insulating shield of myelin conduct with greater velocities. Let us reexamine Figure 2.17 for a moment. What is to prevent the electrical charge from flowing in the
direction opposite that in which the action potential is traveling? Nothing. In fact, the electrical charge does flow in both directions. In that case, what prevents an action potential near the center of an axon from reinvading the areas that it has just passed? The answer is that the areas just passed are still in their refractory period.
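A toy model shows why the refractory period keeps the impulse moving one way even though charge spreads in both directions. In the sketch below, the axon is reduced to a chain of segments that are resting, firing, or refractory; the three-state chain and its timing are a simplification invented for illustration, not a biophysical model.

```python
# Toy propagation model: each axon segment is 'rest', 'fire', or 'refractory'.
# A firing segment excites both neighbors, but only resting segments can fire,
# so the impulse cannot reinvade the refractory region just behind it.

def step(states):
    new = []
    for i, s in enumerate(states):
        if s == "fire":
            new.append("refractory")
        elif s == "refractory":
            new.append("rest")
        else:  # resting: fires if either neighbor is currently firing
            neighbors = states[max(0, i - 1):i] + states[i + 1:i + 2]
            new.append("fire" if "fire" in neighbors else "rest")
    return new

axon = ["fire"] + ["rest"] * 9      # impulse starts at the axon hillock (left end)
symbols = {"rest": ".", "fire": "*", "refractory": "o"}
for _ in range(10):
    print("".join(symbols[s] for s in axon))   # the '*' moves steadily to the right
    axon = step(axon)
```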
The Myelin Sheath and Saltatory Conduction
The thinnest axons conduct impulses at less than 1 m/s. Increasing the diameters increases conduction velocity but only up to about 10 m/s. At that speed, an impulse from a giraffe’s foot takes about half a second to reach its brain. At the slower speeds of thinner unmyelinated axons, a giraffe’s brain could be seconds out of date on what was happening to its feet. In some vertebrate axons, sheaths of myelin, an insulating material composed of fats and proteins, increase speed up to about 100 m/s.

Consider the following analogy. Suppose it is my job to carry written messages over a distance of 3 kilometers (km) without using any mechanical device. Taking each message and running with it would be reliable but slow, like the propagation of an action potential along an unmyelinated axon. I could try tying each message to a ball and throwing it, but I cannot throw a ball even close to 3 km. The ideal compromise is to station people at moderate distances along the 3 km and throw the message-bearing ball from person to person until it reaches its destination.

The principle behind myelinated axons, those covered with a myelin sheath, is the same. Myelinated axons, found only in vertebrates, are covered with a coating composed mostly of fats. The myelin sheath is interrupted at intervals of approximately 1 mm by short unmyelinated sections of axon called nodes of Ranvier (RAHN-vee-ay), as shown in Figure 2.18. Each node is only about 1 micrometer wide.

Figure 2.18 An axon surrounded by a myelin sheath and interrupted by nodes of Ranvier. The inset shows a cross-section through both the axon and the myelin sheath. Magnification approximately ×30,000. The anatomy is distorted here to show several nodes; in fact, the distance between nodes is generally about 100 times as large as the nodes themselves.

Suppose that an action potential is initiated at the axon hillock and propagated along the axon until it reaches the first myelin segment. The action potential cannot regenerate along the membrane between nodes because sodium channels are virtually absent between nodes (Catterall, 1984). After an action potential occurs at a node, sodium ions that enter the axon diffuse within the axon, repelling positive ions that were already present and thus pushing a chain of positive ions along the axon to the next node, where they regenerate the action potential (Figure 2.19). This flow of ions is considerably faster than the regeneration of an action potential at each point along the axon. The jumping of action potentials from node to node is referred to as saltatory conduction, from the Latin word saltare, meaning “to jump.” (The same root shows up in the word somersault.) In addition to providing very rapid conduction of impulses, saltatory conduction has the benefit of conserving energy: Instead of admitting sodium ions at every point along the axon and then having to pump them out via the sodium-potassium pump, a myelinated axon admits sodium only at its nodes.

Figure 2.19 Saltatory conduction in a myelinated axon. An action potential at the node triggers flow of current to the next node, where the membrane regenerates the action potential.

Some diseases, including multiple sclerosis, destroy myelin sheaths, thereby slowing action potentials or stopping them altogether. An axon that has lost its myelin is not the same as one that has never had myelin. A myelinated axon loses its sodium channels between the nodes (Waxman & Ritchie, 1985). After the axon loses myelin, it still lacks sodium channels in the areas previously covered with myelin, and most action potentials die out between one node and the next. People with multiple sclerosis suffer a variety of impairments, including poor muscle coordination. For an additional review of action potentials, visit this website: http://faculty.washington.edu/chudler/ap.html
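To line up the velocities quoted in this section, the sketch below computes travel time over an assumed 5-meter path (on the order of the giraffe example, since 10 m/s taking about half a second implies a path of roughly that length). Only the velocities come from the text; the path length is an illustrative assumption.

```python
# Travel time over a long path at the conduction velocities quoted in this section.
PATH_M = 5.0   # assumed illustrative path length (roughly the giraffe example)

velocities_m_per_s = {
    "thin unmyelinated axon (<1 m/s)": 1.0,
    "thick unmyelinated axon (~10 m/s)": 10.0,
    "myelinated axon (~100 m/s)": 100.0,
}
for label, v in velocities_m_per_s.items():
    print(f"{label}: {PATH_M / v:.2f} s")   # 5.00 s, 0.50 s, 0.05 s
```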
STOP & CHECK
11. In a myelinated axon, how would the action potential be affected if the nodes were much closer together? How might it be affected if the nodes were much farther apart?
Check your answers on page 49.
Local Neurons
The principles we have been describing so far, especially the action potential, apply to neurons with lengthy axons. Not all neurons fall into that category.
Graded Potentials
Many neurons have only short axons, if any. They exchange information only with their closest neighbors and are therefore known as local neurons. A local neuron does not produce an action potential. It receives information from other neurons in its immediate vicinity and produces graded potentials, membrane potentials that vary in magnitude and do not follow the all-or-none law. When a local neuron is stimulated, it depolarizes or hyperpolarizes in proportion to the intensity of the stimulus. The change in membrane potential is conducted to adjacent areas of the cell, in all directions, gradually decaying as it travels. Those various areas of the cell make direct contact onto other neurons without going through an axon. In Chapter 6, we discuss in some detail a particular local neuron, the horizontal cell, which is essential for local interactions within the retina of the eye. In some ways, astrocytes, although they are glia, operate like local neurons (Volterra & Meldolesi, 2005). They have no action potentials, but they rapidly exchange chemicals back and forth with neighboring neurons.

EXTENSIONS AND APPLICATIONS
Small Neurons and Big Misconceptions
Local neurons are somewhat difficult to study; it is almost impossible to insert an electrode into a tiny cell without damaging it. A disproportionate amount of our knowledge, therefore, has come from large neurons, and that bias in our research methods may have led to an enduring misconception. Many years ago, long before neuroscientists could investigate local neurons, all they knew about them was that they were small. Given that nearly all knowledge about the nervous system was based on the activities of large neurons, the small neurons seemed unimportant. Many scientists assumed that they were “baby” or immature neurons. As one textbook author put it, “Many of these [neurons] are small and apparently undeveloped, as if they constituted a reserve stock not yet utilized in the individual’s cerebral activity” (Woodworth, 1934, p. 194). In other words, the small cells would contribute to behavior only if they grew. Perhaps this misunderstanding was the origin of that widespread, nonsensical belief that “we use only 10% of our brain.” It is difficult to imagine any reasonable justification for this belief. Surely, no one maintained that anyone could lose 90% of the brain and still behave normally or that only 10% of neurons are active at any given moment. Whatever its source, the belief became popular, presumably because people wanted to believe it. Eventually, they were simply quoting one another long after everyone forgot what evidence they had (or didn’t have) for it in the first place.
Module 2.2
In Closing: Neural Messages
We have examined what happens within a single neuron, as if each neuron acted independently. It does not, of course; all of its functions depend on communication with other neurons, as we consider in the next chapter. We may as well admit from the start, however, that neural communication is amazing. Unlike human communication, in which a speaker sometimes presents a complicated message to an enormous audience, a neuron delivers only an action potential—a mere on/off message—to only that modest number of other neurons that receive branches of its axon. At various receiving neurons, an “on” message can be converted into either excitation or inhibition (yes or no). From this limited system, all of our behavior and experience emerge.
Summary
1. The inside of a resting neuron has a negative charge with respect to the outside. Sodium ions are actively pumped out of the neuron, and potassium ions are pumped in. Potassium ions flow slowly across the membrane of the neuron, but sodium ions hardly cross it at all while the membrane is at rest. (p. 39)
2. When the charge across the membrane is reduced, sodium ions can flow more freely across the membrane. When the membrane potential reaches the threshold of the neuron, sodium ions enter explosively, suddenly reducing and reversing the charge across the membrane. This event is known as the action potential. (p. 42)
3. The magnitude of the action potential is independent of the size of the stimulus that initiated it; this statement is the all-or-none law. (p. 44)
4. Immediately after an action potential, the membrane enters a refractory period during which it is resistant to starting another action potential. (p. 44)
5. The action potential is regenerated at successive points along the axon by sodium ions flowing through the core of the axon and then across the membrane. The action potential maintains a constant magnitude as it passes along the axon. (p. 45)
6. In axons that are covered with myelin, action potentials form only in the nodes that separate myelinated segments. Transmission in myelinated axons is much faster than in unmyelinated axons. (p. 46)
7. Many small local neurons transmit messages over relatively short distances by graded potentials, which decay over time and space, instead of by action potentials. (p. 47)

Answers to STOP & CHECK Questions
1. Sodium ions are more concentrated outside the cell; potassium is more concentrated inside. (p. 42)
2. When the membrane is at rest, the concentration gradient tends to drive potassium ions out of the cell; the electrical gradient draws them into the cell. The sodium-potassium pump also draws them into the cell. (p. 42)
3. A hyperpolarization is an exaggeration of the usual negative charge within a cell (to a more negative level than usual). A depolarization is a decrease in the amount of negative charge within the cell. (p. 42)
4. A depolarization that passes the threshold produces an action potential. One that falls short of the threshold does not produce an action potential. (p. 42)
5. During the action potential, sodium ions move into the cell. The voltage-dependent sodium gates have opened, so sodium can move freely. Sodium is attracted to the inside of the cell by both an electrical and a concentration gradient. (p. 44)
6. After the peak of the action potential, potassium ions exit the cell, driving the membrane back to the resting potential. (The sodium-potassium pump is not the answer here; it is too slow.) (p. 44)
7. According to the all-or-none law, the size and shape of the action potential are independent of the intensity of the stimulus that initiated it. That is, every depolarization beyond the threshold of excitation produces an action potential of about the same amplitude and velocity for a given axon. (p. 45)
8. The all-or-none law does not apply to dendrites because they do not have action potentials. (p. 45)
9. Axon A must have a shorter absolute refractory period, about 1 ms, whereas B has a longer absolute refractory period, about 5 ms. (p. 45)
10. During the absolute refractory period, the sodium gates are locked and no amount of stimulation can produce another action potential. During the relative refractory period, a greater than usual stimulation is needed to produce an action potential. (p. 45)
11. If the nodes were closer, the action potential would travel more slowly. If they were much farther apart, the current might not be able to diffuse from one node to the next and still remain above threshold, so the action potentials might stop. (p. 47)

Thought Questions
1. Suppose that the threshold of a neuron were the same as its resting potential. What would happen? At what frequency would the cell produce action potentials?
2. In the laboratory, researchers can apply an electrical stimulus at any point along the axon, making action potentials travel in both directions from the point of stimulation. An action potential moving in the usual direction, away from the axon hillock, is said to be traveling in the orthodromic direction. An action potential traveling toward the axon hillock is traveling in the antidromic direction. If we started an orthodromic action potential at the axon hillock and an antidromic action potential at the opposite end of the axon, what would happen when they met at the center? Why? What research might make use of antidromic impulses?
3. If a drug partly blocks a membrane’s potassium channels, how does it affect the action potential?
Chapter Ending
Key Terms and Activities

Terms
absolute refractory period (p. 44); action potential (p. 42); active transport (p. 37); afferent axon (p. 33); all-or-none law (p. 44); astrocyte (p. 35); axon (p. 33); axon hillock (p. 45); blood-brain barrier (p. 36); cell body, or soma (p. 33); concentration gradient (p. 41); dendrite (p. 32); dendritic spine (p. 32); depolarization (p. 42); efferent axon (p. 33); electrical gradient (p. 39); endoplasmic reticulum (p. 32); glia (p. 35); glucose (p. 37); graded potential (p. 47); hyperpolarization (p. 42); interneuron (p. 34); intrinsic neuron (p. 34); local anesthetic (p. 44); local neuron (p. 47); membrane (p. 31); microglia (p. 35); mitochondrion (pl.: mitochondria) (p. 31); motor neuron (p. 32); myelin (p. 46); myelin sheath (p. 33); myelinated axon (p. 46); neuron (p. 30); node of Ranvier (p. 33); nucleus (p. 31); oligodendrocyte (p. 35); polarization (p. 39); presynaptic terminal (p. 33); propagation of the action potential (p. 45); radial glia (p. 35); refractory period (p. 44); relative refractory period (p. 44); resting potential (p. 39); ribosome (p. 32); saltatory conduction (p. 46); Schwann cell (p. 35); selective permeability (p. 40); sensory neuron (p. 32); sodium-potassium pump (p. 41); thiamine (vitamin B1) (p. 37); threshold of excitation (p. 42); voltage-activated channel (p. 43)
Suggestion for Further Reading
Smith, C. U. M. (2002). Elements of molecular neurobiology (3rd ed.). Hoboken, NJ: Wiley. A detailed treatment of the molecular biology of neurons.
Websites to Explore
You can go to the Biological Psychology Study Center and click this link. While there, you can also check for suggested articles available on InfoTrac College Edition. The Biological Psychology Internet address is: http://psychology.wadsworth.com/book/kalatbiopsych9e/
MetaNeuron Program
Here you can vary temperatures, ion concentrations, membrane permeability, and so forth to see the effects on action potentials. http://www2.neuroscience.umn.edu/eanwebsite/metaneuron.htm
Exploring Biological Psychology CD
The Parts of a Neuron (animation) Virtual Reality Neuron (virtual reality) Neuron Puzzle (drag & drop) Resting Potential (animation) Action Potential (animation) Action Potential: Na+ Ions (animation) Neuron Membrane at Rest (animation) Propagation of the Action Potential (animation) Critical Thinking (essay questions) Chapter Quiz (multiple-choice questions)
http://www.thomsonedu.com
Go to this site for the link to ThomsonNOW, your one-stop study shop. Take a Pre-Test for this chapter, and ThomsonNOW will generate a Personalized Study Plan based on your test results. The Study Plan will identify the topics you need to review and direct you to online resources to help you master these topics. You can then take a Post-Test to help you determine the concepts you have mastered and what you still need work on.
3
Synapses
Chapter Outline
Module 3.1 The Concept of the Synapse
The Properties of Synapses
Relationship Among EPSP, IPSP, and Action Potential
In Closing: The Neuron as Decision Maker
Summary
Answers to Stop & Check Questions
Thought Questions
Module 3.2 Chemical Events at the Synapse
The Discovery of Chemical Transmission at Synapses
The Sequence of Chemical Events at a Synapse
In Closing: Neurotransmitters and Behavior
Summary
Answers to Stop & Check Questions
Thought Questions
Module 3.3 Drugs and Synapses
Drug Mechanisms
Common Drugs and Their Synaptic Effects
In Closing: Drugs and Behavior
Summary
Answers to Stop & Check Questions
Thought Question
Terms
Suggestions for Further Reading
Websites to Explore
Exploring Biological Psychology CD
ThomsonNOW

Main Ideas
1. At a synapse, a neuron releases a chemical known as a neurotransmitter that excites or inhibits another cell or alters its response to additional input.
2. A single release of neurotransmitter produces only a subthreshold response in the receiving cell. This response summates with other subthreshold responses to determine whether or not the cell produces an action potential.
3. Because different neurotransmitters contribute to behavior in different ways, excessive or deficient transmission at a particular type of synapse can lead to abnormal behavior.
4. Most drugs that affect behavior or experience do so by acting at synapses.
5. Nearly all abused drugs increase the release of dopamine in certain brain areas.
If you had to communicate with someone without using sound, what would you do? Chances are, your first choice would be a visual code, such as written words or sign language. Your second choice would probably be some sort of touch code or a system of electrical impulses. You might not even think of passing chemicals back and forth. Chemical communication is, however, the primary method of communication for your neurons. Considering how well the human nervous system works, chemical communication is evidently more versatile than we might have guessed. Neurons communicate by transmitting chemicals at specialized junctions called synapses, which are central to all information processing in the brain.
Opposite: This electron micrograph, with color added artificially, shows the synapses formed by axons onto another neuron. Source: © Eye of Science/Photo Researchers, Inc.
Module 3.1
The Concept of the Synapse
In the late 1800s, Ramón y Cajal anatomically demonstrated a narrow gap separating one neuron from the next. In 1906, Charles Scott Sherrington physiologically demonstrated that communication between one neuron and the next differs from communication along a single axon. (See photo and quote on the pages inside the back cover.) He inferred a specialized gap between neurons and introduced the term synapse to describe it. Cajal and Sherrington are regarded as the great pioneers of modern neuroscience, and their nearly simultaneous discoveries supported each other: If communication between one neuron and another was special in some way, then no doubt could remain that neurons were anatomically separate from one another. Sherrington’s discovery was an amazing feat of scientific reasoning, as he used behavioral observa-
tions to infer the major properties of synapses about half a century before researchers had the technology to measure those properties directly.

Figure 3.1 A reflex arc for leg flexion. The anatomy has been simplified to show the relationship among sensory neuron, intrinsic neuron, and motor neuron.
The Properties of Synapses
Sherrington conducted his research on reflexes, automatic muscular responses to stimuli. In a leg flexion reflex, a sensory neuron excites a second neuron, which in turn excites a motor neuron, which excites a muscle, as in Figure 3.1. The circuit from sensory neuron to muscle response is called a reflex arc. If one neuron is separate from another, as Cajal had demonstrated, a reflex must require communication between neurons, and therefore, measurements of reflexes might reveal some of the special properties of that communication.

In a typical experiment, Sherrington strapped a dog into a harness above the ground and pinched one of the dog’s feet. After a short delay—less than a second but long enough to measure—the dog flexed (raised) the pinched leg and extended the others. Sherrington found the same reflexive movements after he made a cut that disconnected the spinal cord from the brain; evidently, the spinal cord controlled the flexion and extension reflexes. In fact, the movements were more consistent after he separated the spinal cord from the brain. (In an intact animal, messages descending from the brain inhibit or modify the reflexes.)

Sherrington observed several properties of reflexes suggesting special processes at the junctions between neurons: (a) Reflexes are slower than conduction along an axon. (b) Several weak stimuli presented at slightly different times or slightly different locations produce a stronger reflex than a single stimulus does. (c) When one set of muscles becomes excited, a different set be-
comes relaxed. Let us consider each of these points and their implications.
Speed of a Reflex and Delayed Transmission at the Synapse
When Sherrington pinched a dog’s foot, the dog flexed that leg after a very short but measurable delay. During that delay, an impulse had to travel up an axon from the skin receptor to the spinal cord, and then an impulse had to travel from the spinal cord back down the leg to a muscle. Sherrington measured the total distance that the impulse traveled from skin receptor to spinal cord to muscle and calculated the speed at which the impulse must have traveled to produce the response within the measured delay. He found that the speed of conduction through the reflex arc varied but was never more than about 15 meters per second (m/s). In contrast, previous research had measured action potential velocities along sensory or motor nerves at about 40 m/s. Even if the measurements were not exactly accurate, Sherrington concluded that some process was slowing conduction through the reflex, and he inferred that the delay must occur where one neuron communicates with another (Figure 3.2). This idea is critical, as it established the existence of synapses. Sherrington, in fact, introduced the term synapse.
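Sherrington’s inference amounts to simple arithmetic: if the axons themselves conduct at about 40 m/s but the whole reflex arc behaves as if conduction were only about 15 m/s, the missing time must be spent somewhere other than the axons, namely at the junctions between neurons. The two velocities below come from the text; the path length is an assumed illustrative value, so the exact delay it yields is illustrative too.

```python
# Extra (synaptic) delay implied by Sherrington's reflex measurements.
PATH_M = 0.6                 # assumed total distance: skin -> spinal cord -> muscle
AXON_VELOCITY = 40.0         # m/s, measured along sensory and motor nerves
APPARENT_VELOCITY = 15.0     # m/s, implied by the timing of the whole reflex arc

time_in_axons_s = PATH_M / AXON_VELOCITY       # 0.015 s
total_time_s = PATH_M / APPARENT_VELOCITY      # 0.040 s
extra_ms = (total_time_s - time_in_axons_s) * 1000.0
print(f"delay unexplained by axonal conduction: {extra_ms:.0f} ms")   # ~25 ms for this path
```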
Figure 3.2 Sherrington’s evidence for synaptic delay. An impulse traveling through a synapse in the spinal cord is slower than one traveling a similar distance along an uninterrupted axon. (The speed of conduction along an axon is about 40 m/s; the speed of conduction through a reflex arc is slower and more variable, sometimes 15 m/s or less. Presumably, the delay occurs at the synapse.)

Temporal Summation
Sherrington’s work with reflex arcs suggested that repeated stimuli within a brief time have a cumulative effect. He referred to this phenomenon as temporal summation. A light pinch of the dog’s foot did not evoke a reflex, but when Sherrington rapidly repeated the pinch several times, the leg flexed. Sherrington surmised that a single pinch produced a synaptic trans-
mission too weak to reach the threshold for an action potential in the postsynaptic neuron, the cell that receives the message. (The neuron that delivers the synaptic transmission is the presynaptic neuron.) Sherrington proposed that this subthreshold excitation begins to decay shortly after it starts but can combine with a second excitation that quickly follows it. A rapid succession of pinches produces a series of weak activations at the synapse, each adding its effect to what was left of the previous ones. If enough excitations occur rapidly enough, they combine to exceed the threshold of the postsynaptic neuron. Decades after Sherrington, John Eccles (1964) inserted microelectrodes into neurons to measure changes in the electrical potential across the membrane. He attached stimulating electrodes to axons of presynaptic neurons while recording from the postsynaptic neuron. For example, after he had briefly stimulated an axon, Eccles recorded a slight depolarization of the membrane of the postsynaptic cell (point 1 in Figure 3.3). Note that this partial depolarization is a graded potential. Unlike action potentials, which are always depolarizations, graded potentials may be either depolarizations (excitatory) or hyperpolarizations (inhibitory). A graded depolarization is known as an excitatory postsynaptic potential (EPSP). Like the action potentials discussed in Chapter 2, an EPSP occurs when sodium ions enter the cell. However, in most cases, transmission at a single synapse does not open enough sodium gates to reach the threshold. Unlike an action potential, an EPSP decays over time and space; that is, its magnitude fades rapidly. When Eccles stimulated an axon twice in close succession, he recorded two consecutive EPSPs in the postsynaptic cell. If the delay between EPSPs was short enough, temporal summation occurred; that is, the second EPSP added to what was left of the first one (point 2 in Figure 3.3). The summation of two EPSPs might or might not exceed the threshold of the postsynaptic cell depending on the size of the EPSPs, the time between them, and the threshold of the postsynaptic cell. At point 3 in Figure 3.3, three consecutive EPSPs combine to exceed the threshold and produce an action potential.
Spatial Summation Sherrington’s work with reflex arcs also suggested that synapses have the property of spatial summation: Several synaptic inputs originating from separate locations combine their effects on a neuron. Sherrington again began with a pinch too weak to elicit a reflex. This time, instead of pinching one point twice, he pinched two points at the same time. Although neither pinch alone elicited a response, the two together did. Sherrington concluded that pinching two points on the foot acti-
activated two sensory neurons, whose axons converged onto one neuron in the spinal cord. Excitation from either axon excited that neuron but did not reach the threshold. Two excitations exceeded the threshold and elicited an action potential (point 4 in Figure 3.3). Again, Eccles confirmed Sherrington's inference, demonstrating that several axons were capable of producing EPSPs that summate their effects on a postsynaptic cell. Note that temporal and spatial summation
both increase the depolarization of the postsynaptic cell and therefore increase the probability of an action potential (Figure 3.4). You might guess that the synapses closer to the cell body of the postsynaptic cell might have a bigger effect than synapses on more distant parts of the dendrites because the distant inputs might decay in strength as they travel toward the cell body. Surprisingly, however, the synapses on remoter parts of the dendrites produce larger EPSPs, so their contribution to the cell's response approximately equals that of closer synapses (Magee & Cook, 2000).
Figure 3.4 Temporal and spatial summation Temporal summation is the effect of several impulses from one neuron over time; spatial summation is the effect of impulses from several neurons arriving at the same time.
Inhibitory Synapses
When Sherrington vigorously pinched a dog’s foot, the flexor muscles of that leg contracted and so did the extensor muscles of the other three legs (Figure 3.5). At the same time, the dog relaxed the extensor muscles of the stimulated leg and the flexor muscles of the other legs. Sherrington explained these results by assuming certain connections in the spinal cord: A pinch on the foot sends a message along a sensory neuron to an interneuron (an intermediate neuron) in the spinal cord, which in turn excites the motor
neurons connected to the flexor muscles of that leg (Figure 3.6). Sherrington surmised that the interneuron also sends a message to block activity of motor neurons connected to the extensor muscles in the same leg, as well as the flexor muscles of the three other legs.
Figure 3.5 Antagonistic muscles Flexor muscles draw an extremity toward the trunk of the body, whereas extensor muscles move an extremity away from the body.
Figure 3.6 Sherrington's inference of inhibitory synapses When a flexor muscle is excited, the probability of excitation decreases in the paired extensor muscle. Sherrington inferred that the interneuron that excited a motor neuron to the flexor muscle also inhibited a motor neuron connected to the extensor muscle.
Eccles and later researchers physiologically demonstrated the inhibitory synapses that Sherrington had inferred. At these synapses, input from the axon hyperpolarizes the postsynaptic cell. That is, it increases the negative charge within the cell, moving it further from the threshold, and thus decreasing the probability of an action potential (point 5 in Figure 3.3). This temporary hyperpolarization of a membrane—called an inhibitory postsynaptic potential, or IPSP—resembles an EPSP in many ways. An IPSP occurs when synaptic input selectively opens the gates for potassium ions to leave the cell (carrying a positive charge with them) or for chloride ions to enter the cell (carrying a negative charge). Inhibition is more than just the absence of excitation; it is an active "brake" that suppresses excitation. Today, we take the concept of inhibition for granted, but at Sherrington's time, the idea was controversial, as no one could imagine a mechanism to accomplish it. Establishing the idea of inhibition was critical not just for neuroscience but for psychology as well. For example, Sigmund Freud, who developed his theories in the same era as Sherrington, proposed that when a sexual "energy" was blocked from its desired outlet, it would spill over into some other activity. Today, we would simply talk about inhibiting an unwelcome activity; we see no need for the associated energy to flow somewhere else.
STOP & CHECK
1. What evidence led Sherrington to conclude that transmission at a synapse is different from transmission along an axon?
2. What is the difference between temporal summation and spatial summation?
3. What was Sherrington's evidence for inhibition in the nervous system?
4. What ion gates in the membrane open during an EPSP? What gates open during an IPSP?
Check your answers on pages 56–57.
The neuron’s response to synaptic input can be compared to a thermostat, a smoke detector, or any other device that detects something and triggers a response: When input reaches a certain level, the neuron triggers an action potential. That is, synapses enable the postsynaptic neuron to integrate information. The EPSPs and IPSPs reaching a neuron at a given moment compete against one another, and the net result is a complicated, not exactly algebraic summation of the two effects. We could regard the summation of EPSPs and IPSPs as a “decision” because it determines whether or not the postsynaptic cell fires an action potential. However, do not imagine that any single neuron decides what to eat for breakfast. Complex behaviors depend on the contributions from a huge network of neurons.
Relationship Among EPSP, IPSP, and Action Potential A given neuron may have anywhere from a few synapses on its surface to thousands, some of them excitatory and others inhibitory. Any number and combination of synapses may be active at any time, yielding both temporal and spatial summation. The probability of an action potential depends on the ratio of EPSPs to IPSPs at a given moment. Most neurons have a spontaneous firing rate, a periodic production of action potentials even without synaptic input. In such neurons, the EPSPs increase the frequency of action potentials above the spontaneous rate, whereas IPSPs decrease it below that rate. For example, if the neuron’s spontaneous firing rate is 10 action potentials per second, a stream of EPSPs might increase the rate to 15 or more, whereas a preponderance of IPSPs might decrease it to 5 or fewer.
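The numerical example above can be restated as a tiny calculation. Treating the firing rate as the spontaneous rate plus a fixed gain times the excess of EPSPs over IPSPs is an assumption made only for illustration; it is not the actual transfer function of any neuron.

```python
# Toy model: spontaneous firing rate shifted by the balance of EPSPs and IPSPs.
# The linear rule and the gain value are illustrative assumptions.
def firing_rate(spontaneous_rate, epsps_per_s, ipsps_per_s, gain=0.5):
    rate = spontaneous_rate + gain * (epsps_per_s - ipsps_per_s)
    return max(rate, 0.0)  # a firing rate cannot go below zero

print(firing_rate(10, epsps_per_s=0, ipsps_per_s=0))   # 10.0 spikes/s: spontaneous rate
print(firing_rate(10, epsps_per_s=12, ipsps_per_s=2))  # 15.0 spikes/s: EPSPs predominate
print(firing_rate(10, epsps_per_s=2, ipsps_per_s=12))  # 5.0 spikes/s: IPSPs predominate
```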
Module 3.1 In Closing: The Neuron as Decision Maker When we learn the basics of any scientific field, we sometimes take them for granted, as if people always knew them. For example, we are taught that Earth and the other planets rotate around the sun, and we don’t always pause to marvel at how Copernicus, hundreds of years ago, drew that conclusion from some observations recorded before the invention of telescopes. Sherrington’s accomplishment is also amazing. Just imagine measuring reflexive leg movements and inferring the existence of synapses and their major properties long before the invention of oscilloscopes or electron microscopes.
Summary
1. The synapse is the point of communication between two neurons. Charles S. Sherrington's observations of reflexes enabled him to infer the properties of synapses. (p. 52)
2. Because transmission through a reflex arc is slower than transmission through an equivalent length of axon, Sherrington concluded that some process at the synapses delays transmission. (p. 53)
3. Graded potentials (EPSPs and IPSPs) summate their effects. The summation of graded potentials from stimuli at different times is temporal summation. The summation of graded potentials from different locations is spatial summation. (p. 53)
4. A single stimulation at a synapse produces a brief graded potential in the postsynaptic cell. An excitatory graded potential (depolarizing) is an EPSP. An inhibitory graded potential (hyperpolarizing) is an IPSP. (pp. 53, 55)
5. An EPSP occurs when gates open to allow sodium to enter the neuron's membrane; an IPSP occurs when gates open to allow potassium to leave or chloride to enter. (pp. 53, 55)
6. The EPSPs on a neuron compete with the IPSPs; the balance between the two increases or decreases the neuron's frequency of action potentials. (p. 56)
Answers to STOP & CHECK Questions
1. Sherrington found that the velocity of conduction through a reflex arc was significantly slower than the velocity of an action potential along an axon. Therefore, some delay must occur at the junction between one neuron and the next. (p. 56)
2. Temporal summation is the combined effect of quickly repeated stimulation at a single synapse. Spatial summation is the combined effect of several nearly simultaneous stimulations at several synapses onto one neuron. (p. 56)
3. Sherrington found that a reflex that stimulates a flexor muscle sends a simultaneous message that inhibits nerves to the extensor muscles of the same limb. (p. 56)
4. During an EPSP, sodium gates open. During an IPSP, potassium or chloride gates open. (p. 56)
Thought Questions
1. When Sherrington measured the reaction time of a reflex (i.e., the delay between stimulus and response), he found that the response occurred faster after a strong stimulus than after a weak one. Can you explain this finding? Remember that all action potentials—whether produced by strong or weak stimuli—travel at the same speed along a given axon.
2. A pinch on an animal's right hind foot excites a sensory neuron that excites an interneuron that excites the motor neurons to the flexor muscles of that leg. The interneuron also inhibits the motor neurons connected to the extensor muscles of the leg. In addition, this interneuron sends impulses that reach the motor neuron connected to the extensor muscles of the left hind leg. Would you expect the interneuron to excite or inhibit that motor neuron? (Hint: The connections are adaptive. When an animal lifts one leg, it must put additional weight on the other legs to maintain balance.)
3. Suppose neuron X has a synapse onto neuron Y, which has a synapse onto Z. Presume that no other neurons or synapses are present. An experimenter finds that stimulating neuron X causes an action potential in neuron Z after a short delay. However, she determines that the synapse of X onto Y is inhibitory. Explain how the stimulation of X might produce excitation of Z.
Module 3.2
Chemical Events at the Synapse
Although Charles Sherrington accurately inferred many properties of the synapse, he was wrong in one important point: Although he knew that synaptic transmission was slower than transmission along an axon, he thought it was still too fast to depend on a chemical process and therefore concluded that it must be electrical. We now know that the great majority of synapses rely on chemical processes, which are much faster and more versatile than Sherrington or anyone else of his era would have guessed.
The Discovery of Chemical Transmission at Synapses A set of nerves called the sympathetic nervous system accelerates the heartbeat, relaxes the stomach muscles, dilates the pupils of the eyes, and regulates many other organs. T. R. Elliott, a young British scientist, reported in 1905 that applying the hormone adrenaline directly to the surface of the heart, the stomach, and the pupils produces the same effects as those of the sympathetic nervous system. Elliott therefore suggested that the sympathetic nerves stimulate muscles by releasing adrenaline or a similar chemical. However, Elliott’s evidence was not decisive; perhaps adrenaline merely mimicked effects that are ordinarily electrical in nature. At the time, Sherrington’s prestige was so great that most scientists ignored Elliott’s results and continued to assume that synapses transmitted electrical impulses. Otto Loewi, a German physiologist, liked the idea of chemical synapses but did not see how to demonstrate it more decisively. Then in 1920, he awakened one night with a sudden idea. He wrote himself a note and went back to sleep. Unfortunately, the next morning he could not read his note. The following night he awoke at 3 A.M. with the same idea, rushed to the laboratory, and performed the experiment. Loewi repeatedly stimulated the vagus nerve, thereby decreasing the frog’s heart rate. He then collected fluid from that heart, transferred it to a second frog’s heart, and found that the second heart also decreased its rate of beating. (Figure 3.7 illustrates this experiment.) In a later experiment, Loewi stimulated
the accelerator nerve to the first frog’s heart, increasing the heart rate. When he collected fluid from that heart and transferred it to the second heart, its heart rate increased. That is, stimulating one nerve released something that inhibited heart rate, and stimulating a different nerve released something that increased heart rate. He knew he was collecting and transferring chemicals, not loose electricity. Therefore, Loewi concluded, nerves send messages by releasing chemicals. Loewi later remarked that if he had thought of this experiment in the light of day, he probably would not have tried it (Loewi, 1960). Even if synapses did release chemicals, his daytime reasoning went, they probably did not release much. Fortunately, by the time he realized that the experiment was unlikely to work, he had already completed it, for which he later won the Nobel Prize. Despite Loewi’s work, most researchers over the next three decades continued to believe that most synapses were electrical and that chemical transmission
was the exception.
Figure 3.7 Loewi's experiment demonstrating that nerves send messages by releasing chemicals Loewi stimulated the vagus nerve to one frog's heart, decreasing the heartbeat. When he transferred fluid from that heart to another frog's heart, he observed a decrease in its heartbeat.
Finally, in the 1950s, researchers established that chemical transmission is the predominant type of communication throughout the nervous system. That discovery revolutionized our understanding and led to research developing new drugs for psychiatric uses (Carlsson, 2001).
The Sequence of Chemical Events at a Synapse
Understanding the chemical events at a synapse is fundamental to biological psychology. Here are the major events at a synapse:
1. The neuron synthesizes chemicals that serve as neurotransmitters. It synthesizes the smaller neurotransmitters in the axon terminals and the larger ones (peptides) in the cell body.
2. The neuron transports the peptide neurotransmitters to the axon terminals. (It doesn't have to transport smaller neurotransmitters because they are formed in the terminals.)
3. Action potentials travel down the axon. At the presynaptic terminal, an action potential enables calcium to enter the cell. Calcium releases neurotransmitters from the terminals and into the synaptic cleft, the space between the presynaptic and postsynaptic neurons.
4. The released molecules diffuse across the cleft, attach to receptors, and alter the activity of the postsynaptic neuron.
5. The neurotransmitter molecules separate from their receptors. Depending on the neurotransmitter, it may be converted into inactive chemicals.
6. The neurotransmitter molecules may be taken back into the presynaptic neuron for recycling or may diffuse away. In some cases, empty vesicles are returned to the cell body.
7. Although the research is not yet conclusive, it is likely that some postsynaptic cells send negative feedback messages to slow further release of neurotransmitter by the presynaptic cell.
Figure 3.8 Some major events in transmission at a synapse
Figure 3.8 summarizes these steps. Let’s now consider each step in more detail.
Types of Neurotransmitters At a synapse, one neuron releases chemicals that affect a second neuron. Those chemicals are known as neurotransmitters. Research has gradually identified more and more chemicals believed or suspected to be neurotransmitters; the current count is a hundred or more (Borodinsky et al., 2004). We shall consider many
of them throughout this text; for now, you should familiarize yourself with some of their names (Table 3.1). Some major categories are:
amino acids: acids containing an amine group (NH2)
peptides: chains of amino acids (A long chain of amino acids is called a polypeptide; a still longer chain is a protein. The distinctions among peptide, polypeptide, and protein are not firm.)
acetylcholine (a one-member "family"): a chemical similar to an amino acid, except that the NH2 group has been replaced by an N(CH3)3 group
monoamines: neurotransmitters containing one amine group (NH2), formed by a metabolic change in certain amino acids
purines: a category of chemicals including adenosine and several of its derivatives
gases: nitric oxide and possibly others
Note that most of the neurotransmitters are amino acids, derivatives of amino acids, or chains of amino acids. The most surprising exception is nitric oxide (chemical formula NO), a gas released by many small local neurons. (Do not confuse nitric oxide, NO, with nitrous oxide, N2O, sometimes known as "laughing gas.") Nitric oxide is poisonous in large quantities and difficult to make in a laboratory. Yet many neurons contain an enzyme that enables them to make it with relatively little energy. One special function of nitric oxide relates to blood flow: When a brain area becomes highly active, blood flow to that area increases. How does the blood "know" that a brain area has become more active? The message comes from nitric oxide. Many neurons release nitric oxide when they are stimulated. In addition to influencing other neurons, the nitric oxide dilates the nearby blood vessels, thereby increasing blood flow to that area of the brain (Dawson, Gonzalez-Zulueta, Kusel, & Dawson, 1998).
Table 3.1 Neurotransmitters
Amino Acids: glutamate, GABA, glycine, aspartate, maybe others
A Modified Amino Acid: acetylcholine
Monoamines (also modified from amino acids): indoleamines (serotonin); catecholamines (dopamine, norepinephrine, epinephrine)
Peptides (chains of amino acids): endorphins, substance P, neuropeptide Y, many others
Purines: ATP, adenosine, maybe others
Gases: NO (nitric oxide), maybe others
Synthesis of Transmitters
Like any other cell in the body, a neuron synthesizes the chemicals it needs from substances provided by the diet. Figure 3.9 illustrates the chemical steps in the synthesis of acetylcholine, serotonin, dopamine, epinephrine, and norepinephrine. Note the relationship among epinephrine, norepinephrine, and dopamine, three closely related compounds known as catecholamines because they contain a catechol group and an amine group (NH2).
Each pathway in Figure 3.9 begins with substances found in the diet. Acetylcholine, for example, is synthesized from choline, which is abundant in milk, eggs, and peanuts. The amino acids phenylalanine and tyrosine, present in virtually any protein, are precursors of dopamine, norepinephrine, and epinephrine. The amino acid tryptophan, the precursor to serotonin, crosses the blood-brain barrier by a special transport system that it shares with other large amino acids. The amount of tryptophan in the diet controls the amount of serotonin in the brain (Fadda, 2000), so serotonin levels are higher after someone eats foods richer in tryptophan, such as soy, than after something with less tryptophan, such as maize (American corn). However, another factor limiting tryptophan entry to the brain is that it has to compete with other large amino acids, such as phenylalanine, that are more abundant than tryptophan in most proteins. One way to let more tryptophan enter the brain is to eat carbohydrates. Carbohydrates increase release of the hormone insulin, which takes several of the competing amino acids out of the bloodstream and into body cells, thus decreasing the competition against tryptophan (Wurtman, 1985).
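One way to see why insulin indirectly favors tryptophan entry is to treat the shared transporter as dividing access among the large amino acids in proportion to their blood concentrations. The sketch below uses that simplification with made-up concentration values; it is not a physiological model.

```python
# Simplified competition model: tryptophan's access to the shared blood-brain
# transporter is proportional to its share of all competing large amino acids.
# Concentrations are arbitrary illustrative units, not physiological data.
def tryptophan_share(tryptophan, competitors):
    return tryptophan / (tryptophan + sum(competitors))

before_insulin = tryptophan_share(1.0, [4.0, 3.0, 2.0])  # competitors such as phenylalanine abundant
after_insulin = tryptophan_share(1.0, [2.0, 1.5, 1.0])   # insulin moves competitors into body cells

print(f"Tryptophan's share before insulin: {before_insulin:.2f}")  # 0.10
print(f"Tryptophan's share after insulin:  {after_insulin:.2f}")   # 0.18
```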
STOP & CHECK
1. What was Loewi's evidence that neurotransmission depends on the release of chemicals?
2. Name the three catecholamine neurotransmitters.
3. What does a highly active brain area do to increase its blood supply?
Check your answers on page 68.
Figure 3.9 Pathways in the synthesis of acetylcholine, dopamine, norepinephrine, epinephrine, and serotonin Arrows represent chemical reactions. (Acetyl coenzyme A, from metabolism, combines with choline, from metabolism or the diet, to form acetylcholine. Phenylalanine, from the diet, is converted to tyrosine, then dopa, then dopamine, then norepinephrine, and then epinephrine. Tryptophan, from the diet, is converted to 5-hydroxytryptophan and then to serotonin, also called 5-hydroxytryptamine.)
Transport of Transmitters Many of the smaller neurotransmitters, such as acetylcholine, are synthesized in the presynaptic terminal, near their point of release. However, the larger neurotransmitters, including peptides, are synthesized in the cell body and then transported down the axon to the terminal. The speed of transport varies from only 1 millimeter (mm) per day in thin axons to more than 100 mm per day in thicker ones. Even at the highest speeds, transport from cell body to terminal may take hours or days in a long axon. Consequently, after releasing peptides, neurons replenish their supply slowly. Furthermore, neurons reabsorb and recycle many other transmitters but not the peptides. For these reasons, a neuron can exhaust its supply of a peptide faster than that of other transmitters.
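The quoted transport speeds imply strikingly long resupply times for peptides, as a quick calculation shows. The 1-meter axon length below is an assumed example standing in for a long human axon; the speeds come from the paragraph above.

```python
# How long would axonal transport take to restock peptides at the axon terminal?
axon_length_mm = 1000.0  # assumed ~1-meter axon, e.g., one running the length of a limb

for speed_mm_per_day in (1.0, 100.0):  # thin-axon vs. thick-axon transport speeds from the text
    days = axon_length_mm / speed_mm_per_day
    print(f"At {speed_mm_per_day:g} mm/day: {days:g} days to reach the terminal")
# Output: 1000 days at 1 mm/day, 10 days at 100 mm/day
```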
Release and Diffusion of Transmitters The presynaptic terminal stores high concentrations of neurotransmitter molecules in vesicles, tiny nearly spherical packets (Figure 3.10). (Nitric oxide, the gaseous neurotransmitter mentioned earlier, is an exception to this rule. Neurons release nitric oxide as soon as they form it, instead of storing it for future use.) The presynaptic terminal also maintains much neurotransmitter outside the vesicles. When an action potential reaches the end of an axon, the action potential itself does not release the neurotransmitter. Rather, the depolarization opens voltage-dependent calcium gates in the presynaptic terminal. Within 1 or 2 milliseconds (ms) after calcium enters the presynaptic terminal, it causes exocytosis—release of neurotransmitter in bursts from the presynaptic neuron into the synaptic cleft that separates this neuron from the postsynaptic neuron. The result is not always the same; an action potential often fails to release any transmitter, and even when it does, the amount varies (Craig & Boudin, 2001). After its release from the presynaptic cell, the neurotransmitter diffuses across the synaptic cleft to the postsynaptic membrane, where it attaches to a receptor. The neurotransmitter takes no more than 0.01 ms to diffuse across the cleft, which is only 20 to 30 nano-
meters (nm) wide. The total delay in transmission across the synapse, including the time for the presynaptic cell to release the neurotransmitter, is 2 ms or less (A. R. Martin, 1977; Takeuchi, 1977). Remember, Sherrington did not believe chemical processes could be fast enough to account for the activity at synapses. Obviously, he did not imagine such a narrow gap through which chemicals could diffuse so quickly. Although the brain as a whole uses many neurotransmitters, no single neuron releases them all. For many years, investigators believed that each neuron released just one neurotransmitter, but later researchers found that many, perhaps most, neurons release a combination of two or more transmitters (Hökfelt, Johansson, & Goldstein, 1984). Still later researchers found that at least one kind of neuron releases different transmitters from different branches of its axon: Motor neurons in the spinal cord have one branch to the muscles, where they release acetylcholine, and another branch to other spinal cord neurons, where they release both acetylcholine and glutamate (Nishimaru, Restrepo, Ryge, Yanagawa, & Kiehn, 2005). If one kind of neuron can release different transmitters at different branches, maybe others can too. Why does a neuron release a combination of transmitters instead of just one? Presumably, the combination makes the neuron’s message more complex, such as brief excitation followed by slight but prolonged inhibition (Jonas, Bischofberger, & Sandkühler, 1998). Although a neuron releases only a limited number of neurotransmitters, it may receive and respond to many neurotransmitters at different synapses. For example, at various locations on its membrane, it might have receptors for glutamate, serotonin, acetylcholine, and so forth.
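The statement that diffusion across the cleft takes no more than 0.01 ms can be checked against the standard estimate t ≈ x²/(2D). The diffusion coefficient below is an assumed order-of-magnitude value for a small molecule in water, so this is only a rough consistency check, not a measured figure.

```python
# Order-of-magnitude check: time to diffuse across a 30 nm synaptic cleft.
# D is an assumed diffusion coefficient for a small molecule in aqueous solution.
cleft_width_m = 30e-9  # 30 nm, the upper end of the range given in the text
D = 5e-10              # m^2/s, assumed order-of-magnitude diffusion coefficient

t_seconds = cleft_width_m ** 2 / (2 * D)  # mean-squared-displacement estimate
print(f"Estimated diffusion time: {t_seconds * 1000:.5f} ms")  # about 0.0009 ms, well under 0.01 ms
```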
Activation of Receptors of the Postsynaptic Cell In English, a fern is a kind of plant. In German, fern means “far away.” In French, the term is meaningless. The meaning of any word depends on the listener. Similarly, the meaning of a neurotransmitter depends on its receptor. Each of the well-studied neurotransmitters interacts with several different kinds of receptors, with different functions. Therefore, a drug or a genetic mutation that affects one receptor type may affect behavior in a specific way. For example, one type of serotonin receptor mediates nausea, and the drug ondansetron that blocks this receptor helps cancer patients undergo treatment without nausea. A neurotransmitter receptor is a protein embedded in the membrane. When the neurotransmitter attaches to the active site of the receptor, the receptor can directly open a channel—exerting an ionotropic effect—or it can produce slower but longer effects—a metabotropic effect.
Ionotropic Effects Some neurotransmitters exert ionotropic effects on the postsynaptic neuron: When the neurotransmitter binds to a receptor on the membrane, it almost immediately opens the gates for some type of ion. Ionotropic effects begin within one to a few milliseconds (Scannevin & Huganir, 2000), and they last only about 20 ms (North, 1989; Westbrook & Jahr, 1989). Most of the brain's excitatory ionotropic synapses use the neurotransmitter glutamate. Most of the inhibitory ionotropic synapses use the neurotransmitter GABA (gamma-amino-butyric acid, pronounced GA-buh), which opens chloride gates, enabling chloride ions, with their negative charge, to cross the membrane into the cell more rapidly than usual. Glycine is another common inhibitory transmitter (Moss & Smart, 2001). Acetylcholine, also a transmitter at many ionotropic synapses, has mostly excitatory effects, which have been extensively studied. Figure 3.11a shows a cross-section through an acetylcholine receptor as it might be seen from the synaptic cleft. Its outer portion (red) is embedded in the neuron's membrane; its inner portion (blue) surrounds the sodium channel. When at rest (unstimulated), the inner portion of the receptor coils together tightly enough to block sodium passage. When acetylcholine attaches, the receptor folds outward, widening the sodium channel. Figure 3.11b shows a side view of the receptor with acetylcholine attached (Miyazawa, Fujiyoshi, & Unwin, 2003). For a detailed treatment of ionotropic mechanisms, visit this website: http://www.npaci.edu/features/98/Dec/index.html
Metabotropic Effects and Second Messenger Systems At other synapses, neurotransmitters exert metabotropic effects by initiating a sequence of metabolic reactions that are slower and longer lasting than ionotropic effects (Greengard, 2001). Metabotropic effects emerge 30 ms or more after the release of the transmitter (North, 1989) and last seconds, minutes, or even longer. Whereas most ionotropic effects depend on just glutamate and GABA, metabotropic synapses use a much larger variety of transmitters. When the neurotransmitter attaches to a metabotropic receptor, it bends the rest of the protein, enabling a portion of the protein inside the neuron to react with other molecules, as shown in Figure 3.12 (Levitzki, 1988; O'Dowd, Lefkowitz, & Caron, 1989). The portion inside the neuron activates a G-protein—one that is coupled to guanosine triphosphate (GTP), an energy-storing molecule. The activated G-protein in turn increases the concentration of a second messenger, such as cyclic adenosine monophosphate (cyclic AMP), inside the cell. Just as the "first messenger" (the neurotransmitter) carries information to the postsynaptic cell,
the second messenger communicates to areas within the cell. The second messenger may open or close ion channels in the membrane or alter the production of proteins or activate a portion of a chromosome. Note the contrast: An ionotropic synapse has effects localized to one point on the membrane, whereas a metabotropic synapse, by way of its second messenger, influences activity in a larger area of the cell and over a longer time. Ionotropic and metabotropic synapses contribute to different aspects of behavior. For vision and hearing,
the brain needs rapid, quickly changing information, the kind that ionotropic synapses bring. In contrast, hunger, thirst, fear, and anger constitute long-term changes in the probabilities of many behaviors. Metabotropic synapses are better suited for that kind of function. Metabotropic synapses also mediate at least some of the input for taste (Huang et al., 2005) and pain (Levine, Fields, & Basbaum, 1993), which are slower and more enduring experiences than vision or hearing.
Figure 3.12 Sequence of events at a metabotropic synapse, using a second messenger within the postsynaptic neuron: (1) the transmitter molecule attaches to the receptor; (2) the receptor bends, releasing a G-protein; (3) the G-protein activates a "second messenger" such as cyclic AMP, which alters a metabolic pathway, turns on a gene in the nucleus, or opens or closes an ion channel.
Researchers sometimes describe some metabotropic neurotransmitters, mainly the peptide neurotransmitters, as neuromodulators, with the implication that they do not directly excite or inhibit the postsynaptic cell but increase or decrease the release of other transmitters or alter the response of postsynaptic cells to various inputs. The same description applies to metabotropic synapses in general. Peptide transmitters do tend to diffuse widely enough to affect several cells, and sometimes their effects are very enduring, so the term neuromodulator is useful to emphasize these special characteristics.
STOP & CHECK
4. When the action potential reaches the presynaptic terminal, which ion must enter the presynaptic terminal to evoke release of the neurotransmitter?
5. How do ionotropic and metabotropic synapses differ in speed and duration of effects?
6. What are second messengers, and which type of synapse relies on them?
Check your answers on page 68.
Table 3.2 Partial List of Hormone-Releasing Glands
Organ / Hormone / Hormone Functions
Hypothalamus / Various releasing hormones / Promote or inhibit release of various hormones by pituitary
Anterior pituitary / Thyroid-stimulating hormone (TSH) / Stimulates thyroid gland
Anterior pituitary / Luteinizing hormone (LH) / Increases production of progesterone (female), testosterone (male); stimulates ovulation
Anterior pituitary / Follicle-stimulating hormone (FSH) / Increases production of estrogen and maturation of ovum (female) and sperm production (male)
Anterior pituitary / ACTH / Increases secretion of steroid hormones by adrenal gland
Anterior pituitary / Prolactin / Increases milk production
Anterior pituitary / Growth hormone (GH), also known as somatotropin / Increases body growth, including the growth spurt during puberty
Posterior pituitary / Oxytocin / Controls uterine contractions, milk release, certain aspects of parental behavior, and sexual pleasure
Posterior pituitary / Vasopressin (also known as antidiuretic hormone) / Constricts blood vessels and raises blood pressure, decreases urine volume
Pineal / Melatonin / Increases sleepiness, influences sleep–wake cycle, also has role in onset of puberty
Thyroid / Thyroxine, triiodothyronine / Increase metabolic rate, growth, and maturation
Parathyroid / Parathyroid hormone / Increases blood calcium and decreases potassium
Adrenal cortex / Aldosterone / Reduces secretion of salts by the kidneys
Adrenal cortex / Cortisol, corticosterone / Stimulate liver to elevate blood sugar, increase metabolism of proteins and fats
Adrenal medulla / Epinephrine, norepinephrine / Similar to effects of sympathetic nervous system
Pancreas / Insulin / Increases entry of glucose to cells and increases storage as fats
Pancreas / Glucagon / Increases conversion of stored fats to blood glucose
Ovary / Estrogens / Promote female sexual characteristics
Ovary / Progesterone / Maintains pregnancy
Testis / Androgens / Promote sperm production, growth of pubic hair, and male sexual characteristics
Liver / Somatomedins / Stimulate growth
Kidney / Renin / Converts a blood protein into angiotensin, which regulates blood pressure and contributes to hypovolemic thirst
Thymus / Thymosin (and others) / Support immune responses
Fat cells / Leptin / Decreases appetite, increases activity, necessary for onset of puberty
Hormones A hormone is a chemical that is secreted, in most cases by a gland but also by other kinds of cells, and conveyed by the blood to other organs, whose activity it influences. A neurotransmitter is like a signal on a telephone line: It conveys a message directly and exclusively from the sender to the receiver. Hormones function more like a radio station: They convey a message to any receiver that happens to be tuned in to the right station. Figure 3.13 presents the major endocrine (hormone-producing) glands. Table 3.2 lists some important hormones and their principal effects. Hormones are particularly useful for coordinating long-lasting changes in multiple parts of the body. For example, birds that are preparing to migrate secrete hormones that change their eating and digestion to store extra energy for a long journey. Among the various types of hormones are protein hormones and peptide hormones, composed of chains of amino acids. (Proteins are longer chains and peptides are shorter.)
Protein and peptide hormones attach to membrane receptors where they activate a second messenger within the cell—exactly the same process as at a metabotropic synapse. In fact, many chemicals—including epinephrine, norepinephrine, insulin, and oxytocin—serve as both neurotransmitters and hormones. Just as circulating hormones modify brain activity, hormones secreted by the brain control the secretion of many other hormones. The pituitary gland, attached to the hypothalamus (Figure 3.14), consists of two distinct glands, the anterior pituitary and the posterior pituitary, which release different sets of hormones (see Table 3.2). The posterior pituitary, composed of neural tissue, can be considered an extension of the hypothalamus. Neurons in the hypothalamus synthesize the hormones oxytocin and vasopressin (also known as antidiuretic hormone), which migrate down axons to the posterior pituitary, as shown in Figure 3.15. Later, the posterior pituitary releases these hormones into the blood.
Figure 3.13 Location of some major endocrine glands (Source: Starr & Taggart, 1989)
Figure 3.14 Location of the hypothalamus and pituitary gland in the human brain (Source: Starr & Taggart, 1989)
Figure 3.15 Pituitary hormones The hypothalamus produces vasopressin and oxytocin, which travel to the posterior pituitary (really an extension of the hypothalamus). The posterior pituitary releases those hormones in response to neural signals. The hypothalamus also produces releasing hormones and inhibiting hormones, which travel to the anterior pituitary, where they control the release of six hormones synthesized there.
The anterior pituitary, composed of glandular tissue, synthesizes six hormones, although the hypothalamus controls their release (see Figure 3.15). The hypothalamus secretes releasing hormones, which flow through the blood to the anterior pituitary. There they stimulate or inhibit the release of the following hormones:
Adrenocorticotropic hormone (ACTH): controls secretions of the adrenal cortex
Thyroid-stimulating hormone (TSH): controls secretions of the thyroid gland
Prolactin: controls secretions of the mammary glands
Somatotropin, also known as growth hormone (GH): promotes growth throughout the body
Gonadotropins (follicle-stimulating hormone, FSH, and luteinizing hormone, LH): control secretions of the gonads
Figure 3.16 Negative feedback in the control of thyroid hormones The hypothalamus secretes a releasing hormone that stimulates the anterior pituitary to release TSH, which stimulates the thyroid gland to release its hormones. Those hormones in turn act on the hypothalamus to decrease its secretion of the releasing hormone.
The hypothalamus maintains fairly constant circulating levels of certain hormones through a negative feedback system. For example, when the level of thyroid hormone is low, the hypothalamus releases TSH-releasing hormone, which stimulates the anterior pituitary to release TSH, which in turn causes the thyroid gland to secrete more thyroid hormones (Figure 3.16). For more information about hormones in general, visit this site: http://www.endo-society.org/
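The logic of this loop can be made concrete with a small discrete-time simulation. Every constant in the sketch (set point, gains, clearance rate) is invented for illustration; the point is only that negative feedback settles at a steady level instead of rising without limit.

```python
# Toy negative-feedback loop: hypothalamus -> anterior pituitary (TSH) -> thyroid hormone,
# with circulating thyroid hormone suppressing the releasing hormone.
# All constants are arbitrary illustrative values.
set_point = 1.0
thyroid_hormone = 0.2  # start well below the set point
for step in range(8):
    releasing_hormone = max(set_point - thyroid_hormone, 0.0)  # hypothalamus responds to the deficit
    tsh = 0.8 * releasing_hormone                              # anterior pituitary output
    thyroid_hormone += 0.5 * tsh - 0.1 * thyroid_hormone       # thyroid output minus clearance
    print(f"step {step}: thyroid hormone = {thyroid_hormone:.2f}")
# The level climbs smoothly toward a steady value (about 0.8 here) rather than rising without limit.
```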
STOP & CHECK
7. Which has longer lasting effects, a neurotransmitter or a hormone? Which affects more organs?
8. Which part of the pituitary—anterior or posterior—is neural tissue, similar to the hypothalamus? Which part is glandular tissue and produces hormones that control the secretions by other endocrine organs?
Check your answers on page 68.
Inactivation and Reuptake of Neurotransmitters A neurotransmitter does not linger at the postsynaptic membrane. If it did, it might continue exciting or inhibiting the receptor. Various neurotransmitters are inactivated in different ways. After acetylcholine activates a receptor, it is broken down by the enzyme acetylcholinesterase (a-SEEtil-ko-lih-NES-teh-raze) into two fragments: acetate and choline. The choline diffuses back to the presyn-
aptic neuron, which takes it up and reconnects it with acetate already in the cell to form acetylcholine again. Although this recycling process is highly efficient, it takes time, and the presynaptic neuron does not reabsorb every molecule it releases. A sufficiently rapid series of action potentials at any synapse can deplete the neurotransmitter faster than the presynaptic cell replenishes it, thus slowing or halting transmission (Liu & Tsien, 1995). In the absence of acetylcholinesterase, acetylcholine remains and continues stimulating its receptor. Drugs that block acetylcholinesterase can be helpful for people with diseases that impair acetylcholine transmission, such as myasthenia gravis. Serotonin and the catecholamines (dopamine, norepinephrine, and epinephrine) do not break down into inactive fragments at the postsynaptic membrane but simply detach from the receptor. The presynaptic neuron takes up most of these neurotransmitter molecules intact and reuses them. This process, called reuptake, occurs through special membrane proteins called transporters. Many of the familiar antidepressant drugs, such as fluoxetine (Prozac), block reuptake and thereby prolong the effects of the neurotransmitter on its receptor. (Chapter 15 discusses antidepressants in more detail.) Some of the serotonin and catecholamine molecules, either before or after reuptake, are converted into inactive chemicals that cannot stimulate the receptor. The enzymes that convert catecholamine transmitters into inactive chemicals are COMT (catechol-o-methyltransferase) and MAO (monoamine oxidase), which affects serotonin as well as catecholamines. We return to MAO in the discussion of antidepressant drugs. The peptide neurotransmitters (or neuromodulators) are neither inactivated nor reabsorbed. They simply diffuse away.
STOP & CHECK
9. What happens to acetylcholine molecules after they stimulate a postsynaptic receptor?
10. What happens to serotonin and catecholamine molecules after they stimulate a postsynaptic receptor?
Check your answers on page 68.
Negative Feedback from the Postsynaptic Cell Suppose someone had a habit of sending you an e-mail message and then, worried that you might not have re-
ceived it, sending it again and again. To prevent cluttering your inbox, you might add a system that replied to any message with an automatic reply, “Yes, I got your message; don’t send it again.” A couple of mechanisms in the nervous system may serve that function. First, many presynaptic terminals have receptors sensitive to the same transmitter they release. Theoretically, these receptors may act as autoreceptors—receptors that detect the amount of transmitter released and inhibit further synthesis and release after it reaches a certain level. That is, they could provide negative feedback. However, the evidence on this point is mixed. For example, we should expect autoreceptors to be more strongly stimulated by repetitive release of the transmitter than by single release, but the data so far have been inconsistent on this point (Kalsner, 2001; Starke, 2001). Second, some postsynaptic neurons respond to stimulation by releasing special chemicals that travel back to the presynaptic terminal, where they inhibit further release of transmitter. Nitric oxide is apparently one such transmitter. Two others are anandamide and 2-AG (sn-2 arachidonylglycerol), both of which bind to the same receptors as marijuana extracts. We shall discuss them further in the next module, when we consider drugs more thoroughly. Here, the point is that neurons apparently have ways to inhibit themselves from excessive release of transmitter.
Synapses and Personality The nervous system uses, according to current estimates, about 100 neurotransmitters in various areas. Many of the neurotransmitters have multiple kinds of receptors, each with different properties. Why such an abundance of mechanisms? Presumably, each kind of synapse has a specific function in behavior or in the development of the nervous system. If each kind of synapse has its own specific role in behavior, then anyone who has more or less than the normal amount of some neurotransmitter or receptor (for genetic or other reasons) should have some altered behavioral tendencies. That is, theoretically, variations in synapses should have something to do with variations in personality. In the 1990s, researchers studied variations in dopamine receptors and found that people with one form of the D2 receptor (one of five types of dopamine receptors) were more likely than others to develop severe alcoholism. Later research suggested that this gene is not specific to alcoholism but instead increases the probability of a variety of pleasure-seeking behaviors, including alcohol consumption, other recreational drug use, overeating, and habitual gambling (K. Blum, Cull, Braverman, & Comings, 1996). Similarly, researchers studying variations in the D4 receptor found that people with an alternative form of
the receptor tend to have a “novelty-seeking” personality as measured by a standardized personality test (Benjamin et al., 1996; Ebstein et al., 1996; Elovainio, Kivimäki, Viikari, Ekelund, & Keltikangas-Järvinen, 2005; Noble et al., 1998). Novelty seeking consists of being impulsive, exploratory, and quick-tempered. Other studies have linked genes controlling various receptors to anxiety, neurotic personality, and so forth. However, most of the reported effects have been small, or based on small samples, and hard to replicate. For example, of 36 studies examining the role of one gene in anxiety, 12 found the gene significantly linked to increased anxiety, 6 found it linked to decreased anxiety, and 18 found no significant difference (Savitz & Ramesar, 2004). Although the results so far have been far from convincing, research continues. Personality is a complicated matter, difficult to measure, and no doubt controlled by all of life’s experiences as well as a great many genes. Teasing apart the contribution of any one gene will not be easy.
Module 3.2 In Closing: Neurotransmitters and Behavior
In the first module of this chapter, you read how Charles Sherrington began the study of synapses with his observations on dogs. In this module, you read about cellular and molecular processes, based on research with a wide variety of other species. For most of what we know about synapses, species differences are small or nonexistent. The microscopic structure of neuron types and ion channels is virtually the same across species. The neurotransmitters found in humans are the same as those of other species, with very few exceptions. (For example, insects apparently lack the receptors for a couple of peptide transmitters.) Evidently, evolution has been very conservative about neurotransmission. After certain chemicals proved useful for the purpose, newly evolved species have continued using those same chemicals. Therefore, the differences among species, or among individuals within a species, are quantitative. We vary in the number of various kinds of synapses, the amount of release, sensitivity of receptors, and so forth. From those quantitative variations in a few constant principles comes all the rich variation that we see in behavior.
Summary
1. The great majority of synapses operate by transmitting a neurotransmitter from the presynaptic cell to the postsynaptic cell. (p. 58)
2. Many chemicals are used as neurotransmitters. With few exceptions, each neuron releases the same combination of neurotransmitters from all branches of its axon. (p. 59)
3. At certain synapses, a neurotransmitter exerts its effects by attaching to a receptor that opens the gates to allow a particular ion, such as sodium, to cross the membrane more readily. At other synapses, a neurotransmitter may lead to slower but longer lasting changes inside the postsynaptic cell. (p. 62)
4. After a neurotransmitter (other than a peptide) has activated its receptor, many of the transmitter molecules reenter the presynaptic cell through transporter molecules in the membrane. This process, known as reuptake, enables the presynaptic cell to recycle its neurotransmitter. (p. 66)
Answers to STOP & CHECK Questions
1. When Loewi stimulated a nerve that increased or decreased a frog's heart rate, he could withdraw some fluid from the area around the heart, transfer it to another frog's heart, and thereby increase or decrease its rate also. (p. 60)
2. Epinephrine, norepinephrine, and dopamine (p. 60)
3. Many highly active neurons release nitric oxide, which dilates the blood vessels in the area and thereby increases blood flow to the area. (p. 60)
4. Calcium (p. 64)
5. Ionotropic synapses act more quickly and more briefly. (p. 64)
6. At metabotropic synapses, when the neurotransmitter attaches to its receptor, it releases a chemical (the second messenger) within the postsynaptic cell, which alters metabolism or gene expression within the postsynaptic cell. (p. 64)
7. A hormone has longer lasting effects. A hormone can affect many parts of the brain as well as other organs; a neurotransmitter affects only the neurons near its point of release. (p. 66)
8. The posterior pituitary is neural tissue, like the hypothalamus. The anterior pituitary is glandular tissue and produces hormones that control several other endocrine organs. (p. 66)
9. The enzyme acetylcholinesterase breaks acetylcholine molecules into two smaller molecules, acetate and choline, which are then reabsorbed by the presynaptic terminal. (p. 67)
10. Most serotonin and catecholamine molecules are reabsorbed by the presynaptic terminal. Some of their molecules are broken down into inactive chemicals, which then float away in the blood. (p. 67)
Thought Questions
1. Suppose that axon A enters a ganglion (a cluster of neurons) and axon B leaves on the other side. An experimenter who stimulates A can shortly thereafter record an impulse traveling down B. We want to know whether B is just an extension of axon A or whether A formed an excitatory synapse on some neuron in the ganglion, whose axon is axon B. How could an experimenter determine the answer? You should be able to think of more than one good method. Presume that the anatomy within the ganglion is so complex that you cannot simply trace the course of an axon through it.
2. Transmission of visual and auditory information relies largely on ionotropic synapses. Why is ionotropic better than metabotropic for these purposes? For what purposes might metabotropic synapses be better?
Module 3.3
Drugs and Synapses
Did you know that your brain is constantly making chemicals resembling opiates? It also makes its own marijuana-like chemicals, and it has receptors that respond to cocaine and LSD. Nearly every drug with psychological effects resembles something that the brain makes naturally and takes advantage of mechanisms that evolved to handle normal synaptic transmission. (The exceptions are Novocain and related anesthetic drugs that block sodium channels in the membrane instead of acting at synapses.) By studying the effects of drugs, we learn more about the drugs, of course, but also about synapses. This module deals mainly with abused drugs; later chapters will also discuss antidepressants, antipsychotic drugs, tranquilizers, and other psychiatric medications.
Figure 3.17 Effects of some drugs at dopamine synapses Drugs can alter any stage of synaptic processing, from synthesis of the neurotransmitter through release and reuptake. (Details from the figure: AMPT can block the conversion of tyrosine to DOPA; DOPA can increase the supply of dopamine; reserpine can cause dopamine to leak from its vesicles; amphetamine increases dopamine release; certain antidepressants block the conversion of dopamine to its inactive metabolite DOPAC; typical antipsychotic drugs, such as haloperidol, block dopamine receptors; cocaine blocks reuptake, as do methylphenidate and tricyclic antidepressants, but less strongly.)
Most of the commonly abused drugs derive from plants. For example, nicotine comes from tobacco, caffeine from coffee, opiates from poppies, cocaine from coca, and tetrahydrocannabinol from marijuana. When we stop to think about it, we should be puzzled that our brains respond to plant chemicals. An explanation is more apparent if we put it the other way: Why do plants produce chemicals that affect our brains? Part of the answer relates to the surprising fact that, with few exceptions, human neurotransmitters and
hormones are the same as those of other animal species (Cravchik & Goldman, 2000). So if a plant evolves some chemical to attract wasps to its flowers, to deter caterpillars from eating its leaves, or whatever, that chemical is likely to affect humans also. Another part of the answer is that many of our neurotransmitters are also found in plants. For example, plants not only have glutamate, but they also have ionotropic glutamate receptors chemically similar to our own (Lam et al., 1998). Exactly what these receptors do in plants is uncertain; perhaps they serve for some type of communication within a plant, even though the plant has no nervous system. Evidently, a small number of chemicals are so well suited to conveying information that evolution has had no reason to replace them.
Drug Mechanisms Drugs can either facilitate or inhibit transmission at synapses. A drug that blocks the effects of a neurotransmitter is called an antagonist; a drug that mimics or increases the effects is called an agonist. (The term agonist is derived from a Greek word meaning “contestant.” We derive our term agony from the same root. An antagonist is an “antiagonist,” or member of the opposing team.) A drug that is a mixed agonist–antagonist is an agonist for some behavioral effects of the neurotransmitter and an antagonist for others, or an agonist at some doses and an antagonist at others. Drugs influence synaptic activity in many ways. As in Figure 3.17, which illustrates a dopamine synapse, a drug can increase or decrease the synthesis of the neurotransmitter, cause it to leak from its vesicles, increase its release, decrease its reuptake, block its breakdown into inactive chemicals, or directly stimulate or block the postsynaptic receptors. Investigators say that a drug has an affinity for a particular type of receptor if it binds to that receptor, fitting somewhat like a lock and key. Drugs vary in their affinities from strong to weak. The efficacy of a drug is its tendency to activate the receptor. So, for example, a drug that binds tightly to a receptor but fails to stimulate it has a high affinity but a low efficacy. You may have noticed that the effectiveness and side effects of tranquilizers, antidepressants, and other drugs vary from one person to another. Why? One reason is that most drugs affect several kinds of receptors. People vary in their abundance of each kind of receptor. For example, one person might have a relatively large number of dopamine type D4 receptors and relatively few D1 or D2 receptors, whereas someone else has more D1, fewer D4, and so forth. Therefore, a drug with an affinity for dopamine receptors will affect different kinds of dopamine receptors in different people (Cravchik & Goldman, 2000).
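The affinity/efficacy distinction can be illustrated with the standard single-site occupancy equation, treating efficacy as a simple multiplier on the fraction of receptors a drug occupies. The Kd and efficacy values below are invented for illustration only.

```python
# Affinity: how readily the drug occupies the receptor (lower Kd = tighter binding).
# Efficacy: how strongly an occupied receptor is activated (0 = not at all, 1 = fully).
def occupancy(concentration, kd):
    return concentration / (concentration + kd)      # fraction of receptors occupied

def response(concentration, kd, efficacy):
    return efficacy * occupancy(concentration, kd)   # fraction of maximal activation

dose = 10.0  # arbitrary concentration units
print(response(dose, kd=10.0, efficacy=1.0))  # 0.50: moderate affinity, high efficacy (agonist)
print(response(dose, kd=0.1, efficacy=0.0))   # 0.00: high affinity, zero efficacy (an antagonist)
print(response(dose, kd=0.1, efficacy=0.3))   # ~0.30: high affinity, low efficacy
```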
STOP & CHECK
1. Is a drug with high affinity and low efficacy an agonist or an antagonist?
Check your answers on page 77.
Common Drugs and Their Synaptic Effects We categorize drugs based on their predominant action. For example, amphetamine and cocaine are stimulants; opiates are narcotics; LSD is a hallucinogen. Despite their differences, however, nearly all abused drugs directly or indirectly stimulate the release of dopamine, especially in the nucleus accumbens, a small subcortical area rich in dopamine receptors (Figure 3.18). Dopamine is in most cases an inhibitory transmitter; however, in the nucleus accumbens, sustained bursts of dopamine inhibit cells that release the inhibitory transmitter GABA. Thus, by inhibiting release of an inhibitor, dopamine has the net effect of exciting the nucleus accumbens (Hjelmstad, 2004).
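The "inhibiting an inhibitor" arithmetic can be sketched in two steps. The numbers and the linear subtractions are assumptions chosen only to show the direction of the net effect, not measurements of real firing rates.

```python
# Disinhibition sketch: dopamine suppresses GABA cells, and those GABA cells
# normally suppress the nucleus accumbens. All values are illustrative.
def accumbens_activity(dopamine_burst):
    gaba_activity = max(1.0 - dopamine_burst, 0.0)  # dopamine inhibits the GABA neurons
    return max(2.0 - gaba_activity, 0.0)            # GABA in turn inhibits accumbens cells

print(accumbens_activity(0.0))  # 1.0: baseline, full GABA tone holds activity down
print(accumbens_activity(0.8))  # 1.8: a dopamine burst lifts the brake, producing net excitation
```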
Stimulant Drugs
Stimulant drugs increase excitement, alertness, and motor activity, elevate mood, and decrease fatigue. Whereas nearly all abused drugs elevate dopamine activity in the nucleus accumbens in some way, stimulant drugs do so directly, especially at dopamine receptor types D2, D3, and D4 (R. A. Harris, Brodie, & Dunwiddie, 1992; Wise & Bozarth, 1987). Because dopamine is in many cases an inhibitory transmitter, drugs that increase activity at dopamine synapses decrease the activity in much of the brain. Figure 3.19 shows the results of a PET scan, which measures relative amounts of activity in various brain areas (London et al., 1990). (PET scans are a method of visualizing brain activity, as described in Chapter 4.) How, you might wonder, could drugs that decrease brain activity lead to behavioral arousal? One hypothesis is that high dopamine activity mostly decreases "background noise" in the brain and therefore increases the signal-to-noise ratio (Mattay et al., 1996; Willson, Wilman, Bell, Asghar, & Silverstone, 2004).
Amphetamine stimulates dopamine synapses by increasing the release of dopamine from the presynaptic terminal. The presynaptic terminal ordinarily reabsorbs released dopamine through a protein called the dopamine transporter. Amphetamine reverses the transporter, causing the cell to release dopamine instead of reabsorbing it (Giros, Jaber, Jones, Wightman, & Caron, 1996).
Figure 3.18 Location of the nucleus accumbens in the human brain Nearly all abused drugs, as well as a variety of other highly reinforcing or addictive activities, increase dopamine release in the nucleus accumbens.
(Figure 3.19: images not available due to copyright restrictions)
Amphetamine also blocks certain receptors that inhibit dopamine release, so it enhances dopamine release in at least two ways (Paladini, Fiorillo, Morikawa, & Williams, 2001). However, amphetamine's effects are nonspecific, as it also increases the release of serotonin, norepinephrine, and other transmitters.
Cocaine blocks the reuptake of dopamine, norepinephrine, and serotonin, thus prolonging their effects. Several kinds of evidence indicate that the behavioral effects of cocaine depend mainly on increasing dopamine effects and secondarily on serotonin effects (Rocha et al., 1998; Volkow, Wang, Fischman, et al., 1997). Many of the effects of cocaine resemble those of amphetamine.
The effects of cocaine and amphetamine on dopamine synapses are intense but short-lived. By increasing the release of dopamine or decreasing its reuptake, the drugs increase the accumulation of dopamine in the synaptic cleft. However, the excess dopamine washes away from the synapse faster than the presynaptic cell can make more to replace it. As a result, a few hours after taking amphetamine or cocaine, a user "crashes" into a more depressed state. These withdrawal symptoms are not as powerful as those of heroin, but they are easily noticeable.
Stimulant drugs are known primarily for their short-term effects, but repeated use of high doses can produce long-term problems. Cocaine users suffer lasting changes in brain metabolism and blood flow, thereby increasing their risk of stroke, epilepsy, and memory impairments (Strickland, Miller, Kowell, & Stein, 1998).
Methylphenidate (Ritalin), another stimulant drug, is often prescribed for people with attention-deficit disorder (ADD), a condition marked by impulsiveness and poor control of attention. Methylphenidate and cocaine block the reuptake of dopamine in the same way at the same sites (the dopamine transporters). The differences between the drugs relate to dose and time course. Drug abusers, of course, use large doses of cocaine, whereas anyone following a physician's directions uses only small amounts of methylphenidate. Furthermore, when people take methylphenidate pills, the drug's concentration in the brain increases gradually over an hour or more and then declines slowly. In contrast, sniffed or injected cocaine produces a rapid rise and fall of effects (Volkow, Wang, & Fowler, 1997; Volkow, Wang, Fowler, et al., 1998). Therefore, methylphenidate does not produce the sudden rush of excitement, the strong withdrawal effects, the cravings, or the addiction that are common with cocaine. Larger amounts of methylphenidate, however, produce effects resembling cocaine's.
Many people wonder whether prolonged use of methylphenidate in childhood makes people more likely to abuse drugs later. According to a review of the few studies conducted on this issue, children who take methylphenidate are less likely than others to abuse drugs during adolescence or early adulthood (Wilens, Faraone, Biederman, & Gunawardene, 2003). Of course, the researchers could not randomly assign children to methylphenidate and control conditions. However, experiments with rats point to the same conclusion. For example, experimenters gave young rats moderate doses of methylphenidate and then, months later, gave them cocaine and tested their preference for the room where they received cocaine versus an adjoining room where they had not received it. Compared to other rats, those with early exposure to methylphenidate showed a lower preference for the stimuli associated with cocaine and in some cases an avoidance of them (Andersen, Arvanitogiannis, Pliakas, LeBlanc, & Carlezon, 2002; Carlezon, Mague, & Andersen, 2003; Mague, Andersen, & Carlezon, 2005). Although these studies contradict the worry that early methylphenidate treatment might lead to later drug abuse, prolonged use of methylphenidate leads to other long-term disadvantages, including increased fear responses and a possibly increased risk of depression (Bolaños, Barrot, Berton, Wallace-Black, & Nestler, 2003; Carlezon et al., 2003).
The drug methylenedioxymethamphetamine (MDMA, or "ecstasy") is a stimulant at low doses, increasing the release of dopamine. At higher doses (comparable to the doses people use recreationally), it also increases serotonin release, producing hallucinogenic effects as well. In monkey studies, MDMA not only stimulates axons that release serotonin but also destroys them (McCann, Lowe, & Ricaurte, 1997). Research on humans is difficult because researchers cannot randomly assign people to MDMA and control groups. Many studies have found that repeated MDMA users, in comparison to nonusers, have more anxiety, depression, sleep problems, memory deficits, attention problems, and impulsiveness, even a year or two after quitting (Montoya, Sorrentino, Lukas, & Price, 2002). Like laboratory animals, humans exposed to MDMA have long-term deficits in serotonin synthesis and release (Parrott, 2001). One study found that repeated MDMA users have thinner cell layers in certain brain areas, as shown in Figure 3.20 (P. M. Thompson et al., 2004). We do not know, of course, how many of the users had behavioral problems or thin cell layers in the brain prior to using MDMA. (Maybe people who already have anxiety, memory problems, and so forth are more likely than others to use MDMA.) Also, some MDMA users abuse other drugs as well. Still, the evidence suggests serious risks associated with this drug.
Figure 3.20 Brain thinning in MDMA users The figure on the left is the average brain scan for 21 adults who had never been substance abusers; the figure on the right is the average of 22 adults who had used MDMA frequently for years. Red represents the thickest area of brain cells, followed by yellow, green, blue, and purple. Note the apparent thinning in certain areas for the MDMA users. (Source: From "Structural abnormalities in the brains of human subjects who use methamphetamine," by P. M. Thompson et al., Journal of Neuroscience, 24. Copyright 2004 by the Society for Neuroscience. Reprinted with permission.)
STOP & CHECK
2. How does amphetamine influence dopamine synapses?
3. How does cocaine influence dopamine synapses?
4. Why is methylphenidate generally less disruptive to behavior than cocaine is despite the drugs' similar mechanisms?
5. Does cocaine increase or decrease overall brain activity?
Check your answers on page 77.
Nicotine
Nicotine, a compound present in tobacco, has long been known to stimulate one type of acetylcholine receptor, conveniently known as the nicotinic receptor, which is found both in the central nervous system and at the nerve-muscle junction of skeletal muscles. Nicotinic receptors are abundant on neurons that release dopamine in the nucleus accumbens, so nicotine increases dopamine release there (Levin & Rose, 1995; Pontieri, Tanda, Orzi, & DiChiara, 1996). In fact, nicotine increases dopamine release in mostly the same cells of the nucleus accumbens that cocaine does (Pich et al., 1997). One consequence of repeated exposure to nicotine, as demonstrated in rat studies, is that after the end of nicotine use, the nucleus accumbens cells responsible for reinforcement become less responsive than usual (Epping-Jordan, Watkins, Koob, & Markou, 1998). That is, many events, not just nicotine itself, become less reinforcing than they used to be.
Opiates
Opiate drugs are derived from (or similar to those derived from) the opium poppy. Familiar opiates include morphine, heroin, and methadone. Opiates relax people, decrease their attention to real-world problems, and decrease their sensitivity to pain. Although opiates are frequently addictive, people who take them as painkillers under medical supervision almost never abuse them. Addiction is not entirely a product of the drug; it depends on the person, the reasons for taking the drug, the dose, and the social setting.
People used morphine and other opiates for centuries without knowing how the drugs affected the brain. Then Candace Pert and Solomon Snyder found that opiates attach to specific receptors in the brain (Pert & Snyder, 1973). It was a safe guess that vertebrates had not evolved such receptors just to enable us to become drug addicts; the brain must produce its own chemical that attaches to these receptors. Soon investigators found that the brain produces peptides now known as the endorphins—a contraction of endogenous morphines. This discovery was important because it indicated that opiates relieve pain by acting on receptors in the brain, not just out in the skin or organs where people felt the pain. Also, this discovery paved the way for the discovery of other peptides that regulate emotions and motivations.
Endorphin synapses may contribute directly to certain kinds of reinforcement (Nader, Bechara, Roberts, & van der Kooy, 1994), but they also act indirectly by way of dopamine. Endorphin synapses inhibit ventral tegmental neurons (in the midbrain) that release GABA, a transmitter that inhibits the firing of dopamine neurons (North, 1992). Thus, by inhibiting an inhibitor, the net effect is to increase dopamine release. Endorphins also block the locus coeruleus, a hindbrain area that responds to arousing or stressful stimuli by releasing norepinephrine, which facilitates memory storage. When endorphins or opiate drugs block this area, the result is decreased response to stress and decreased memory storage—two effects common among opiate users (Curtis, Bello, & Valentine, 2001).
STOP & CHECK
6. How does nicotine influence dopamine synapses?
7. How do opiates influence dopamine synapses?
Check your answers on page 77.
Marijuana
The leaves of the marijuana plant contain the chemical ∆9-tetrahydrocannabinol (∆9-THC) and other cannabinoids (chemicals related to ∆9-THC).
Marijuana has been used medically to relieve pain or nausea, to combat glaucoma (an eye disorder), and to increase appetite. Chemically synthesized and purified THC (under the name dronabinol) has been approved for medical use in the United States, although marijuana itself has not—except in California, where state law and federal law are in conflict.
Common psychological effects of marijuana include an intensification of sensory experience and an illusion that time is passing very slowly. Studies have reported significant impairments of memory and cognition, especially in new users and heavy users. (Occasional moderate users develop partial tolerance.) The observed memory impairments in heavy users could mean either that marijuana impairs memory or that people with memory impairments are more likely to use marijuana. However, former users recover normal memory after 4 weeks of abstention from the drug (Pope, Gruber, Hudson, Huestis, & Yurgelun-Todd, 2001). The recovery implies that marijuana impairs memory.
Cannabinoids dissolve in the body's fats and leave slowly. One consequence is that certain behavioral effects can last hours (Howlett et al., 2004) and that traces of marijuana can be detected in the blood or urine for weeks. Another consequence is that quitting marijuana does not produce sudden withdrawal symptoms like those of opiates. Heavy marijuana smokers who abstain do report a moderate amount of irritability, restlessness, anxiety, depression, stomach pain, sleep difficulties, craving for marijuana, and loss of appetite (Budney, Hughes, Moore, & Novy, 2001).
Investigators could not explain the effects of marijuana on the brain until 1988, when researchers finally found the brain's cannabinoid receptors (Devane, Dysarz, Johnson, Melvin, & Howlett, 1988). These receptors are widespread in the animal kingdom, having been reported in every species tested so far, except for insects (McPartland, DiMarzo, de Petrocellis, Mercer, & Glass, 2001). Cannabinoid receptors are among the most abundant receptors in the mammalian brain, especially in the hippocampus, the basal ganglia, the cerebellum, and parts of the cerebral cortex (Herkenham, 1992; Herkenham, Lynn, de Costa, & Richfield, 1991). However, they are sparse in the medulla and the rest of the brainstem. That near absence is significant because the medulla includes the neurons that control breathing and heartbeat. Because marijuana has little effect on heartbeat or breathing, it poses almost no risk of a fatal overdose. Marijuana does produce other health hazards, including an increased risk of lung cancer (Zhu et al., 2000).
Just as the discovery of opiate receptors in the brain led to finding the brain's endogenous opiates, investigators searched for a brain chemical that binds to cannabinoid receptors. They found two such chemicals—
anandamide (from the Sanskrit word ananda, meaning "bliss") (Calignano, LaRana, Giuffrida, & Piomelli, 1998; DiMarzo et al., 1994) and the more abundant sn-2 arachidonylglycerol, abbreviated 2-AG (Stella, Schweitzer, & Piomelli, 1997).
The cannabinoid receptors are peculiar in being located on the presynaptic neuron, not the postsynaptic one. When glutamate depolarizes the postsynaptic neuron, the postsynaptic neuron releases anandamide or 2-AG, which travels back to the presynaptic neuron, temporarily decreasing transmitter release (Kreitzer & Regehr, 2001; R. I. Wilson & Nicoll, 2002). Cannabinoids also diffuse to inhibit the release of GABA from other nearby neurons (Galante & Diana, 2004). That is, cannabinoids put the brakes on the release of both glutamate, which is excitatory, and GABA, which is inhibitory (Kreitzer & Regehr, 2001; Ohno-Shosaku, Maejima, & Kano, 2001; R. I. Wilson & Nicoll, 2001). In some cases, the result is decreased release of transmitter in a fairly wide area (Kreitzer, Carter, & Regehr, 2002).
The functions of cannabinoids are partly known, partly unknown. After a rapid burst of glutamate release, cannabinoids feed back to retard further glutamate release (the negative feedback discussed on page 67). Excessive glutamate stimulation can be harmful, even fatal, to a postsynaptic cell, so halting it is clearly helpful (Marsicano et al., 2003). The value of blocking GABA is less obvious but may be related to the formation of memories (Chevaleyre & Castillo, 2004; Gerdeman, Ronesi, & Lovinger, 2002).
Marijuana stimulates cannabinoid receptors at times when they would not ordinarily receive stimulation. In effect, the drug gives glutamate-releasing neurons the message "I already received your message" before the message is sent, shutting down the normal flow of information. Why are these effects pleasant or habit forming? Actually, of course, not all of the effects of marijuana are pleasant. But remember that virtually all abused drugs increase the release of dopamine in the nucleus accumbens. Cannabinoids do so indirectly. One place in which they inhibit GABA release is the ventral tegmental area of the midbrain, a major source of axons that release dopamine in the nucleus accumbens. When cannabinoids inhibit GABA there, the result is less inhibition (therefore increased activity) of the neurons that release dopamine in the nucleus accumbens (Cheer, Wassum, Heien, Phillips, & Wightman, 2004).
Researchers have tried to explain some of marijuana's other behavioral effects. Cannabinoids relieve nausea by inhibiting serotonin type 3 synapses (5-HT3), which are known to be important for nausea (Fan, 1995). Cannabinoid receptors are abundant in areas of the hypothalamus that influence feeding, and mice lacking these receptors show decreased appetite under some circumstances (DiMarzo et al., 2001). Presumably, excess cannabinoid activity would produce extra appetite.
The report that “time passes more slowly” under marijuana’s influences is harder to explain, but whatever the cause, we can demonstrate it in rats as well: Consider a rat that has learned to press a lever for food on a fixed-interval schedule, where just the first press of any 30-second period produces food. With practice, a rat learns to wait after each press before it starts pressing again. Under the influence of marijuana, rats press sooner after each reinforcer. For example, instead of waiting 20 seconds, a rat might wait only 10 or 15. Evidently, the 10 or 15 seconds felt like 20 seconds; time was passing more slowly (Han & Robinson, 2001).
Hallucinogenic Drugs
Drugs that distort perception are called hallucinogenic drugs. Many hallucinogenic drugs, such as lysergic acid diethylamide (LSD), chemically resemble serotonin (Figure 3.21) and stimulate serotonin type 2A (5-HT2A) receptors at inappropriate times or for longer than usual durations. You could compare this effect to a key that almost fits a lock, so that when you get it in, it's hard to get it out. (The analogy is not perfect, but it conveys the general idea.)
Figure 3.21 Resemblance of the neurotransmitter serotonin to LSD, a hallucinogenic drug
Note that we understand the chemistry better than the psychology. LSD exerts its effects at 5-HT2A receptors, but why do effects at those receptors produce hallucinations? Table 3.3 summarizes the effects of some commonly abused drugs.
STOP & CHECK
8. What are the effects of cannabinoids on neurons?
9. If incoming serotonin axons were destroyed, LSD would still have its normal effects on the postsynaptic cell. However, if incoming dopamine and norepinephrine axons were destroyed, amphetamine and cocaine would have no effects. Explain the difference.
Check your answers on page 77.
Table 3.3 Summary of Some Drugs and Their Effects
Drugs | Main Behavioral Effects | Main Synaptic Effects
Amphetamine | Excitement, alertness, elevated mood, decreased fatigue | Increases release of dopamine and several other transmitters
Cocaine | Excitement, alertness, elevated mood, decreased fatigue | Blocks reuptake of dopamine and several other transmitters
Methylphenidate (Ritalin) | Increased concentration | Blocks reuptake of dopamine and others, but gradually
MDMA ("ecstasy") | Low dose: stimulant; higher dose: sensory distortions | Releases dopamine; at higher doses also releases serotonin and damages axons containing serotonin
Nicotine | Mostly stimulant effects | Stimulates nicotinic-type acetylcholine receptor, which (among other effects) increases dopamine release in nucleus accumbens
Opiates (e.g., heroin, morphine) | Relaxation, withdrawal, decreased pain | Stimulates endorphin receptors
Cannabinoids (marijuana) | Altered sensory experiences, decreased pain and nausea, increased appetite | Excites negative-feedback receptors on presynaptic cells; those receptors ordinarily respond to anandamide and 2-AG
Hallucinogens (e.g., LSD) | Distorted sensations | Stimulates serotonin type 2A receptors (5-HT2A)
Module 3.3
In Closing: Drugs and Behavior
Suppose that while several people are communicating by e-mail or instant messaging, we arrange for the computers to delete certain letters or words and add others in their place. The result would be garbled communication, but sometimes it might be entertaining. Abused drugs are analogous: They distort the brain's communication, blocking some messages, magnifying others, and substituting one message for another. The result can be pleasant or entertaining at times, although also troublesome.
In studying the effects of drugs, researchers have gained clues that may help combat drug abuse, as we shall consider in Chapter 15. They have also learned much about synapses. For example, the research on cocaine called attention to the importance of reuptake transporters, and the research on cannabinoids led to increased understanding of the retrograde signaling from postsynaptic cells to presynaptic cells. However, from the standpoint of understanding the physiology of behavior, much remains to be learned. For example, research has identified dopamine activity in the nucleus accumbens as central to reinforcement and addiction, but . . . well, why is dopamine activity in that location reinforcing? Stimulation of 5-HT2A receptors produces hallucinations, but again we ask, "Why?" In neuroscience or biological psychology, answering one question leads to new ones, and the deepest questions are usually the most difficult.
Summary
1. A drug that increases activity at a synapse is an agonist; one that decreases activity is an antagonist. Drugs act in many ways, varying in their affinity (tendency to bind to a receptor) and efficacy (tendency to activate it). (p. 71)
2. Nearly all abused drugs, as well as many other highly reinforcing experiences, increase the release of dopamine in the nucleus accumbens. (p. 71)
3. Amphetamine increases the release of dopamine. Cocaine and methylphenidate act by blocking the reuptake transporters and therefore decreasing the reuptake of dopamine and serotonin after their release. (p. 71)
4. Nicotine excites acetylcholine receptors, including the ones on axon terminals that release dopamine in the nucleus accumbens. (p. 73)
5. Opiate drugs stimulate endorphin receptors, which inhibit the release of GABA, which would otherwise inhibit the release of dopamine. Thus, the net effect of opiates is increased dopamine release. (p. 74)
6. At certain synapses in many brain areas, after glutamate excites the postsynaptic cell, the cell responds by releasing endocannabinoids, which inhibit further release of both glutamate and GABA by nearby neurons. Chemicals in marijuana mimic the effects of these endocannabinoids. (p. 74)
7. Hallucinogens act by stimulating certain kinds of serotonin receptors. (p. 75)
Answers to STOP & CHECK Questions
1. Such a drug is therefore an antagonist because, by occupying the receptor, it prevents the normal effects of the transmitter. (p. 71)
2. Amphetamine causes the dopamine transporter to release dopamine instead of reabsorbing it. (p. 73)
3. Cocaine interferes with reuptake of released dopamine. (p. 73)
4. The effects of a methylphenidate pill develop and decline in the brain much more slowly than do those of cocaine. (p. 73)
5. Cocaine decreases total activity in the brain because it stimulates activity of dopamine, which is an inhibitory transmitter in many cases. (p. 73)
6. Nicotine excites acetylcholine receptors on neurons that release dopamine and thereby increases dopamine release. (p. 74)
7. Opiates stimulate endorphin synapses, which inhibit GABA synapses on certain cells that release dopamine. By inhibiting an inhibitor, opiates increase the release of dopamine. (p. 74)
8. Cannabinoids released by the postsynaptic neuron attach to receptors on presynaptic neurons, where they inhibit further release of glutamate as well as GABA. (p. 75)
9. Amphetamine acts by releasing norepinephrine and dopamine from the presynaptic neurons. If those neurons are damaged, amphetamine is ineffective. In contrast, LSD directly stimulates the receptor on the postsynaptic membrane. (p. 75)
Thought Question
1. People who take methylphenidate (Ritalin) for control of attention-deficit disorder often report that, although the drug increases their arousal for a while, they feel a decrease in alertness and arousal a few hours later. Explain.
Chapter Ending
Key Terms and Activities
Terms
acetylcholine (p. 60)
acetylcholinesterase (p. 66)
affinity (p. 71)
2-AG (p. 75)
agonist (p. 71)
amino acids (p. 60)
amphetamine (p. 71)
anandamide (p. 75)
antagonist (p. 71)
anterior pituitary (p. 65)
autoreceptors (p. 67)
cannabinoids (p. 74)
catecholamine (p. 60)
cocaine (p. 72)
COMT (p. 67)
∆9-tetrahydrocannabinol (∆9-THC) (p. 74)
efficacy (p. 71)
endocrine glands (p. 65)
excitatory postsynaptic potential (EPSP) (p. 53)
exocytosis (p. 61)
G-protein (p. 62)
hallucinogenic drugs (p. 75)
hormone (p. 65)
inhibitory postsynaptic potential (IPSP) (p. 55)
ionotropic effect (p. 62)
MAO (p. 67)
metabotropic effect (p. 62)
methylphenidate (p. 72)
monoamines (p. 60)
neuromodulator (p. 64)
neurotransmitter (p. 59)
nicotine (p. 73)
nitric oxide (p. 60)
nucleus accumbens (p. 71)
opiate drugs (p. 74)
oxytocin (p. 65)
peptide (p. 60)
peptide hormone (p. 65)
pituitary gland (p. 65)
posterior pituitary (p. 65)
postsynaptic neuron (p. 53)
presynaptic neuron (p. 53)
protein hormone (p. 65)
purines (p. 60)
reflex (p. 52)
reflex arc (p. 52)
releasing hormone (p. 66)
reuptake (p. 67)
second messenger (p. 63)
spatial summation (p. 53)
spontaneous firing rate (p. 56)
stimulant drugs (p. 71)
synapse (p. 52)
temporal summation (p. 53)
transporter (p. 67)
vasopressin (p. 65)
vesicle (p. 61)
Suggestions for Further Reading
Cowan, W. M., Südhof, T. C., & Stevens, C. F. (2001). Synapses. Baltimore, MD: Johns Hopkins University Press. If you are curious about some detailed aspect of synapses, this is a good reference book to check for an answer.
McKim, W. A. (2003). Drugs and behavior (5th ed.). Upper Saddle River, NJ: Prentice Hall. Concise, informative text on drugs and drug abuse.
Websites to Explore
http://www.thomsonedu.com
Go to this site for the link to ThomsonNOW, your one-stop study shop. Take a Pre-Test for this chapter, and ThomsonNOW will generate a Personalized Study Plan based on your test results. The Study Plan will identify the topics you need to review and direct you to online resources to help you master these topics. You can then take a Post-Test to help you determine the concepts you have mastered and what you still need work on.
You can go to the Biological Psychology Study Center and click these links: Critical Thinking (essay questions) and Chapter Quiz (multiple-choice questions). While there, you can also check for suggested articles available on InfoTrac College Edition. The Biological Psychology Internet address is: http://psychology.wadsworth.com/book/kalatbiopsych9e/
Exploring Biological Psychology CD
Postsynaptic Potentials (animation)
Release of Neurotransmitter (animation)
Cholinergic Synapse (animation)
Release of ACh (animation)
AChE Inactivates ACh (animation)
AChE Inhibitors (animation)
Opiate Narcotics (animation)
This animation shows how a drug can block reuptake of a neurotransmitter. You can find this animation in Chapter 3 on the CD.
4 Anatomy of the Nervous System

Chapter Outline
Module 4.1 Structure of the Vertebrate Nervous System
Terminology That Describes the Nervous System
The Spinal Cord
The Autonomic Nervous System
The Hindbrain
The Midbrain
The Forebrain
The Ventricles
In Closing: Learning Neuroanatomy
Summary
Answers to Stop & Check Questions
Thought Question
Module 4.2 The Cerebral Cortex
Organization of the Cerebral Cortex
The Occipital Lobe
The Parietal Lobe
The Temporal Lobe
The Frontal Lobe
How Do the Parts Work Together?
In Closing: Functions of the Cerebral Cortex
Summary
Answers to Stop & Check Questions
Thought Question
Module 4.3 Research Methods
Correlating Brain Anatomy with Behavior
Recording Brain Activity
Effects of Brain Damage
Effects of Brain Stimulation
Brain and Intelligence
In Closing: Research Methods and Their Limits
Summary
Answers to Stop & Check Questions
Thought Question
Terms
Suggestions for Further Reading
Websites to Explore
Exploring Biological Psychology CD
ThomsonNOW

Main Ideas
1. Each part of the nervous system has specialized functions, and the parts work together to produce behavior. Damage to different areas results in different types of behavioral deficits.
2. The cerebral cortex, the largest structure in the mammalian brain, elaborately processes sensory information and provides for fine control of movement.
3. As research has identified the different functions of different brain areas, a difficult question has arisen: How do the areas work together to produce unified experience and behavior?
4. It is difficult to conduct research on the functions of the nervous system. Conclusions must come from multiple methods and careful behavioral measurements.

Opposite: New methods allow researchers to examine living brains. (Source: Peter Beck/CORBIS)
Trying to learn neuroanatomy (the anatomy of the nervous system) from a book is like trying to learn geography from a road map. A map can tell you that Mystic, Georgia, is about 40 km north of Enigma, Georgia. Similarly, a book can tell you that the habenula is about 4.6 mm from the interpeduncular nucleus in a rat's brain (slightly farther in a human brain). But these little gems of information will seem both mysterious and enigmatic unless you are concerned with that part of Georgia or that area of the brain.
This chapter does not provide a detailed road map of the nervous system. It is more like a world globe, describing the large, basic structures (analogous to the continents) and some distinctive features of each. The first module introduces key neuroanatomical terms and outlines overall structures of the nervous system. In the second module, we concentrate on the structures and functions of the cerebral cortex, the largest part of the mammalian central nervous system. The third module deals with the main methods that researchers use to discover the behavioral functions of different brain areas.
Be prepared: This chapter contains a huge number of new terms. You should not expect to memorize all of them at once, and it will pay to review this chapter repeatedly.
Module 4.1
Structure of the Vertebrate Nervous System
Your nervous system consists of many substructures, each of them including many neurons, each of which receives and makes many synapses. How do all those little parts work together to make one behaving unit? Does each neuron have an independent function so that, for example, one cell recognizes your grandmother, another controls your desire for pizzas, and another makes you smile at babies? Or does the brain operate as an undifferentiated whole, with each part doing the same thing as every other part? The answer is "something in between those extremes." Individual neurons do have specialized functions, but the activity of a single cell by itself has no more meaning than the letter h has out of context. In some ways, neurons are like the people who compose a complex society: Each person has a specialized job, but few of us could do our jobs without the collaboration of a huge number of other people. Also, when various individuals become injured or otherwise unable to perform their jobs, other people change what they are doing to adjust. So we have neither complete specialization nor complete lack of specialization.
Figure 4.1 The human nervous system Both the central nervous system and the peripheral nervous system have major subdivisions. The closeup of the brain shows the right hemisphere as seen from the midline.
Terminology That Describes the Nervous System
Vertebrates have a central nervous system and a peripheral nervous system, which are of course connected (Figure 4.1). The central nervous system (CNS) is the brain and the spinal cord, each of which includes a great many substructures.
Table 4.1 Anatomical Terms Referring to Directions
Term | Definition
Dorsal | Toward the back, away from the ventral (stomach) side. The top of the brain is considered dorsal because it has that position in four-legged animals.
Ventral | Toward the stomach, away from the dorsal (back) side
Anterior | Toward the front end
Posterior | Toward the rear end
Superior | Above another part
Inferior | Below another part
Lateral | Toward the side, away from the midline
Medial | Toward the midline, away from the side
Proximal | Located close (approximate) to the point of origin or attachment
Distal | Located more distant from the point of origin or attachment
Ipsilateral | On the same side of the body (e.g., two parts on the left or two on the right)
Contralateral | On the opposite side of the body (one on the left and one on the right)
Coronal plane (or frontal plane) | A plane that shows brain structures as seen from the front
Sagittal plane | A plane that shows brain structures as seen from the side
Horizontal plane (or transverse plane) | A plane that shows brain structures as seen from above
The peripheral nervous system (PNS)—the nerves outside the brain and spinal cord—has two divisions: The somatic nervous system consists of the nerves that convey messages from the sense organs to the CNS and from the CNS to the muscles. The autonomic nervous system controls the heart, the intestines, and other organs.
To follow a road map, you first must understand the terms north, south, east, and west. Because the nervous system is a complex three-dimensional structure, we need more terms to describe it. As Figure 4.2 and Table 4.1 indicate, dorsal means toward the back and ventral means toward the stomach. (One way to remember these terms is that a ventriloquist is literally a "stomach talker.") In a four-legged animal, the top of the brain (with respect to gravity) is dorsal (on the same side as the animal's back), and the bottom of the brain is ventral (on the stomach side).
Figure 4.2 Terms for anatomical directions in the nervous system In four-legged animals, dorsal and ventral point in the same direction for the head as they do for the rest of the body. However, humans' upright posture has tilted the head, so the dorsal and ventral directions of the head are not parallel to those of the spinal cord.
When humans evolved an upright posture, the position of our head changed relative to the spinal cord. For convenience, we still apply the terms dorsal and ventral to the same parts of the human brain as other vertebrate brains. Consequently, the dorsal–ventral axis of the human brain is at a right angle to the dorsal–ventral axis of the spinal cord. If you picture a person in a crawling position with all four limbs on the ground but nose pointing forward, the dorsal and ventral positions of the brain become parallel to those of the spinal cord. Table 4.2 introduces additional terminology. Such technical terms may seem confusing at first, but they help investigators communicate unambiguously. Tables 4.1 and 4.2 require careful study and review. After studying the terms, check yourself with the following.
STOP & CHECK
1. What does dorsal mean, and what term is its opposite?
2. What term means toward the side, away from the midline, and what term is its opposite?
3. If two structures are both on the left side of the body, they are ______ to each other. If one is on the left and one is on the right, they are ______ to each other.
4. The bulges in the cerebral cortex are called ______; the grooves between them are called ______.
Check your answers on page 95.
The Spinal Cord
The spinal cord is the part of the CNS found within the spinal column; it communicates with the sense organs and muscles below the level of the head. It is a segmented structure, and each segment has on each side both a sensory nerve and a motor nerve, as shown in Figure 4.3. According to the Bell-Magendie law, which was one of the first discoveries about the functions of the nervous system, the entering dorsal roots (axon bundles) carry sensory information and the exiting ventral roots carry motor information. The axons to and from the skin and muscles are the peripheral nervous system.
The cell bodies of the sensory neurons are located in clusters of neurons outside the spinal cord, called the dorsal root ganglia. (Ganglia is the plural of ganglion, a cluster of neurons. In most cases, a neuron cluster outside the central nervous system is called a ganglion, and a cluster inside the CNS is called a nucleus.) Cell bodies of the motor neurons are inside the spinal cord.
Table 4.2 Terms Referring to Parts of the Nervous System
Term | Definition
Lamina | A row or layer of cell bodies separated from other cell bodies by a layer of axons and dendrites
Column | A set of cells perpendicular to the surface of the cortex, with similar properties
Tract | A set of axons within the CNS, also known as a projection. If axons extend from cell bodies in structure A to synapses onto B, we say that the fibers "project" from A onto B.
Nerve | A set of axons in the periphery, either from the CNS to a muscle or gland or from a sensory organ to the CNS
Nucleus | A cluster of neuron cell bodies within the CNS
Ganglion | A cluster of neuron cell bodies, usually outside the CNS (as in the sympathetic nervous system)
Gyrus (pl.: gyri) | A protuberance on the surface of the brain
Sulcus (pl.: sulci) | A fold or groove that separates one gyrus from another
Fissure | A long, deep sulcus
Figure 4.3 Diagram of a cross-section through the spinal cord The dorsal root on each side conveys sensory information to the spinal cord; the ventral root conveys motor commands to the muscles.
In the cross-section through the spinal cord shown in Figures 4.4 and 4.5, the H-shaped gray matter in the center of the cord is densely packed with cell bodies and dendrites. Many neurons of the spinal cord send axons from the gray matter toward the brain or to other parts of the spinal cord through the white matter, which is composed mostly of myelinated axons.
Each segment of the spinal cord sends sensory information to the brain and receives motor commands from the brain. All that information passes through tracts of axons in the spinal cord. If the spinal cord is cut at a given segment, the brain loses sensation from that segment and all segments below it; the brain also loses motor control over all parts of the body served by that segment and the lower ones.
Figure 4.4 Photo of a cross-section through the spinal cord The H-shaped structure in the center is gray matter, composed largely of cell bodies. The surrounding white matter consists of axons. The axons are organized in tracts; some carry information from the brain and higher levels of the spinal cord downward, while others carry information from lower levels upward.
The Autonomic Nervous System
The autonomic nervous system consists of neurons that receive information from and send commands to the heart, intestines, and other organs. It has two parts: the sympathetic and parasympathetic nervous systems (Figure 4.6).
The sympathetic nervous system, a network of nerves that prepares the organs for vigorous activity, consists of two paired chains of ganglia lying just to the left and right of the spinal cord in its central regions (the thoracic and lumbar areas) and connected by axons to the spinal cord. Sympathetic axons extend from the ganglia to the organs and activate them for "fight or flight"—increasing breathing and heart rate and decreasing digestive activity. Because all of the sympathetic ganglia are closely linked, they often act as a single system "in sympathy" with one another, although some parts can be more active than the others. The sweat glands, the adrenal glands, the muscles that constrict blood vessels, and the muscles that erect the hairs of the skin have only sympathetic, not parasympathetic, input.
EXTENSIONS AND APPLICATIONS
Goose Bumps
Erection of the hairs, known as "goose bumps" or "goose flesh," occurs when we are cold. What does it have to do with the fight-or-flight functions associated with the sympathetic nervous system? Part of the answer is that we also get goose bumps when we are frightened. You have heard the expression, "I was so frightened my hairs stood on end." You may also have seen a frightened cat erect its fur. Human body hairs are so short that erecting them accomplishes nothing, but a cat with erect fur looks bigger and potentially frightening. A frightened porcupine erects its quills, which are just modified hairs (Richter & Langworthy, 1933). The behavior that makes the quills so useful, their erection in response to fear, evidently evolved before the quills themselves did.
Figure 4.5 A section of gray matter of the spinal cord (lower left) and white matter surrounding it Cell bodies and dendrites reside entirely in the gray matter. Axons travel from one area of gray matter to another in the white matter.
The parasympathetic nervous system facilitates vegetative, nonemergency responses by the organs. The term para means "beside" or "related to," and parasympathetic activities are related to, and generally the opposite of, sympathetic activities. For example, the sympathetic nervous system increases heart rate; the parasympathetic nervous system decreases it. The parasympathetic nervous system increases digestive activity; the sympathetic nervous system decreases it.
Figure 4.6 The sympathetic nervous system (red lines) and parasympathetic nervous system (blue lines) Note that the adrenal glands and hair erector muscles receive sympathetic input only. (Source: Adapted from Biology: The Unity and Diversity, 5th Edition, by C. Starr and R. Taggart, p. 340. Copyright © 1989 Wadsworth.)
Although the sympathetic and parasympathetic systems act in opposition to one another, both are constantly active to varying degrees, and many stimuli arouse parts of both systems.
The parasympathetic nervous system is also known as the craniosacral system because it consists of the cranial nerves and nerves from the sacral spinal cord (see Figure 4.6). Unlike the ganglia in the sympathetic system, the parasympathetic ganglia are not arranged
in a chain near the spinal cord. Rather, long preganglionic axons extend from the spinal cord to parasympathetic ganglia close to each internal organ; shorter postganglionic fibers then extend from the parasympathetic ganglia into the organs themselves. Because the parasympathetic ganglia are not linked to one another, they act somewhat more independently than the sympathetic ganglia do. Parasympathetic activity decreases
heart rate, increases digestive rate, and in general, promotes energy-conserving, nonemergency functions.
The parasympathetic nervous system's postganglionic axons release the neurotransmitter acetylcholine. Most of the postganglionic synapses of the sympathetic nervous system use norepinephrine, although a few, including those that control the sweat glands, use acetylcholine. Because the two systems use different transmitters, certain drugs may excite or inhibit one system or the other. For example, over-the-counter cold remedies exert most of their effects either by blocking parasympathetic activity or by increasing sympathetic activity. This action is useful because the flow of sinus fluids is a parasympathetic response; thus, drugs that block the parasympathetic system inhibit sinus flow. The common side effects of cold remedies also stem from their sympathetic, antiparasympathetic activities: They inhibit salivation and digestion and increase heart rate.
For additional detail about the autonomic nervous system, visit this website: http://www.ndrf.org/ans.htm
STOP & CHECK
5. Sensory nerves enter which side of the spinal cord, dorsal or ventral?
6. Which functions are controlled by the sympathetic nervous system? Which are controlled by the parasympathetic nervous system?
Check your answers on page 95.
The Hindbrain
The brain itself (as distinct from the spinal cord) consists of three major divisions: the hindbrain, the midbrain, and the forebrain (Figure 4.7 and Table 4.3).
Figure 4.7 Three major divisions of the vertebrate brain In a fish brain, as shown here, the forebrain, midbrain, and hindbrain are clearly visible as separate bulges. In adult mammals, the forebrain grows and surrounds the entire midbrain and part of the hindbrain.
Brain investigators unfortunately use a variety of terms synonymously. For example, some people prefer words with Greek roots: rhombencephalon (hindbrain), mesencephalon (midbrain), and prosencephalon (forebrain). You may encounter those terms in other reading.
The hindbrain, the posterior part of the brain, consists of the medulla, the pons, and the cerebellum. The medulla and pons, the midbrain, and certain central structures of the forebrain constitute the brainstem (Figure 4.8).
The medulla, or medulla oblongata, is just above the spinal cord and could be regarded as an enlarged, elaborated extension of the spinal cord, although it is located in the skull. The medulla controls a number of vital reflexes—including breathing, heart rate, vomiting, salivation, coughing, and sneezing—through the cranial nerves, which control sensations from the head, muscle movements in the head, and much of the parasympathetic output to the organs. Some of the cranial nerves include both sensory and motor components; others have just one or the other. Damage to the medulla is frequently fatal, and large doses of opiates are life-threatening because they suppress activity of the medulla.
Table 4.3 Major Divisions of the Vertebrate Brain
Area | Also Known as | Major Structures
Forebrain | Prosencephalon ("forward-brain")
  Diencephalon ("between-brain") | Thalamus, hypothalamus
  Telencephalon ("end-brain") | Cerebral cortex, hippocampus, basal ganglia
Midbrain | Mesencephalon ("middle-brain") | Tectum, tegmentum, superior colliculus, inferior colliculus, substantia nigra
Hindbrain | Rhombencephalon (literally, "parallelogram-brain") | Medulla, pons, cerebellum
  Metencephalon ("afterbrain") | Pons, cerebellum
  Myelencephalon ("marrow-brain") | Medulla
Figure 4.8 The human brainstem This composite structure extends from the top of the spinal cord into the center of the forebrain. The pons, pineal gland, and colliculi are ordinarily surrounded by the cerebral cortex.
Just as the lower parts of the body are connected to the spinal cord via sensory and motor nerves, the receptors and muscles of the head and the internal organs are connected to the brain by 12 pairs of cranial
nerves (one of each pair on the right of the brain and one on the left), as shown in Table 4.4. Each cranial nerve originates in a nucleus (cluster of neurons) that integrates the sensory information, regulates the motor output, or both. The cranial nerve nuclei for nerves V through XII are in the medulla and pons of the hindbrain. Those for cranial nerves I through IV are in the midbrain and forebrain (Figure 4.9).
The pons lies anterior and ventral to the medulla; like the medulla it contains nuclei for several cranial nerves. The term pons is Latin for "bridge"; the name reflects the fact that many axons in the pons cross from one side of the brain to the other. This is in fact the location where axons from each half of the brain cross to the opposite side of the spinal cord so that the left hemisphere controls the muscles of the right side of the body and the right hemisphere controls the left side.
The medulla and pons also contain the reticular formation and the raphe system. The reticular formation has descending and ascending portions. The descending portion is one of several brain areas that control the motor areas of the spinal cord. The ascending portion sends output to much of the cerebral cortex, selectively increasing arousal and attention in one area or another (Guillery, Feig, & Lozsádi, 1998). The raphe system also sends axons to much of the forebrain, modifying the brain's readiness to respond to stimuli (Mesulam, 1995).
The cerebellum is a large hindbrain structure with a great many deep folds. It has long been known for its contributions to the control of movement (see Chapter 8).
Table 4.4 The Cranial Nerves
Number and Name | Major Functions
I. Olfactory | Smell
II. Optic | Vision
III. Oculomotor | Control of eye movements, pupil constriction
IV. Trochlear | Control of eye movements
V. Trigeminal | Skin sensations from most of the face; control of jaw muscles for chewing and swallowing
VI. Abducens | Control of eye movements
VII. Facial | Taste from the anterior two-thirds of the tongue; control of facial expressions, crying, salivation, and dilation of the head's blood vessels
VIII. Statoacoustic | Hearing, equilibrium
IX. Glossopharyngeal | Taste and other sensations from throat and posterior third of the tongue; control of swallowing, salivation, throat movements during speech
X. Vagus | Sensations from neck and thorax; control of throat, esophagus, and larynx; parasympathetic nerves to stomach, intestines, and other organs
XI. Accessory | Control of neck and shoulder movements
XII. Hypoglossal | Control of muscles of the tongue
Note: Cranial nerves III, IV, and VI are coded in red in the text to highlight their similarity: control of eye movements. Cranial nerves VII, IX, and XII are coded in green to highlight their similarity: taste and control of tongue and throat movements. Cranial nerve VII has other important functions as well. Nerve X (not highlighted) also contributes to throat movements, although it is primarily known for other functions.
Figure 4.9 Cranial nerves II through XII Cranial nerve I, the olfactory nerve, connects directly to the olfactory bulbs of the forebrain. (Source: Based on Braus, 1960)
Many older textbooks describe the cerebellum as important for "balance and coordination." True, people with cerebellar damage are clumsy and lose their balance, but the functions of the cerebellum extend far beyond balance and coordination. People with damage to the cerebellum have trouble shifting their attention back and forth between auditory and visual stimuli (Courchesne et al., 1994). They have much difficulty with timing, including sensory timing. For example, they are poor at judging whether one rhythm is faster than another.
The Midbrain
As the name implies, the midbrain is in the middle of the brain, although in adult mammals it is dwarfed and surrounded by the forebrain. In birds, reptiles, amphibians, and fish, the midbrain is a larger, more prominent structure. The roof of the midbrain is called the tectum. (Tectum is the Latin word for "roof"; the same root shows up in the geological term plate tectonics.) The two swellings on each side of the tectum are the superior colliculus and the inferior colliculus (see Figures 4.8 and 4.10); both are part of important routes for sensory information. Under the tectum lies the tegmentum, the intermediate level of the midbrain. (In Latin, tegmentum
means a “covering,” such as a rug on the floor. The tegmentum covers several other midbrain structures, although it is covered by the tectum.) The tegmentum includes the nuclei for the third and fourth cranial nerves, parts of the reticular formation, and extensions of the pathways between the forebrain and the spinal cord or hindbrain. Another midbrain structure is the substantia nigra, which gives rise to the dopamine-containing pathway that deteriorates in Parkinson’s disease (see Chapter 8).
The Forebrain
The forebrain is the most anterior and most prominent part of the mammalian brain. It consists of two cerebral hemispheres, one on the left side and one on the right (Figure 4.11). Each hemisphere is organized to receive sensory information, mostly from the contralateral (opposite) side of the body, and to control muscles, mostly on the contralateral side, by way of axons to the spinal cord and the cranial nerve nuclei. The outer portion is the cerebral cortex. (Cerebrum is a Latin word meaning "brain"; cortex is a Latin word meaning "bark" or "shell.") Under the cerebral cortex are other structures, including the thalamus, which is the main source of input to the cerebral cortex. A set of structures known as the basal ganglia plays a major
Cingulate gyrus Cerebral cortex
Parietal lobe
Frontal lobe Thalamus Corpus callosum
Tissue dividing lateral ventricles
Occipital lobe
Nucleus accumbens
Superior and inferior colliculi
Hypothalamus
Midbrain
Pituitary gland Pons
Cerebellum
Medulla
Spinal cord
Central canal of spinal cord
Figure 4.10 A sagittal section through the human brain (Source: After Nieuwenhuys, Voogd, & vanHuijzen, 1988)
Anterior Frontal lobe of cerebral cortex Frontal lobe
Corpus callosum
Precentral gyrus
Lateral ventricles (anterior parts)
Central sulcus
Basal ganglia
Postcentral gyrus Parietal lobe
Thalamus
Dr. Dana Copeland
Hippocampus Lateral ventricles (posterior parts)
Occipital lobe
Posterior
Figure 4.11 Dorsal view of the brain surface and a horizontal section through the brain
role in certain aspects of movement. A number of other interlinked structures, known as the limbic system, form a border (or limbus, the Latin word for “border”) around the brainstem. These structures are particularly
These structures are particularly important for motivations and emotions, such as eating, drinking, sexual activity, anxiety, and aggression. The structures of the limbic system are the olfactory bulb, hypothalamus, hippocampus, amygdala, and cingulate gyrus of the cerebral cortex.
Figure 4.12 The limbic system is a set of subcortical structures that form a border (or limbus) around the brainstem
Figure 4.13 Two views of the human brain Top: A coronal section. Note how the corpus callosum and anterior commissure provide communication between the left and right hemispheres. Bottom: The ventral surface. The optic nerves (cut here) extend to the eyes. (Photos courtesy of Dr. Dana Copeland)
Figure 4.12 shows the positions of these structures in three-dimensional perspective. Figures 4.10 and 4.13 show sagittal (from the side) and coronal (from the front) sections through the human brain. Figure 4.13 also includes a view of the ventral surface of the brain. In describing the forebrain, we begin with the subcortical areas; the next module focuses on the cerebral
cortex. In later chapters, we return to each of these areas as they become relevant.
Thalamus The thalamus and hypothalamus together form the diencephalon, a section distinct from the rest of the forebrain, which is known as the telencephalon.
Figure 4.14 Routes of information from the thalamus to the cerebral cortex. Each thalamic nucleus projects its axons to a different location in the cortex. Labels include the primary motor cortex, primary somatosensory cortex, frontal cortex, occipital cortex, thalamus, optic tract, the dorsomedial, ventral lateral, ventral posterior, and pulvinar nuclei, and the lateral geniculate body. (Source: After Nieuwenhuys, Voogd, & vanHuijzen, 1988)
The thalamus is a structure in the center of the forebrain. The term is derived from a Greek word meaning "anteroom," "inner chamber," or "bridal bed." It resembles two avocados joined side by side, one in the left hemisphere and one in the right. Most sensory information goes first to the thalamus, which then processes it and sends the output to the cerebral cortex. The one clear exception to this rule is olfactory information, which progresses from the olfactory receptors to the olfactory bulbs and from the bulbs directly to the cerebral cortex without passing through the thalamus. Many nuclei of the thalamus receive their primary input from one of the sensory systems, such as vision, and then transmit the information to a single area of the cerebral cortex, as in Figure 4.14. The cerebral cortex then sends information back to the thalamus, prolonging and magnifying certain kinds of input at the expense of others, apparently serving to focus attention on particular stimuli (Komura et al., 2001).
Hypothalamus The hypothalamus is a small area near the base of the brain just ventral to the thalamus (see Figures 4.10 and 4.12). It has widespread connections with the rest of the forebrain and the midbrain. The hypothalamus
contains a number of distinct nuclei, which we examine in Chapters 10 and 11. Partly through nerves and partly through hypothalamic hormones, the hypothalamus conveys messages to the pituitary gland, altering its release of hormones. Damage to any hypothalamic nucleus leads to abnormalities in motivated behaviors, such as feeding, drinking, temperature regulation, sexual behavior, fighting, or activity level. Because of these important behavioral effects, the rather small hypothalamus attracts a great deal of research attention.
Pituitary Gland The pituitary gland is an endocrine (hormone-producing) gland attached to the base of the hypothalamus by a stalk that contains neurons, blood vessels, and connective tissue (see Figure 4.10). In response to messages from the hypothalamus, the pituitary synthesizes and releases hormones into the bloodstream, which carries them to other organs.
Basal Ganglia The basal ganglia, a group of subcortical structures lateral to the thalamus, include three major structures: the caudate nucleus, the putamen, and the globus pallidus (Figure 4.15). Some authorities include several other structures as well. The basal ganglia have been conserved through evolution, and the basic organization is about the same in mammals as in amphibians (Marin, Smeets, & González, 1998). The basal ganglia have multiple subdivisions, each of which exchanges information with a different part of the cerebral cortex. The connections are most abundant with the frontal areas of the cortex, which are responsible for planning sequences of behavior and for certain aspects of memory and emotional expression (Graybiel, Aosaki, Flaherty, & Kimura, 1994). In conditions such as Parkinson's disease and Huntington's disease, in which the basal ganglia deteriorate, the most prominent symptom is impaired movement, but people also show depression, deficits of memory and reasoning, and attentional disorders.
Figure 4.15 The basal ganglia. The thalamus is in the center, the basal ganglia (caudate nucleus, globus pallidus medially, and putamen laterally) are lateral to it, and the cerebral cortex is on the outside; the amygdala is also shown. (Source: After Nieuwenhuys, Voogd, & vanHuijzen, 1988)
Basal Forebrain Several structures lie on the ventral surface of the forebrain, including the nucleus basalis, which receives input from the hypothalamus and basal ganglia and sends axons that release acetylcholine to widespread areas in the cerebral cortex (Figure 4.16). We might regard the nucleus basalis as an intermediary between the emotional arousal of the hypothalamus and the information processing of the cerebral cortex. The nucleus basalis is a key part of the brain's system for arousal, wakefulness, and attention, as we consider in Chapter 9. Patients with Parkinson's disease and Alzheimer's disease have impairments of attention and intellect because of inactivity or deterioration of their nucleus basalis.
Figure 4.16 The basal forebrain. The nucleus basalis and other structures in this area send axons throughout the cortex, increasing its arousal and wakefulness through release of the neurotransmitter acetylcholine. (Source: Adapted from "Cholinergic Systems in Mammalian Brain and Spinal Cord," by N. J. Woolf, Progress in Neurobiology, 37, pp. 475-524, 1991)
Hippocampus
The hippocampus (from a Latin word meaning "sea horse," a shape suggested by the hippocampus) is a large structure between the thalamus and the cerebral cortex, mostly toward the posterior of the forebrain, as shown in Figure 4.12. We consider the hippocampus in more detail in Chapter 12; the gist of that discussion is that the hippocampus is critical for storing certain kinds of memories but not all. A debate continues about how best to describe the class of memories
that depend on the hippocampus. People with hippocampal damage have trouble storing new memories, but they do not lose the memories they had from before the damage occurred.
STOP & CHECK
7. Of the following, which are in the hindbrain, which in the midbrain, and which in the forebrain: basal ganglia, cerebellum, hippocampus, hypothalamus, medulla, pituitary gland, pons, substantia nigra, superior and inferior colliculi, tectum, tegmentum, thalamus? 8. Which area is the main source of input to the cerebral cortex? Check your answers on page 95.
The Ventricles The nervous system begins its development as a tube surrounding a fluid canal. The canal persists into adulthood as the central canal, a fluid-filled channel in the center of the spinal cord, and as the ventricles, four fluid-filled cavities within the brain. Each hemisphere
contains one of the two large lateral ventricles (Figure 4.17). Toward the posterior, they connect to the third ventricle, which connects to the fourth ventricle in the medulla. The ventricles and the central canal of the spinal cord contain cerebrospinal fluid (CSF), a clear fluid similar to blood plasma. CSF is formed by groups of cells, the choroid plexus, inside the four ventricles. It flows from the lateral ventricles to the third and fourth ventricles. From the fourth ventricle, some CSF flows into the central canal of the spinal cord, but more goes through an opening into the narrow spaces between the brain and the thin meninges, membranes that surround the brain and spinal cord. (Meningitis is an inflammation of the meninges.) From one of those spaces, the subarachnoid space, CSF is gradually reabsorbed into the blood vessels of the brain. Cerebrospinal fluid cushions the brain against mechanical shock when the head moves. It also provides buoyancy; just as a person weighs less in water than on land, cerebrospinal fluid helps support the weight of the brain. It also provides a reservoir of hormones and nutrition for the brain and spinal cord. Sometimes the flow of CSF is obstructed, and it accumulates within the ventricles or in the subarachnoid space, thus increasing the pressure on the brain. When this occurs in infants, the skull bones may spread, causing an overgrown head. This condition, known as hydrocephalus (HI-dro-SEFF-ah-luss), is usually associated with mental retardation.
Figure 4.17 The cerebral ventricles. (a) Diagram showing positions of the four ventricles (lateral ventricles, third ventricle, cerebral aqueduct, fourth ventricle, and central canal of the spinal cord, with the thalamus for reference). (b) Photo of a human brain, viewed from above, with a horizontal cut through one hemisphere to show the position of the lateral ventricles. Note that the two parts of this figure are seen from different angles. (Photo courtesy of Dr. Dana Copeland)
Module 4.1 In Closing: Learning Neuroanatomy The brain is a highly complex structure. This module has introduced a great many terms and facts; do not be discouraged if you have trouble remembering them. You didn't learn world geography all at one time either. It will help to return to this section to review the anatomy of certain structures as you encounter them again in later chapters. Gradually, the material will become more familiar. It helps to see the brain from different angles and perspectives. Check this fantastic website, which includes detailed photos of both normal and abnormal human brains: http://www.med.harvard.edu/AANLIB/home.html
You might also appreciate this site, which compares the brains of different species. (Have you ever wondered what a polar bear's brain looks like? Or a dolphin's? Or a weasel's?) http://www.brainmuseum.org/Sections/index.html
Summary
1. The main divisions of the vertebrate nervous system are the central nervous system and the peripheral nervous system. The central nervous system consists of the spinal cord, the hindbrain, the midbrain, and the forebrain. (p. 82)
2. Each segment of the spinal cord has a sensory nerve on each side and a motor nerve on each side. Several spinal pathways convey information to the brain. (p. 84)
3. The sympathetic nervous system (one of the two divisions of the autonomic nervous system) activates the body's internal organs for vigorous activities. The parasympathetic system (the other division) promotes digestion and other nonemergency processes. (p. 85)
4. The hindbrain consists of the medulla, pons, and cerebellum. The medulla and pons control breathing, heart rate, and other vital functions through the cranial nerves. The cerebellum contributes to movement. (p. 87)
5. The subcortical areas of the forebrain include the thalamus, hypothalamus, pituitary gland, basal ganglia, and hippocampus. (p. 89)
6. The cerebral cortex receives its sensory information (except for olfaction) from the thalamus. (p. 92)
Answers to STOP & CHECK Questions
1. Dorsal means toward the back, away from the stomach side. Its opposite is ventral. (p. 84)
2. Lateral; medial (p. 84)
3. Ipsilateral; contralateral (p. 84)
4. Gyri; sulci (p. 84)
5. Dorsal (p. 87)
6. The sympathetic nervous system prepares the organs for vigorous fight-or-flight activity. The parasympathetic system increases vegetative responses such as digestion. (p. 87)
7. Hindbrain: cerebellum, medulla, and pons. Midbrain: substantia nigra, superior and inferior colliculi, tectum, and tegmentum. Forebrain: basal ganglia, hippocampus, hypothalamus, pituitary, and thalamus. (p. 94)
8. Thalamus (p. 94)
Thought Question The drug phenylephrine is sometimes prescribed for people suffering from a sudden loss of blood pressure or other medical disorders. It acts by stimulating norepinephrine synapses, including those that constrict blood vessels. One common side effect of this drug is goose bumps. Explain why. What other side effects might be likely?
Module 4.2
The Cerebral Cortex
The most prominent part of the mammalian brain is the cerebral cortex, consisting of the cellular layers on the outer surface of the cerebral hemispheres. The cells of the cerebral cortex are gray matter; their axons extending inward are white matter (see Figure 4.13). Neurons in each hemisphere communicate with neurons in the corresponding part of the other hemisphere through two bundles of axons, the corpus callosum (see Figures 4.10, 4.11, and 4.13) and the smaller anterior commissure (see Figure 4.13). Several other commissures (pathways across the midline) link subcortical structures.
If we compare mammalian species, we see differences in both the size of the cerebral cortex and the degree of folding (Figure 4.18). The cerebral cortex constitutes a higher percentage of the brain in primates—monkeys, apes, and humans—than in other species of comparable size. Figure 4.19 shows the size of the cerebral cortex in comparison to the rest of the brain for insectivores and two suborders of primates (Barton & Harvey, 2000). Figure 4.20 compares species in another way (D. A. Clark, Mitra, & Wang, 2001). The investigators arranged the insectivores and primates from left to right in terms of what percentage of their brain was devoted to the forebrain (telencephalon), which includes the cerebral cortex. They also inserted tree shrews, a species often considered intermediate or transitional. Note that as the proportion devoted to the forebrain increases, the relative sizes of the midbrain and medulla decrease. Curiously, the cerebellum occupies a remarkably constant percentage—approximately 13% of any mammalian brain (D. A. Clark et al., 2001). That is, the cerebellum maintains an almost constant proportion to the whole brain. (Why? No one knows.)
Figure 4.18 Comparison of mammalian brains. The human brain is the largest of those shown, although whales, dolphins, and elephants have still larger brains. All mammals have the same brain subareas in the same locations. (Photos courtesy of Walley Welker, UW–Madison Comparative Mammalian Brain Collection)
Organization of the Cerebral Cortex
The microscopic structure of the cells of the cerebral cortex varies substantially from one cortical area to another. The differences in appearance relate to differences in function. Much research has been directed toward understanding the relationship between structure and function. In humans and most other mammals, the cerebral cortex contains up to six distinct laminae, layers of cell bodies that are parallel to the surface of the cortex and separated from each other by layers of fibers (Figure 4.21). The laminae vary in thickness and prominence from one part of the cortex to another, and a given lamina may be absent from certain areas. Lamina V, which sends long axons to the spinal cord and other distant areas, is thickest in the motor cortex, which has the greatest control of the muscles. Lamina IV, which receives axons from the various sensory nuclei of the thalamus, is prominent in all the primary sensory areas
(visual, auditory, and somatosensory) but absent from the motor cortex. The cells of the cortex are also organized into columns of cells perpendicular to the laminae. Figure 4.22
illustrates the idea of columns, although in nature they are not so straight. The cells within a given column have similar properties to one another. For example, if one cell in a column responds to touch on the palm of the left hand, then the other cells in that column do too. If one cell responds to a horizontal pattern of light at a particular location in the retina, then the other cells in the column respond to the same pattern in nearly the same location.
Figure 4.21 The six laminae of the human cerebral cortex (Source: From S. W. Ranson and S. L. Clark, The Anatomy of the Nervous System, 1959, Copyright © 1959 W. B. Saunders Co. Reprinted by permission.) The laminae and their composition: I, molecular layer (mostly dendrites and long axons); II, external granular layer (small pyramidal cells); III, pyramidal cell layer (pyramidal cells); IV, internal granular layer (small cells; the main site for incoming sensory information); V, inner pyramidal layer (large pyramidal cells; the main source of motor output); VIa and VIb, multiform layer (spindle cells).
Figure 4.22 Columns in the cerebral cortex. Each column extends through several laminae, from the surface of the cortex to the white matter. Neurons within a given column have similar properties. For example, in the somatosensory cortex, all the neurons within a given column respond to stimulation of the same area of skin.
We now turn to some of the specific parts of the cortex. Researchers distinguish 50 or more areas of the cerebral cortex based on differences in the thickness of the six laminae and on the structure of cells and fibers within each lamina. For convenience, we group these areas into four lobes named for the skull bones that lie over them: occipital, parietal, temporal, and frontal.
The Occipital Lobe The occipital lobe, located at the posterior (caudal) end of the cortex (Figure 4.23), is the main target for axons from the thalamic nuclei that receive visual input. The posterior pole of the occipital lobe is known as the primary visual cortex, or striate cortex, because of its striped appearance in cross-section. Destruction of any part of the striate cortex causes cortical blindness in the related part of the visual field. For example, extensive damage to the striate cortex of the right hemisphere causes blindness in the left visual field (the left side of the world from the viewer’s perspective). A person with cortical blindness has normal eyes, normal pupillary reflexes, and some eye movements but no pattern perception and not even visual imagery. People who suffer severe damage to the eyes become blind, but if they have an intact occipital cortex and previous visual experience, they can still imagine visual scenes and can still have visual dreams (Sabo & Kirtley, 1982).
The Parietal Lobe
The parietal lobe lies between the occipital lobe and the central sulcus, which is one of the deepest grooves in the surface of the cortex (see Figure 4.23). The area just posterior to the central sulcus, the postcentral gyrus, or the primary somatosensory cortex, is the primary target for touch sensations and information from muscle-stretch receptors and joint receptors. Brain surgeons sometimes use only local anesthesia (anesthetizing the scalp but leaving the brain awake). If during this process they lightly stimulate the postcentral gyrus, people report “tingling” sensations on the opposite side of the body. The postcentral gyrus includes four bands of cells that run parallel to the central sulcus. Separate areas along each band receive simultaneous information from different parts of the body, as shown in Figure 4.24a (Nicolelis et al., 1998). Two of the bands receive mostly light-touch information, one receives deep-pressure information, and one receives a combination of both (Kaas, Nelson, Sur, Lin, & Merzenich, 1979). In effect, the postcentral gyrus represents the body four times. Information about touch and body location is important not only for its own sake but also for interpreting visual and auditory information. For example, if you see something in the upper left portion of the visual field, your brain needs to know which direction your eyes are turned, the position of your head, and the tilt of your body before it can determine the location of the object that you see and therefore the direction you should go if you want to approach or avoid it. The parietal lobe monitors all the information about eye, head, and body positions and passes it on to brain areas that control movement (Gross & Graziano, 1995). It is essential not only for processing spatial information but also numerical information (Hubbard, Piazza, Pinel, & Dehaene, 2005). That overlap makes sense when you consider all the ways in which number relates to space—from initially learning to count with our fingers, to geometry, and to all kinds of graphs.
The Temporal Lobe The temporal lobe is the lateral portion of each hemisphere, near the temples (see Figure 4.23). It is the primary cortical target for auditory information. In humans, the temporal lobe—in most cases, the left temporal lobe—is essential for understanding spoken language. The temporal lobe also contributes to some of the more complex aspects of vision, including perception of movement and recognition of faces. A tumor in the temporal lobe may give rise to elaborate auditory or visual hallucinations, whereas a tumor in the oc-
cipital lobe ordinarily evokes only simple sensations, such as flashes of light.
Figure 4.23 Areas of the human cerebral cortex. (a) The four lobes: occipital, parietal, temporal, and frontal, with the central sulcus, precentral gyrus (primary motor cortex), postcentral gyrus (primary somatosensory cortex), prefrontal cortex, and olfactory bulb; the labels note the frontal lobe (planning of movements, recent memory, some aspects of emotions), parietal lobe (body sensations), occipital lobe (vision), and temporal lobe (hearing, advanced visual processing). (b) The primary sensory cortex for vision, hearing, and body sensations; the primary motor cortex; and the olfactory bulb, a noncortical area responsible for the sense of smell. (Source for part b: T. W. Deacon, 1990)
Figure 4.24 Approximate representation of sensory and motor information in the cortex. (a) Each location in the somatosensory cortex (postcentral gyrus) represents sensation from a different body part. (b) Each location in the motor cortex (precentral gyrus) regulates movement of a different body part. (Source: Adapted from The Cerebral Cortex of Man by W. Penfield and T. Rasmussen, Macmillan Library Reference. Reprinted by permission of The Gale Group.)
In fact, when psychiatric patients report hallucinations, brain scans detect extensive activity in the temporal lobes (Dierks et al., 1999). The temporal lobes also play a part in emotional and motivational behaviors. Temporal lobe damage can lead to a set of behaviors known as the Klüver-Bucy syndrome (named for the investigators who first described it). Previously wild and aggressive monkeys fail to display normal fears and anxieties after temporal lobe damage (Klüver & Bucy, 1939). They put almost anything they find into their mouths and attempt to pick up snakes and lighted matches (which intact monkeys consistently avoid). Interpreting this behavior is difficult. For example, a monkey might handle a snake because it is no longer afraid (an emotional change) or because it no longer recognizes what a snake is (a cognitive change). Such issues will be a major topic in Chapter 12.
The Frontal Lobe
The frontal lobe, which contains the primary motor cortex and the prefrontal cortex, extends from the central sulcus to the anterior limit of the brain (see Figure 4.23). The posterior portion of the frontal lobe just anterior to the central sulcus, the precentral gyrus, is specialized for the control of fine movements, such as moving one finger at a time. Separate areas are responsible for different parts of the body, mostly on the contralateral (opposite) side but also with slight control of the ipsilateral (same) side. Figure 4.24b shows the traditional map of the precentral gyrus, also known as the primary motor cortex. However, the map is only an approximation; for example, the arm area does indeed control arm movements, but within that area, there is no one-to-one relationship between brain location and specific muscles (Graziano, Taylor, & Moore, 2002).
The most anterior portion of the frontal lobe is the prefrontal cortex. In general, the larger a species' cerebral cortex, the higher the percentage of it devoted to the prefrontal cortex (Figure 4.25). For example, it forms a larger portion of the cortex in humans and all the great apes than in other species (Semendeferi, Lu, Schenker, & Damasio, 2002). It is not the primary target for any single sensory system, but it receives information from all of them, in different parts of the prefrontal cortex. The dendrites in the prefrontal cortex have up to 16 times as many dendritic spines (see Figure 2.7) as neurons in other cortical areas (Elston, 2000). As a result, the prefrontal cortex can integrate an enormous amount of information.
EXTENSIONS AND APPLICATIONS
The Rise and Fall of Prefrontal Lobotomies The prefrontal cortex was the target of the infamous procedure known as prefrontal lobotomy—surgical disconnection of the prefrontal cortex from the rest of the brain. The surgery consisted of damaging the prefrontal cortex or cutting its connections to the rest of the cortex. The lobotomy trend began with a report that damaging the prefrontal cortex of laboratory pri-
mates had made them tamer without noticeably impairing their sensory or motor capacities.
Figure 4.25 Species differences in prefrontal cortex. Note that the prefrontal cortex (blue area) constitutes a larger proportion of the human brain than of the brains of the other species shown (squirrel monkey, cat, dog, rhesus monkey, and chimp). (Source: After The Prefrontal Cortex by J. M. Fuster, 1989, Raven Press. Reprinted by permission.)
Photo: A horizontal section of the brain of a person who had a prefrontal lobotomy many years earlier. The two holes in the frontal cortex (the gaps left by the lobotomy) are the visible results of the operation. (Photo courtesy of Dr. Dana Copeland)
A few physicians reasoned (loosely) that the same operation might help people who suffered from severe, untreatable psychiatric disorders. In the late 1940s and early 1950s, about 40,000 prefrontal lobotomies were performed in the United States (Shutts, 1982), many of them by Walter Freeman, a medical doctor untrained in surgery. His techniques were crude, even by the standards of the time, using such instruments as an electric drill and a metal pick. He performed many operations in his office or other nonhospital sites. (Freeman carried his equipment in his car, which he called his "lobotomobile.") Freeman and others became increasingly casual in deciding who should have a lobotomy. At first, they limited the technique to severe, apparently hopeless cases of schizophrenia. Lobotomy did calm some individuals, but the effects were usually disappointing. We now know that the frontal lobes of people with schizophrenia are relatively inactive; lobotomy was therefore damaging a structure that was already impaired. Later, Freeman lobotomized people with less serious disorders, including some whom we would consider normal by today's standards. After drug therapies became available for schizophrenia and depression, physicians quickly and almost completely abandoned lobotomies, performing only a few of them after the mid-1950s (Lesse, 1984; Tippin & Henn, 1982). Among the common consequences of prefrontal lobotomy were apathy, a loss of the ability to plan and take initiative, memory disorders, distractibility, and a loss of emotional expressions (Stuss & Benson, 1984). People with prefrontal damage lose their social inhibitions, ignoring the rules of polite, civilized conduct. They often act impulsively because they fail to calculate adequately the probable outcomes of their behaviors.
Modern View of the Prefrontal Cortex
Lobotomies added rather little to our understanding of the prefrontal cortex. Later researchers studying people and monkeys with brain damage found that the prefrontal cortex is important for working memory, the ability to remember recent stimuli and events, such as where you parked the car today or what you were talking about before being interrupted (Goldman-Rakic, 1988). The prefrontal cortex is especially important for the delayed-response task, in which a stimulus appears briefly, and after some delay, the individual must respond to the remembered stimulus. The prefrontal cortex is much less important for remembering unchanging, permanent facts. Neuroscientists have offered several other hypotheses about the function of the prefrontal cortex. One is that it is essential when we have to follow two or more rules at the same time in the same situation (Ramnani & Owen, 2004). Another is that it controls behaviors that depend on the context (E. Miller, 2000). For example, if the phone rings, do you answer it? It depends: In your own home, yes, but at someone else's home, probably not. If you saw a good friend from a distance, would you shout out a greeting? Again, it depends: You would in a public park but not in a library. People with prefrontal cortex damage often fail to adjust to their context, so they behave inappropriately or impulsively.
STOP & CHECK
1. If several neurons of the visual cortex all respond best when the retina is exposed to horizontal lines of light, then those neurons are probably located in the same ______. 2. Which lobe of the cerebral cortex includes the primary auditory cortex? The primary somatosensory cortex? The primary visual cortex? The primary motor cortex? 3. What are the functions of the prefrontal cortex? Check your answers on page 104.
How Do the Parts Work Together? We have just considered a variety of brain areas, each with its own function. How do they merge their effects to produce integrated behavior and the experience of a
single self? In particular, consider the sensory areas of the cerebral cortex. The primary visual area, auditory area, and somatosensory area are in different locations, hardly even connected with one another. When you hold your radio or iPod, how does your brain know that the object you see is also what you feel and what you hear? Consider other examples of what we need to explain:
• When you hear a ventriloquist's voice while you watch the dummy's mouth move, the dummy appears to be talking. Even infants look at someone whose mouth is moving when they hear speech; somehow they know to attribute sound to moving lips instead of stationary ones.
• If you watch a film in which the picture is slightly out of synchrony with the sound, or a foreign-language film that was badly dubbed, you know that the sound does not match the picture.
• If you see a light flash once while you simultaneously hear two beeps, you will sometimes think you saw the light flash twice, coincident with the beeps. If the tone is soft, it is also possible to experience the opposite: The tone beeps twice during one flash of light, and you think you heard only one beep. If you saw three flashes of light, you might think you heard three beeps (Andersen, Tiippana, & Sams, 2004). You can experience an example of this phenomenon with the Online Try It Yourself activity "Illustration of Binding."
• Here is another great demonstration to try (I. H. Robertson, 2005). Position yourself parallel to a large mirror, as in Figure 4.26, so that you see your right hand and its reflection in the mirror. Keep your left hand out of sight. Now repeatedly clench and unclench both hands in unison. You will feel your left hand clenching and unclenching at the same time you see the hand in the mirror doing the same thing. After 2 or 3 minutes, you may start to feel that the hand in the mirror is your own left hand. Some people even feel that they have three hands—the right hand, the real left hand, and the apparent left hand in the mirror.
The question of how the visual, auditory, and other areas of your brain produce a perception of a single object is known as the binding problem, or large-scale integration problem (Varela, Lachaus, Rodriguez, & Martinerie, 2001). In an earlier era, researchers thought that various kinds of sensory information converged onto what they called the association areas of the cortex (Figure 4.27). The guess was that those areas "associate" vision with hearing, hearing with touch, or current sensations with memories of previous experiences.
Figure 4.26 An illusion to demonstrate binding Clench and unclench both hands while looking at your right hand and its reflection in the mirror. Keep your left hand out of sight. After a couple of minutes, you may start to experience the hand in the mirror as being your own left hand.
However, later research found that the "association areas" perform advanced processing on a particular sensory system, such as vision or hearing, and do not combine one sense with another. Discarding the idea that various senses converge in the association areas called attention to the binding problem. If they don't converge, then how do we know that something we see is also what we hear or feel? One hypothesis is that the binding of a perception requires precisely simultaneous activity in various brain areas (Eckhorn et al., 1988; Gray, König, Engel, & Singer, 1989). When people see a vague image and recognize it as a face, neurons in several areas of their visual cortex produce rapid activity known as gamma waves, ranging in frequency at various times from 30 to 80 action potentials per second (Rodriguez et al., 1999). The gamma waves are synchronized to the millisecond in various brain areas. When people look at the same image but fail to see a face, the synchronized waves do not emerge. Many but not all other studies have confirmed this relationship between recognizing or binding a visual pattern and developing synchronized activity in separate brain areas (Roelfsema, Engel, König, & Singer, 1997; Roelfsema, Lamme, & Spekreijse, 2004). According to an alternative hypothesis (which does not contradict the first one), the key to binding a perception is to locate it in space. For example, if the location of something you see matches the location of something you hear, then you identify them as being the same thing.
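The synchrony hypothesis lends itself to a simple worked example. The sketch below is only an illustration, not the analysis used in the studies cited above: two simulated signals share a 40 Hz rhythm, and a phase-locking value near 1 indicates that their gamma-band activity stays in step. The sampling rate, noise level, and choice of 40 Hz are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                  # sampling rate (Hz), hypothetical
t = np.arange(0, 2, 1 / fs)                # 2 s of simulated data
rng = np.random.default_rng(0)

shared = np.sin(2 * np.pi * 40 * t)        # a 40 Hz rhythm common to both "brain areas"
area1 = shared + 0.5 * rng.standard_normal(t.size)
area2 = shared + 0.5 * rng.standard_normal(t.size)

# Band-pass both signals to the gamma range (about 30-80 Hz), then extract
# each signal's instantaneous phase with the Hilbert transform.
b, a = butter(4, [30, 80], btype="bandpass", fs=fs)
phase1 = np.angle(hilbert(filtfilt(b, a, area1)))
phase2 = np.angle(hilbert(filtfilt(b, a, area2)))

# Phase-locking value: 1.0 means the two areas stay perfectly in step, 0 means no relationship.
plv = np.abs(np.mean(np.exp(1j * (phase1 - phase2))))
print(f"gamma-band phase locking = {plv:.2f}")
```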
Figure 4.27 An old, somewhat misleading view of the cortex. Note the designation "association centre" in this illustration of the cortex from an old introductory psychology textbook (Hunter, 1923); its labels include frontal, parietal, and occipito-temporal "association centres" alongside the visual, auditory, somaesthetic, and olfactory areas. Today's researchers are more likely to regard those areas as "additional sensory areas."
People with damage to the parietal cortex have trouble locating objects in space—that is, they are not sure where anything is—and they often fail to bind objects. For example, they have great trouble finding one red X among a group of green Xs and red Os (L. C. Robertson, 2003). Also, if they see a display such as a red triangle and a green square, they could report seeing a green triangle and a red square (L. Robertson, Treisman, Friedman-Hill, & Grabowecky, 1997; Treisman, 1999; R. Ward, Danziger, Owen, & Rafal, 2002; Wheeler & Treisman, 2002). Even people with intact brains sometimes make mistakes of this kind if the displays are flashed very briefly or presented in the periphery of vision or presented during a distraction (Holcombe & Cavanagh, 2001; Lehky, 2000). You can experience this failure of binding with the Online Try It Yourself activity "Failure of Binding."
STOP & CHECK
4. What is meant by the "binding problem" and what are two hypotheses to explain it? Check your answer on page 104.
Module 4.2 In Closing: Functions of the Cerebral Cortex The human cerebral cortex is so large that we easily slip into thinking of it as “the” brain, with all of the rest of the brain almost trivial. In fact, only mammals have a true cerebral cortex, and many mammals have only a small one. So subcortical areas by themselves can produce very complex behaviors, and a cerebral cortex by itself cannot do anything at all (because it would not be connected to any sense organs or muscles). What, then, is the function of the cerebral cortex? The primary function seems to be one of elaborating sensory material. Even fish, which have no cerebral cortex, can see, hear, and so forth, but they do not recognize and remember all the complex aspects of sensory stimuli that mammals do. In a television advertisement, one company says that it doesn’t make any products, but it makes lots of products better. The same could be said for the cerebral cortex.
Summary
1. Although brain size varies among mammalian species, the overall organization is similar. (p. 96)
2. The cerebral cortex has six laminae (layers) of neurons. A given lamina may be absent from certain parts of the cortex. The cortex is organized into columns of cells arranged perpendicular to the laminae. (p. 96)
3. The occipital lobe of the cortex is primarily responsible for vision. Damage to part of the occipital lobe leads to blindness in part of the visual field. (p. 98)
4. The parietal lobe processes body sensations. The postcentral gyrus contains four separate representations of the body. (p. 98)
5. The temporal lobe contributes to hearing and to complex aspects of vision. (p. 98)
6. The frontal lobe includes the precentral gyrus, which controls fine movements. It also includes the prefrontal cortex, which contributes to memories of current and recent stimuli, planning of movements, and regulation of emotional expressions. (p. 100)
7. The binding problem is the question of how we connect activities in different brain areas, such as sights and sounds. The various brain areas do not all send their information to a single central processor. (p. 102)
8. One hypothesis to answer the binding problem is that the brain binds activity in different areas when those areas produce precisely synchronous waves of activity. Still, many questions remain. (p. 102)
Answers to STOP & CHECK Questions
1. Column (p. 101)
2. Temporal lobe; parietal lobe; occipital lobe; frontal lobe (p. 101)
3. The prefrontal cortex is especially important for working memory (memory for what is currently happening) and for modifying behavior based on the context. (p. 101)
4. The binding problem is the question of how the brain combines activity in different brain areas to produce unified perception and coordinated behavior. One hypothesis is that the brain binds activity in different areas when those areas produce precisely synchronized waves of activity. Another hypothesis is that binding requires first identifying the location of each object; when the sight and sound appear to come from the same location, we bind them as a single experience. (p. 103)
Thought Question
When monkeys with Klüver-Bucy syndrome pick up lighted matches and snakes, we do not know whether they are displaying an emotional deficit or an inability to identify the object. What kind of research method might help answer this question?
Module 4.3
Research Methods
Imagine yourself trying to understand some large, complex machine. You could begin by describing the appearance and location of the machine's main parts. That task could be formidable, but it is easy compared to discovering what each part does. Similarly, describing the structure of the brain is difficult enough, but the real challenge is to discover how it works. The methods for exploring brain functions are quite diverse, and throughout the text, we shall consider particular research methods as they become appropriate. However, most methods fall into a few categories. In this module, we consider those categories and the logic behind them. We also examine a few of the most common research techniques that will reappear in one chapter after another. The main categories of methods for studying brain function are as follows:
1. Correlate brain anatomy with behavior. Do people with some unusual behavior also have unusual brains? If so, in what way?
2. Record brain activity during behavior. For example, we might record changes in brain activity during fighting, sleeping, finding food, or solving a problem.
3. Examine the effects of brain damage. After damage or temporary inactivation, what aspects of behavior are impaired?
4. Examine the effects of stimulating some brain area. Ideally, a behavior that is impaired by damage to some area should be enhanced by stimulating the same area.
Correlating Brain Anatomy with Behavior One of the first ways ever used for studying brain function sounds easy: Find someone with unusual behavior and then look for unusual features of the brain. In the 1800s, Franz Gall observed some people with excellent verbal memories who had protruding eyes. He inferred that verbal memory depended on brain areas behind the eyes that had pushed the eyes forward. Gall then examined the skulls of people with other talents
or personalities. He could not examine their brains, but he assumed that bulges and depressions on the skull corresponded to the brain areas below them. His process of relating skull anatomy to behavior is known as phrenology. One of his followers made the phrenological map in Figure 4.28. One problem with phrenologists was their uncritical use of data. In some cases, they examined just one person with some behavioral quirk to define a brain area presumably responsible for it. Another problem was that skull shape has little relationship to brain anatomy. The skull is thicker in some places than others and thicker in some people than others. Other investigators of the 1800s and 1900s rejected the idea of examining skulls but kept the idea that brain anatomy relates to behavior. One project was to remove people's brains after death and see whether the brains of eminent people looked unusual in any way. Several societies arose in which members agreed to donate their brains after death to the research cause. No conclusion resulted. The brains of the eminent varied considerably in size and external anatomy; so did the brains of everyone else. Certainly, if brain anatomy related to eminence or anything else, the relation wasn't obvious (Burrell, 2004). At the end of this module, we'll return to the issue of brain anatomy and intelligence. Modern methods enable us to approach the question more systematically than in the past, although the conclusions are still murky. Even if we ignore the question of how overall brain size or shape relates to anything, the size of particular areas within the brain might relate to specific behaviors. For example, researchers would like to know whether people with schizophrenia or other psychiatric disorders have any brain abnormalities. Today, they can examine brain anatomy in detail in living people using large enough groups for statistical analysis. We shall encounter a few examples of this kind of research throughout the text. One method is computerized axial tomography, better known as a CT or CAT scan (Andreasen, 1988). A physician injects a dye into the blood (to increase contrast in the image) and then places the person's head into a CT scanner like the one shown in Figure 4.29a. X-rays are passed through the head and
recorded by detectors on the opposite side. The CT scanner is rotated slowly until a measurement has been taken at each angle over 180 degrees. From the measurements, a computer constructs images of the brain. Figure 4.29b is an example. Another method is magnetic resonance imaging (MRI) (Warach, 1995), which is based on the fact that any atom with an odd-numbered atomic weight, such as hydrogen, has an axis of rotation. An MRI device applies a powerful magnetic field (about 25,000 times the magnetic field of the earth) to align all the axes of rotation and then tilts them with a brief radio frequency field. When the radio frequency field is turned off, the atomic nuclei release electromagnetic energy as they relax and return to their original axis. By measuring that energy, MRI devices form an image of the brain, such as the one in Figure 4.30. MRI images anatomical details smaller than a millimeter in diameter. One drawback is that the person must lie motionless in a confining, noisy apparatus. The procedure is not suitable for children or anyone who fears enclosed places.
Figure 4.28 A phrenologist’s map of the brain Neuroscientists today also try to localize functions in the brain, but they use more careful methods and they study such functions as vision and hearing, not “secretiveness” and “marvelousness.” (Source: From Spurzheim, 1908)
Figure 4.29 CT scanner (a) A person’s head is placed into the device and then a rapidly rotating source sends x-rays through the head while detectors on the opposite side make photographs. A computer then constructs an image of the brain. (b) A view of a normal human brain generated by computerized axial tomography (CT scanning).
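As a rough illustration of how a computer can construct an image from projections taken at many angles, here is a minimal sketch of unfiltered backprojection in Python. It is not the algorithm used by any particular scanner (real CT reconstruction adds a filtering step), and the array sizes and the placeholder sinogram below are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """Smear each 1-D projection back across the image at its acquisition angle."""
    size = sinogram.shape[1]                 # number of detector bins = image width
    image = np.zeros((size, size))
    for projection, angle in zip(sinogram, angles_deg):
        smear = np.tile(projection, (size, 1))            # copy the projection down every row
        image += rotate(smear, angle, reshape=False)      # rotate it back to the measured angle
    return image / len(angles_deg)

# Hypothetical input: one projection per degree over 180 degrees, 128 detector bins each.
angles = np.arange(180)
sinogram = np.ones((180, 128))               # stand-in for real detector readings
reconstruction = backproject(sinogram, angles)
```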
Figure 4.30 A view of a living brain generated by magnetic resonance imaging. Any atom with an odd-numbered atomic weight, such as hydrogen, has an inherent rotation. An outside magnetic field can align the axes of rotation. A radio frequency field can then make all these atoms move like tiny gyros. When the radio frequency field is turned off, the atomic nuclei release electromagnetic energy as they relax. By measuring that energy, we can obtain an image of a structure such as the brain without damaging it. (Photo: © Will and Demi McIntyre/Photo Researchers)
One limitation is something you will hear repeatedly in psychology: Correlation does not mean causation. For example, brain abnormalities could influence behavior, but it is also possible that abnormal behavior led to brain abnormalities. We need other kinds of evidence to support cause-and-effect conclusions.
STOP & CHECK
1. Researchers today sometimes relate differences in people's behavior to differences in their brain anatomy. How does their approach differ from that of the phrenologists? Check your answer on page 116.
Recording Brain Activity
When you watch a sunset, feel frightened, or solve a mathematical problem, which brain areas change their activity? With laboratory animals, researchers insert electrodes to record brain activity (see Methods 6.1, page 173). Studies of human brains use noninvasive methods—that is, methods that don't require inserting anything. A device called the electroencephalograph (EEG) records electrical activity of the brain through electrodes—ranging from just a few to more than a hundred—attached to the scalp (Figure 4.31). Electrodes glued to the scalp measure the average activity at any moment for the population of cells under the electrode. The output is then amplified and recorded. This device can record either spontaneous brain activity or activity in response to a stimulus, in which case we call the results evoked potentials or evoked responses. For one example of a study, researchers recorded evoked potentials from young adults as they watched pictures of nudes of both sexes. Men reported high arousal by the female nudes, while women reported neutral feelings to both the males and females, but both men's and women's brains showed strong evoked potentials to the opposite-sex nudes (Costa, Braun, & Birbaumer, 2003).
Figure 4.31 Electroencephalography. An electroencephalograph records the overall activity of neurons under various electrodes attached to the scalp. (Photo: © Richard Nowitz/Photo Researchers)
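The text does not spell out how an evoked potential is extracted from the ongoing EEG; the standard approach is to average the recording across many repetitions of the stimulus, so that activity unrelated to the stimulus tends to cancel. Here is a minimal sketch with made-up numbers (100 trials, 250 Hz sampling, a small response about 300 ms after the stimulus), intended only to show the averaging logic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, fs = 100, 250                      # hypothetical: 100 stimulus repetitions, 250 Hz sampling
t = np.arange(fs) / fs                       # one second of EEG per trial

true_response = 2e-6 * np.exp(-((t - 0.3) ** 2) / 0.005)   # small deflection ~300 ms after the stimulus
trials = true_response + 20e-6 * rng.standard_normal((n_trials, t.size))  # buried in ongoing EEG

evoked_potential = trials.mean(axis=0)       # time-locked averaging: unrelated activity averages toward zero
noise_left = np.std(evoked_potential - true_response)
print(f"noise on a single trial ~2e-05 V; noise after averaging ~{noise_left:.1e} V")
```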
That is, evoked potentials sometimes reveal information that self-reports do not. A magnetoencephalograph (MEG) is similar, but instead of measuring electrical activity, it measures the faint magnetic fields generated by brain activity (Hari, 1994). Like EEG, an MEG recording identifies only the approximate location of activity to within about a centimeter. However, MEG has excellent temporal resolution, showing changes from one millisecond to another. Figure 4.32 shows an MEG record of brain responses to a brief tone heard in the right ear. The diagram represents a human head as viewed from above, with the nose at the top (Hari, 1994). Researchers using MEG can identify the times at which various brain areas respond and thereby trace a wave of brain activity from its point of origin to all the other areas that process it (Salmelin, Hari, Lounasmaa, & Sams, 1994).
Figure 4.32 A result of magnetoencephalography, showing responses to a tone in the right ear. The nose is shown at the top. For each spot on the diagram, the display shows the changing response over a few hundred ms following the tone (calibration: 200 fT/cm, 200 ms). The tone evoked responses in many areas, with the largest responses in the temporal cortex, especially on the left side. (Source: Reprinted from Neuroscience: From the Molecular to the Cognitive, by R. Hari, 1994, p. 165, with kind permission from Elsevier Science–NL, Sara Burgerhartstraat 25, 1055 KV Amsterdam, The Netherlands.)
Another method, positron-emission tomography (PET), provides a high-resolution image of activity in a living brain by recording the emission of radioactivity from injected chemicals. First, the person receives an injection of glucose or some other chemical containing radioactive atoms. When a radioactive atom decays, it releases a positron that immediately collides with a nearby electron, emitting two gamma rays in exactly opposite directions. The person's head is surrounded by a set of gamma ray detectors (Figure 4.33). When two detectors record gamma rays at the same time, they identify a spot halfway between those detectors as the point of origin of the gamma rays. A computer uses this information to determine how many gamma rays are coming from each spot in the brain and therefore how much of the radioactive chemical is located in each area (Phelps & Mazziotta, 1985). The areas showing the most radioactivity are the ones with the most blood flow and therefore, presumably, the most brain activity. Ordinarily, PET scans use radioactive chemicals with a short half-life, made in a large device called a cyclotron. Because cyclotrons are large and expensive, PET scans are available only at research hospitals. Furthermore, PET requires exposing the brain to radioactivity. For most purposes, PET scans have been replaced by functional magnetic resonance imaging (fMRI), which is somewhat less expensive and poses no known health risks.
Figure 4.33 A PET scanner. A person engages in a cognitive task while attached to this apparatus that records which areas of the brain become more active and by how much. (Photo: Michael Evans/Life File/Getty Images)
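A toy sketch of the localization rule just described: when two detectors on the ring record gamma rays at the same time, the estimated point of origin is halfway between them. The number and placement of detectors here are arbitrary assumptions, and a real scanner accumulates millions of such coincidences rather than single points.

```python
import numpy as np

n_detectors = 64
angles = 2 * np.pi * np.arange(n_detectors) / n_detectors
detectors = np.column_stack([np.cos(angles), np.sin(angles)])   # detector ring around the head

def coincidence_point(i, j):
    """Point halfway between two detectors that recorded gamma rays simultaneously."""
    return (detectors[i] + detectors[j]) / 2

print(coincidence_point(0, 32))   # opposite detectors -> estimated origin near the center, [0, 0]
print(coincidence_point(0, 16))   # detectors a quarter-ring apart -> a point off to one side
```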
Standard MRI scans record the energy released by water molecules after removal of a magnetic field. Because the brain has little net flow of water, MRI doesn't show changes over time. Functional magnetic resonance imaging (fMRI) is a modified version of MRI based on hemoglobin (the blood protein that binds oxygen) (Detre & Floyd, 2001). Hemoglobin with oxygen reacts to a magnetic field differently from hemoglobin without oxygen. Because oxygen consumption increases in the brain areas with the greatest activity (Mukamel et al., 2005), researchers can set the fMRI scanner to detect changes in the oxygen content of the blood and thereby measure the relative levels of brain activity in various areas (Logothetis, Pauls, Augath, Trinath, & Oeltermann, 2001). An fMRI image has a spatial resolution of 1 or 2 mm (almost as good as standard MRI) and temporal resolution of about a second (Figure 4.34). Various other methods for brain scans are also in use (Grinvald & Hildesheim, 2004). For more information about brain scan techniques and some striking pictures, check this website: http://www.musc.edu/psychiatry/fnrd/primer_index.htm
Figure 4.34 An fMRI scan of a human brain. An fMRI produces fairly detailed photos at rates up to about one per second. (Photo: Simon Fraser, Dept. of Neuroradiology, Newcastle General Hospital/Science Photo Library)
Unfortunately, interpreting the images is not easy. For example, a raw measure of your brain activity while you were reading would mean nothing without a comparison to something else. So researchers would record your brain activity once while you were reading and once while you were, well, not reading, but doing what? Doing "nothing" is not an option; a healthy human brain is always doing something. As a comparison task, researchers might ask you to look at a page written in a foreign language. Identifying the areas active during reading still does not tell us what those areas do during reading. Reading requires language, memory, visual attention, and other skills, so further research would be needed to identify which areas do what. The task might seem overwhelming, and it would be for any laboratory by itself, but researchers share their results in an online library of fMRI results (Van Horn, Grafton, Rockmore, & Gazzaniga, 2004). One important generalization is that brain scan results vary across individuals. For example, solving a chess problem activates different areas for average players than for chess experts. For chess experts, who may have seen the same position many times before, it is not a reasoning problem but a memory problem (Amidzic, Riehle, Fehr, Wienbruch, & Elbert, 2001; Pesenti et al., 2001).
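To make the comparison logic concrete (reading versus looking at a foreign-language page), here is a minimal sketch with invented numbers: the BOLD signal of a few voxels is compared between task blocks and comparison blocks, and the voxel whose signal rises during the task stands out. Real analyses model the hemodynamic response and correct for the enormous number of voxels tested; nothing here is drawn from the studies cited above.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels, n_scans = 5, 60
task_block = (np.arange(n_scans) // 10) % 2 == 1          # alternate 10-scan comparison and task blocks

bold = rng.normal(100.0, 1.0, size=(n_voxels, n_scans))    # baseline signal plus noise for each voxel
bold[2, task_block] += 2.0                                 # pretend voxel 2 is more active during the task

# Simplest possible contrast: mean task signal minus mean comparison signal, voxel by voxel.
contrast = bold[:, task_block].mean(axis=1) - bold[:, ~task_block].mean(axis=1)
print(np.round(contrast, 2))                               # voxel 2 should show the largest difference
```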
Effects of Brain Damage
In 1861, the French neurologist Paul Broca found that nearly every patient who had lost the ability to speak had damage in part of the left frontal cortex, an area now known as Broca's area. That was the first discovery about the function of any brain area and a pioneering event in modern neurology. Since then, researchers have made countless reports of behavioral impairments after brain damage from stroke, disease, and other causes. From a research standpoint, the problem is the lack of control. Most people with damage in any area have damage to other areas too, and no two people have exactly the same damage. With laboratory animals, researchers can intentionally damage a selected area. A lesion is damage to a brain area; an ablation is a removal of a brain area. To damage a structure in the interior of the brain, researchers use a stereotaxic instrument, a device for the precise placement of electrodes in the brain (Figure 4.35). By consulting a stereotaxic atlas (map) of some species' brain, a researcher aims an electrode at the desired position with reference to landmarks on the skull (Figure 4.36). Then the researcher anesthetizes an animal, drills a small hole in the skull, inserts the electrode, and lowers it to the target. Suppose someone makes a lesion, finds that the animal stops eating, and concludes that the area is important for eating. "Wait a minute," you might ask. "How do we know the deficit wasn't caused by anesthetizing the animal, drilling a hole in its skull, and lowering an electrode to this target?" To test this possibility, an experimenter produces a sham lesion in a control group, performing all the same procedures but without the electrical current. Any behavioral difference between the two groups must result from the lesion and not the other procedures.
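The sham-lesion logic can be illustrated with a toy analysis. The group sizes, food-intake values, and the use of a t test are assumptions for illustration only; the point is that both groups undergo every procedure except the lesioning current, so any difference between them can be attributed to the lesion itself.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
sham = rng.normal(20.0, 3.0, size=10)      # daily food intake (g) for 10 sham-operated rats
lesion = rng.normal(12.0, 3.0, size=10)    # 10 rats with the real lesion

# Both groups received anesthesia, drilling, and electrode placement; only the
# lesion group received the current, so a group difference points to the lesion.
t_stat, p_value = ttest_ind(lesion, sham)
print(f"mean sham = {sham.mean():.1f} g, mean lesion = {lesion.mean():.1f} g, p = {p_value:.4f}")
```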
Figure 4.36 Skull bones of a rat. Bregma, the point where four bones meet on the top of the skull, is a useful landmark from which to locate areas of the brain. Labeled features include the frontal, parietal, interparietal, occipital, and nasal bones, bregma, and the eye socket. (Provided by James W. Kalat)
Figure 4.35 A stereotaxic instrument for locating brain areas in small animals Using this device, researchers can insert an electrode to stimulate, record from, or damage any point in the brain.
between the two groups must result from the lesion and not the other procedures. Besides lesions, several other procedures can inactivate various brain structures or systems. In the gene-knockout approach, researchers use biochemical methods to direct a mutation to a particular gene that is important for certain types of cells, transmitters, or receptors (Joyner & Guillemot, 1994). Certain chemicals temporarily inactivate one part of the brain or one type of synapse. Transcranial magnetic stimulation, the application of an intense magnetic field to a portion of the scalp, can temporarily inactivate the neurons below the magnet (Walsh & Cowey, 2000). This procedure enables researchers to study a given individual’s behavior with the brain area active, then inactive, and then active again. Figure 4.37 shows the apparatus for this procedure. With any of these approaches, a big problem is to specify the exact behavioral deficit. By analogy, suppose you cut a wire inside a television and the picture disappeared. You would know that this wire is necessary for the picture, but you would not know why. Similarly, if you damaged a brain
Images not available due to copyright restrictions
area and the animal stopped eating, you wouldn’t know how that area contributes to eating. To find out, you would need to test eating and other behaviors under many conditions.
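The control logic of a sham-lesion experiment can also be written out in a few lines. The sketch below assumes made-up feeding scores for ten lesioned and ten sham-operated animals; the group means, sample sizes, and crude t-like statistic are illustrative assumptions, not data or methods from any study cited here.

```python
import numpy as np

# Minimal sketch of the sham-lesion control logic described above
# (hypothetical numbers): compare a behavioral measure, grams of food
# eaten per day, between lesioned and sham-operated animals.
rng = np.random.default_rng(1)
lesion = rng.normal(loc=12.0, scale=3.0, size=10)   # assumed lesion group
sham = rng.normal(loc=20.0, scale=3.0, size=10)     # assumed sham group

# Because both groups received anesthesia, drilling, and electrode insertion,
# any reliable difference is attributable to the lesion itself.
diff = sham.mean() - lesion.mean()
se = np.sqrt(sham.var(ddof=1) / sham.size + lesion.var(ddof=1) / lesion.size)
print(f"difference = {diff:.1f} g/day, t-like statistic = {diff / se:.2f}")
```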
Effects of Brain Stimulation

If brain damage impairs some behavior, stimulation should increase it. Researchers can insert electrodes to stimulate brain areas in laboratory animals. With humans, they can use a less invasive procedure (although it provides less precision). Researchers apply a magnetic field to the scalp, thereby stimulating the brain areas beneath it (Fitzgerald, Brown, & Daskalakis, 2002). Whereas intense transcranial magnetic stimulation inactivates the underlying area, a brief, milder application stimulates it. Another approach is to inject a chemical that stimulates one kind of receptor. That method, of course, stimulates those receptors throughout the brain, not just in one area. One limitation of any stimulation study is that complex behaviors and experiences depend on many brain areas, not just one, so artificial stimulation pro-
duces artificial responses. For example, electrically or magnetically stimulating the primary visual areas of the brain produces reports of sparkling flashing points of light, not the sight of a face or other recognizable object. It is easier to discover which brain area is responsible for vision (or movement or whatever) than to discover how it produces a meaningful pattern. Table 4.5 summarizes various methods of studying brain-behavior relationships.
STOP & CHECK
2. What is the difference between invasive and noninvasive procedures?
3. How do the effects of brief, mild magnetic stimulation differ from those of longer, more intense stimulation?
4. Why does electrical or magnetic stimulation of the brain seldom produce complex, meaningful sensations or movements?
Check your answers on page 116.
Table 4.5 Brain-Behavior Research Methods

Correlate Brain Anatomy with Behavior
Computerized axial tomography (CAT): Maps brain areas, but requires exposure to x-rays
Magnetic resonance imaging (MRI): Maps brain areas in detail, using magnetic fields

Record Brain Activity During Behavior
Record from electrodes in brain: Invasive; used with laboratory animals, seldom humans
Electroencephalograph (EEG): Records from scalp; measures changes by ms, but with low resolution of the location of the signal
Evoked potentials: Similar to EEG but in response to stimuli
Magnetoencephalograph (MEG): Similar to EEG but measures magnetic fields
Positron emission tomography (PET): Measures changes over both time and location but requires exposing the brain to radiation
Functional magnetic resonance imaging (fMRI): Measures changes over about 1 second, identifies location within 1–2 mm, no use of radiation

Examine Effects of Brain Damage
Study victims of stroke etc.: Used with humans; each person has different damage
Lesion: Controlled damage in laboratory animals
Ablation: Removal of a brain area
Gene-knockout: Effects wherever that gene is active (e.g., a receptor)
Transcranial magnetic stimulation: Intense application temporarily inactivates a brain area

Examine Effects of Stimulating a Brain Area
Stimulating electrodes: Invasive; used with laboratory animals, seldom with humans
Transcranial magnetic stimulation: Brief, mild application activates underlying brain area
Brain and Intelligence

How does intelligence relate to brain size or structure, if at all? This is a question about which you might be curious. It is also an example of how new methods facilitate more detailed research. As mentioned at the start of this module, many researchers compared the brains of eminent (presumably intelligent) people to those of less successful people but failed to find any obvious difference in total brain size or other easily observed features. More recently, neuroscientists examined the brain of the famous scientist Albert Einstein, again hoping to find something unusual. Einstein’s total brain size was just average. He did have a higher than average ratio of glia to neurons in one brain area (M. C. Diamond, Scheibel, Murphy, & Harvey, 1985). However, because researchers examined several areas and found a difference in only one, the difference could be accidental or irrelevant. Another study found expansion of part of Einstein’s parietal cortex, as shown in Figure 4.38 (Witelson, Kigar, & Harvey, 1999). However, a study of a single brain produces no more than a suggestive hypothesis. Indeed, a little reflection should convince us that brain size can’t be synonymous with intelligence. If it
were, then we could promote intelligence just by providing lots of good nutrition, and we wouldn’t have to bother with education. However, despite the arguments against it and the weak evidence for it, the idea has lingered: Shouldn’t brain size have something to do with intelligence? Even if the idea isn’t quite right, is it completely wrong? By analogy, muscle size isn’t a good predictor of athletic ability—except for a few sports like weightlifting—but it isn’t completely irrelevant either.
Comparisons Across Species

For better or worse, we humans dominate life on earth, presumably because of our brains. However, all mammalian brains have the same basic organization. The components, such as the visual cortex and the auditory cortex, are in the same relative locations, and all mammalian brains have the same cell types and same neurotransmitters. Mammals also resemble one another in the proportions of various brain areas. Choose any two major brain areas, such as hippocampus and thalamus. Call one area A and the other B. Now choose any mammalian species. If you know the size of area A, you can fairly accurately predict the size of area B. The main exception to this rule is the olfactory bulb, which is,
Images not available due to copyright restrictions
for example, large in dogs and small in humans (Finlay & Darlington, 1995). So, because brain organization is about the same across species, the main differences are quantitative. Do variations in overall size relate to intelligence? We humans like to think of ourselves as the most intelligent animals—after all, we’re the ones defining what intelligence means! However, if we look only at size, we cannot demonstrate our intellectual superiority. Elephants’ brains are four times the size of ours, and sperm whales’ brains are twice as big as elephants’. Perhaps, many people suggest, the more important consideration is brain-to-body ratio. Figure 4.39 illustrates the relationship between logarithm of body mass and logarithm of brain mass for various vertebrates (Jerison, 1985). Note that the species we regard as most intelligent—for example, ahem, ourselves—have larger brains in proportion to body size than do the species we consider less impressive, such as frogs.
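The log-log relationship plotted in Figure 4.39 can be illustrated with a toy calculation. The body and brain masses below are rough, rounded values chosen only for demonstration (they are not Jerison's data), and the fitted power law is meant simply to show how a species such as humans falls above the line predicted from body size alone.

```python
import numpy as np

# Minimal sketch of the log-log (allometric) relationship in Figure 4.39,
# using rough illustrative masses; these numbers are assumptions for
# demonstration, not values from the text or the cited studies.
species = ["mouse", "cat", "human", "elephant"]
body_kg = np.array([0.02, 4.0, 65.0, 5000.0])
brain_g = np.array([0.4, 30.0, 1350.0, 4800.0])

# Fit brain mass as a power law of body mass: log(brain) = a*log(body) + b.
a, b = np.polyfit(np.log10(body_kg), np.log10(brain_g), 1)
predicted = 10 ** (a * np.log10(body_kg) + b)

for name, obs, pred in zip(species, brain_g, predicted):
    ratio = obs / pred
    print(f"{name:9s} observed {obs:7.1f} g, allometric prediction {pred:7.1f} g, ratio {ratio:.2f}")
# Species lying above the fitted line (ratio > 1), such as humans in this
# toy fit, have more brain than body size alone would predict.
```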
Figure 4.40 An elephant-nose fish The brain of this odd-looking fish weighs 0.3 g (0.01 oz), which is 3% of the weight of the whole fish—a vastly higher percentage than most other fish and higher even than humans. What this fish does with so much brain, we don’t know, but it may relate to the fish’s unusual ability to detect electrical fields.
However, brain-to-body ratio has problems also: Chihuahuas have the highest brain-to-body ratio of all dog breeds, not because they were bred for intelligence but because they were bred for small body size (Deacon, 1997). Squirrel monkeys, which are also very thin, have a higher brain-to-body ratio than humans. (And with the increasing prevalence of human obesity, our brain-to-body ratio is declining even more!) The elephant-nose fish (Figure 4.40), which you might keep in a tropical fish aquarium, has a brain that weighs just 0.3 g, but that’s 3% of the total weight of the fish, as compared to humans’ 2% brain-to-body ratio (Nilsson, 1999). So neither total brain mass nor brain-to-body ratio puts humans in first place. We might look for some more complex measure that considers both total brain size and brain-to-body ratio. But before we can test various formulas, we need a clear definition of animal intelligence, which has been an elusive concept, to say the least (Macphail, 1985). Given that studies of brain and behavior in nonhumans are not helping, let’s abandon the effort and turn to humans.
Figure 4.39 Relationship of brain mass to body mass across species Each species is one point within one of the polygons (humans, primates, nonprimate mammals, birds, reptiles, amphibians, and fish; axes are log of body mass and log of brain mass). In general, log of body mass is a good predictor of log of brain mass. Note that primates in general and humans in particular have a large brain mass in proportion to body mass. (Source: Adapted from Jerison, 1985)

STOP & CHECK
5. If you know the total brain size for some species and you want to predict the size of a given structure, which structure would be hardest to predict?
6. Why are both brain size and brain-to-body ratio unsatisfactory ways of estimating animal intelligence?
Check your answers on page 116.
Comparisons Across Humans
For many years, studies of brain size and intelligence in humans found correlations barely above zero. However, a low correlation between two variables can mean either that they are truly unrelated or that at least one of them was measured poorly. In this case, the measurements of intelligence (by IQ tests) were of course imperfect, and the measurements of brain size were as bad or worse. External skull size is a poor measure of brain size because some people have thicker skulls than others. Measuring the internal volume of the skull (after death) is also imperfect because many people’s brains do not fill the entire skull. Removing the brain
after death and weighing it has its own problems. Brains begin to decompose immediately after death, and they begin to dry out as soon as they are removed from the skull. Today, however, MRI scans can accurately measure brain volume in healthy, living people. Most studies, though not all (Schoenemann, Budinger, Sarich, & Wang, 2000), have found a moderate positive correlation between brain size and IQ, typically around .3 (Willerman, Schultz, Rutledge, & Bigler, 1991). Two studies on twins found greater resemblance between monozygotic than dizygotic twins for both brain size and IQ scores (Pennington et al., 2000; Posthuma et al., 2002) (Figure 4.41). Surprisingly, IQ correlated more strongly with the size of subcortical areas than with that of the cerebral cortex (Pennington et al., 2000). Do the same genes that control brain size also influence IQ? To approach this question, investigators again examined pairs of twins. For monozygotic twins, they found that the size of one twin’s brain correlated .31 with the other twin’s IQ. For dizygotic twins, the correlation was .15. These results suggest that the genes controlling brain size also influence IQ (Pennington et al., 2000). Several genes have been identified that influence both brain structure and intellectual performance (Pezawas et al., 2004; Zhang, 2003). Another approach is to examine the correlation between IQ scores and specific brain areas. In one study, investigators used MRI to measure the size of gray matter and white matter areas throughout the brains of 23 young adults from one university campus and 24 middle-aged or older adults from another campus. In Figure 4.42, the areas highlighted in red showed a statistically significant correlation with IQ; those highlighted in yellow showed an even stronger correlation.
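To get a feel for what a correlation of about .3 means, here is a small simulation. The brain volumes and IQ scores are generated at random with a deliberately weak linear link; none of these numbers come from the studies cited above.

```python
import numpy as np

# Minimal sketch of a correlation near .3 between brain volume and IQ.
# All values are simulated for illustration only.
rng = np.random.default_rng(2)
n = 100
brain_cm3 = rng.normal(1250, 100, size=n)                      # simulated volumes
iq = 100 + 0.045 * (brain_cm3 - 1250) + rng.normal(0, 14, n)   # weak link + noise

r = np.corrcoef(brain_cm3, iq)[0, 1]
print(f"simulated correlation r = {r:.2f}")
# With r near .3, brain volume accounts for only about r**2 (roughly 9%) of
# the variance in the simulated IQ scores: a real but modest relationship.
```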
Note two points: First, IQ correlates with the size of many brain areas. Second, the results differed between the two samples, either because of the age difference or for other unidentified reasons (Haier, Jung, Yeo, Head, & Alkire, 2004). As always, correlation does not mean causation. For example, brain size and IQ might correlate because good health and nutrition contribute to both brain growth and intellectual performance. In addition, how many pencils someone can hold in one hand no doubt correlates with the size of the hand. But it also correlates with the size of the foot, just because most people with large hands also have large feet. Similarly, the size of one brain area correlates with the size of others, so even if intelligence depended on only one brain area, it still might correlate with the size of other areas. Now for the most confusing part: Although IQ correlates positively with brain size for either men or women separately, men on the average have larger brains than women but equal IQs (Willerman et al., 1991). If brain size is important, why don’t men have higher IQs? One possibility is to examine brain-to-body ratio instead of just brain volume, but that may not be the whole answer. Most of the research has reported correlations between IQ and brain volume, not brainto-body ratio. Besides, if IQ depended literally on brain-to-body ratio, it should change when people gain or lose weight, and of course, it does not. A different hypothesis is that IQ relates more closely to the gray matter of the brain—that is, the cell bodies—rather than total mass including white matter (the axons). Women on the average have more and deeper gyri on the surface of the cortex, especially in the frontal and parietal areas (Luders et al., 2004). Consequently, the surface area of the cortex is about the
Figure 4.41 Correlations of brain size for twins Each graph is a scatter plot, in which each dot represents one pair of twins. Brain size for one twin is shown along the x axis; brain size for the other twin is along the y axis. Note that both kinds of twins show similarities, but the correlation is stronger for the monozygotic twins. (Source: From B. F. Pennington et al., “A twin MRI study of size variations in the human brain,” Journal of Cognitive Neuroscience, 12, pp. 223–232, Figures 1, 2. © 2000 by the Massachusetts Institute of Technology. Reprinted with permission.)
Image not available due to copyright restrictions
same in men and women. Because the surface is lined with cells (gray matter), the sexes are nearly equal in gray matter, whereas men have more white matter (Allen, Damasio, Grabowski, Bruss, & Zhang, 2003). An MRI study found a .34 correlation between teenagers’ IQ scores and the gray matter volumes of their brains (Frangou, Chitins, & Williams, 2004). So a tentative conclusion is that, other things being equal, more gray matter is associated with better performance on intellectual tests. However, let’s go back to species comparisons: The difference between human brains and those of chimpanzees and gorillas is more a matter of increased white matter in humans than increased gray matter (Schoenemann, Sheehan, & Glotzer, 2005). So the species difference seems to imply that white matter (i.e., connectivity within the brain) is important for intelligence. At this point, we have far better data than before about the brain and intelligence, but the overall picture is still confusing. On the other hand, how important is this question, really? It has been a matter of curiosity for many people for a long time, but it has no great theoretical importance or practical applications. Relating total brain to total intelligence is like relating the geographical area of a country to the size of its population: Sure, the correlation is positive, but it overlooks a host of more interesting variables. Progress in both psychology and neuroscience depends on making finer-grained distinctions. How do the anatomy, chemistry, and other features of each part of the brain relate to specific aspects
of behavior? In the rest of this text, we concentrate on these types of questions.
STOP & CHECK
7. Why do recent studies show a stronger relationship between brain size and IQ than older studies did?
8. What evidence indicated that the genes that control human brain size also influence IQ?
Check your answers on page 116.
Module 4.3 In Closing: Research Methods and Their Limits

Descriptions of the history of science sometimes highlight a single study that “conclusively” established one theory or another. Such events are rare. Far more often, researchers gradually accumulate evidence that points in a particular direction, until eventually that view becomes dominant. Even in those rare cases when a single study appears to have been decisive, researchers often identify it as decisive only in retrospect, after several additional studies have confirmed the finding.
The reason we need so many studies is that almost any study has limitations. For example, one set of researchers found evidence linking one part of the human brain to mathematical thinking (Piazza, Izard, Pinel, Le Bihan, & Dehaene, 2004). Another study at about the same time found evidence against that conclusion (Shuman & Kanwisher, 2004). Both studies were conducted by well-respected scientists using similar methods. In a case like this, we hope additional research will identify key differences between the two studies. Do the differences depend on the age of the participants, the type of mathematical task, or the exact method of measuring brain activity? Sometimes what seem like small differences in procedure produce very different outcomes (MacAndrew, Klatzky, Fiez, McClelland, & Becker, 2002). For example, several decades ago, two laboratories studying human learning found that they got consistently different results just because one of them used chairs that resembled dentists’ chairs, which reminded participants of pain or displeasure (Kimble, 1967). Even when several studies using the same method produce similar results, the possibility remains that the method itself has a hidden flaw. Therefore, scientists prefer whenever possible to compare results from widely different methods. The more types of evidence point to a given conclusion, the greater our confidence.
Summary
1. People who differ with regard to some behavior sometimes also differ with regard to their brain anatomy. MRI is one modern method of imaging a living brain. However, correlations between behavior and anatomy should be evaluated cautiously. (p. 105)
2. Another research method is to record activity in some brain area during a given behavior. Many methods are available, including EEG, fMRI, and other noninvasive procedures. (p. 107)
3. Another way to study brain-behavior relationships is to examine the effects of brain damage. If someone loses an ability after some kind of brain damage, then that area contributes in some way, although we need more research to determine how. (p. 109)
4. If stimulation of a brain area increases some behavior, presumably that area contributes to the behavior. (p. 111)
5. Recent research using modern methods suggests a moderate positive relationship between brain size and intelligence, although many puzzles and uncertainties remain. (p. 112)
6. Each method by itself has limitations, and any conclusion must remain tentative, pending further research using additional methods and populations. (p. 115)
Answers to STOP & CHECK Questions
1. The phrenologists drew conclusions based on just one or a few people with some oddity of behavior. Today’s researchers compare groups statistically. Also, today’s researchers examine the brain itself, not the skull above it. (p. 107)
2. An invasive procedure is one in which the investigator inserts something, like an electrode. A noninvasive procedure inserts nothing and does not cause any known health risk. (p. 111)
3. Brief, mild magnetic stimulation on the scalp increases activity in the underlying brain areas, whereas longer, more intense stimulation blocks it. (p. 111)
4. Meaningful, complex sensations and movements require a pattern of precisely timed activity in a great many cells, not just a burst of overall activity diffusely in one area. (p. 111)
5. The olfactory bulb (p. 113)
6. If we consider ourselves to be the most intelligent species—and admittedly, that is just an assumption—we are confronted with the fact that we have neither the largest brains nor the highest brain-to-body ratio. Brain-to-body ratio depends on selection for thinness as well as selection for brain size. Furthermore, animal intelligence is undefined and poorly measured, so we cannot even determine what correlates with it. (p. 113)
7. The use of MRI greatly improves the measurement of brain size. (p. 115)
8. For pairs of monozygotic twins, the size of one twin’s brain correlates significantly with the other twin’s IQ (as well as his or her own). Therefore, whatever genes increase the growth of the brain also increase IQ. (p. 115)
Thought Question
Certain unusual aspects of brain structure were observed in the brain of Albert Einstein. One interpretation is that he was born with certain specialized brain features that encouraged his scientific and intellectual abilities. What is an alternative interpretation?
Chapter Ending
Key Terms and Activities

Terms
ablation (p. 109)
anterior (p. 83)
anterior commissure (p. 96)
autonomic nervous system (p. 83)
basal ganglia (p. 92)
Bell-Magendie law (p. 84)
binding problem (p. 102)
brainstem (p. 87)
central canal (p. 94)
central nervous system (CNS) (p. 82)
central sulcus (p. 98)
cerebellum (p. 88)
cerebral cortex (p. 96)
cerebrospinal fluid (CSF) (p. 94)
column (pp. 84, 97)
computerized axial tomography (CT or CAT scan) (p. 105)
contralateral (p. 83)
coronal plane (p. 83)
corpus callosum (p. 96)
cranial nerve (p. 87)
delayed-response task (p. 101)
distal (p. 83)
dorsal (p. 83)
dorsal root ganglion (p. 84)
electroencephalograph (EEG) (p. 107)
evoked potentials or evoked responses (p. 107)
fissure (p. 84)
forebrain (p. 89)
frontal lobe (p. 100)
functional magnetic resonance imaging (fMRI) (p. 109)
gamma waves (p. 102)
ganglion (pl.: ganglia) (p. 84)
gene-knockout approach (p. 110)
gray matter (p. 84)
gyrus (pl.: gyri) (p. 84)
hindbrain (p. 87)
hippocampus (p. 93)
horizontal plane (p. 83)
hypothalamus (p. 92)
inferior (p. 83)
inferior colliculus (p. 89)
ipsilateral (p. 83)
Klüver-Bucy syndrome (p. 100)
lamina (pl.: laminae) (pp. 84, 96)
lateral (p. 83)
lesion (p. 109)
limbic system (p. 90)
magnetic resonance imaging (MRI) (p. 106)
magnetoencephalograph (MEG) (p. 108)
medial (p. 83)
medulla (p. 87)
meninges (p. 94)
midbrain (p. 89)
nerve (p. 84)
neuroanatomy (p. 81)
nucleus (p. 84)
nucleus basalis (p. 93)
occipital lobe (p. 98)
parasympathetic nervous system (p. 85)
parietal lobe (p. 98)
peripheral nervous system (PNS) (p. 82)
phrenology (p. 105)
pituitary gland (p. 92)
pons (p. 88)
positron-emission tomography (PET) (p. 108)
postcentral gyrus (p. 98)
posterior (p. 83)
precentral gyrus (p. 100)
prefrontal cortex (p. 100)
prefrontal lobotomy (p. 100)
primates (p. 96)
proximal (p. 83)
raphe system (p. 88)
reticular formation (p. 88)
sagittal plane (p. 83)
sham lesion (p. 109)
somatic nervous system (p. 83)
spinal cord (p. 84)
stereotaxic instrument (p. 109)
substantia nigra (p. 89)
sulcus (pl.: sulci) (p. 84)
superior (p. 83)
superior colliculus (p. 89)
sympathetic nervous system (p. 85)
tectum (p. 89)
tegmentum (p. 89)
temporal lobe (p. 98)
thalamus (p. 91)
tract (p. 84)
transcranial magnetic stimulation (p. 110)
ventral (p. 83)
ventricle (p. 94)
white matter (p. 84)
Suggestions for Further Reading
Burrell, B. (2004). Postcards from the brain museum. New York: Broadway Books. Fascinating history of the attempts to collect brains of successful people and try to relate their brain anatomy to their success.
Klawans, H. L. (1988). Toscanini’s fumble and other tales of clinical neurology. Chicago: Contemporary Books. Description of illustrative cases of brain damage and their behavioral consequences.

Websites to Explore
You can go to the Biological Psychology Study Center and click these links. While there, you can also check for suggested articles available on InfoTrac College Edition. The Biological Psychology Internet address is:
http://psychology.wadsworth.com/book/kalatbiopsych9e/
Brain Imaging in Psychiatry
http://www.musc.edu/psychiatry/fnrd/primer_index.htm
The Whole Brain Atlas (neuroanatomy)
http://www.med.harvard.edu/AANLIB/home.html
Autonomic Nervous System
http://www.ndrf.org/ans.htm

Exploring Biological Psychology CD
Virtual Reality Head Planes (virtual reality)
Planes Puzzle (drag & drop)
3D Virtual Brain (virtual reality)
Left Hemisphere Function (roll over with text pop-ups)
Cortex Puzzle (drag & drop)
Sagittal Section: Right Hemisphere, parts 1–3 (roll over with text pop-ups)
Brain Puzzle (drag & drop)
The Motor Cortex (animation)
The Sensory Cortex (animation)
Illustration of Binding (Try It Yourself)
Possible Failure of Binding (Try It Yourself)
Research with Brain Scans (video)
Critical Thinking (essay questions)
Chapter Quiz (multiple-choice questions)
http://www.thomsonedu.com Go to this site for the link to ThomsonNOW, your one-stop study shop. Take a Pre-Test for this chapter, and ThomsonNOW will generate a Personalized Study Plan based on your test results. The Study Plan will identify the topics you need to review and direct you to online resources to help you master these topics. You can then take a Post-Test to help you determine the concepts you have mastered and what you still need to work on.
A short video illustrates the fMRI method.
The CD includes this virtual reality brain that you can rotate to various positions and dissect with a click.
Chapter 5: Development and Plasticity of the Brain
Chapter Outline
Module 5.1 Development of the Brain
Growth and Differentiation of the Vertebrate Brain
Pathfinding by Axons
Determinants of Neuronal Survival
The Vulnerable Developing Brain
Fine-Tuning by Experience
In Closing: Brain Development
Summary
Answers to Stop & Check Questions
Thought Questions
Module 5.2 Plasticity After Brain Damage
Brain Damage and Short-Term Recovery
Later Mechanisms of Recovery
In Closing: Brain Damage and Recovery
Summary
Answers to Stop & Check Questions
Thought Questions
Terms
Suggestions for Further Reading
Websites to Explore
Exploring Biological Psychology CD
ThomsonNOW

Main Ideas
1. Neurons begin by migrating to their proper locations and developing axons, which extend to approximately their correct targets by following chemical pathways.
2. The nervous system at first forms far more neurons than it needs and then eliminates those that do not establish suitable connections or receive sufficient input. It also forms excess synapses and discards the less active ones.
3. Experiences, especially early in life, can alter brain anatomy within limits.
4. Brain damage can result from a sharp blow, an interruption of blood flow, and several other types of injury.
5. Many mechanisms contribute to recovery from brain damage, including restoration of undamaged neurons to full activity, regrowth of axons, readjustment of surviving synapses, and behavioral adjustments.
“Some assembly required.” Have you ever bought a package with those ominous words? Sometimes all you have to do is attach a few parts. But sometimes you face page after page of incomprehensible instructions. I remember putting together my daughter’s bicycle and wondering how something that looked so simple could be so complicated. The human nervous system requires an enormous amount of assembly, and the instructions are different from those for a bicycle. Instead of, “Put this piece here and that piece there,” the instructions are, “Put these axons here and those dendrites there, and then wait to see what happens. Keep the connections that work the best and discard the others. Continue periodically making new connections and keeping only the most successful ones.” Therefore, we say that the brain’s anatomy is plastic; it is constantly changing, within limits. The brain changes rapidly in early development and continues changing throughout life.
Opposite: An enormous amount of brain development has already occurred by the time a person is 1 year old. Source: Dr. Dana Copeland
Module 5.1
Development of the Brain
Think of all the things you and other college students can do that you couldn’t have done a few years ago—analyze statistics, read a foreign language, write brilliant critiques of complex issues, and so on. Have you developed these new skills because of brain growth? No, or at least not in the usual sense. Many of your dendrites have grown new branches, but we would need an electron microscope to see any of them. Now think of all the things that 1-year-old children can do that they could not do at birth. Have they de-
veloped their new skills because of brain growth? To a large extent, yes. Consider, for example, Jean Piaget’s object permanence task, in which an observer shows a toy to an infant and then places it behind a barrier. Generally, a child younger than 9 months old does not reach around the barrier to retrieve the toy (Figure 5.1). Why not? The biological explanation is that the prefrontal cortex is necessary for responding to a signal that appears and then disappears, and the synapses of the prefrontal cortex develop massively between 7 and 12 months (Goldman-Rakic, 1987). An infant’s behavioral development does not depend entirely on brain growth, of course; infants learn just as adults do. Furthermore, as we shall see, many processes of brain development depend on experience in complex ways that blur the distinction between learning and maturation. In this module, we consider how neurons develop, how their axons connect, and how experience modifies development.
Growth and Differentiation of the Vertebrate Brain

The human central nervous system begins to form when the embryo is about 2 weeks old. The dorsal surface thickens and then long thin lips rise, curl, and merge, forming a neural tube surrounding a fluid-filled cavity (Figure 5.2). As the tube sinks under the surface of the skin, the forward end enlarges and differentiates into the hindbrain, midbrain, and forebrain (Figure 5.3); the rest becomes the spinal cord. The fluid-filled cavity within the neural tube becomes the central canal of the spinal cord and the four ventricles of the brain; the fluid is the cerebrospinal fluid (CSF). At birth, the average human brain weighs about 350 grams. By the end of the first year, it weighs 1000 g, close to the adult weight of 1200 to 1400 g.
Figure 5.1 Piaget’s object permanence task An infant sees a toy and then an investigator places a barrier in front of the toy. Infants younger than about 9 months old fail to reach for the hidden toy. Tasks that require a response to a stimulus that is no longer present depend on the prefrontal cortex, a structure that is slow to mature.
Growth and Development of Neurons

The development of the nervous system naturally requires the production and alteration of neurons. Neuroscientists distinguish these processes in the development of neurons: proliferation, migration, differentiation, myelination, and synaptogenesis.

Proliferation is the production of new cells. Early in development, the cells lining the ventricles of the brain divide. Some cells remain where they are (as stem cells), continuing to divide and redivide. Others become primitive neurons and glia that begin migrating to other locations. The developmental process is about the same in all vertebrates, varying in two factors: how long the cell proliferation lasts in days and the number of new neurons produced per day. For example, the main differences between human brains and chimpanzee brains are due to the fact that neurons continue proliferating longer in humans (Rakic, 1998; Vrba, 1998). Evidently, a small genetic change can produce a major difference in outcome.

Figure 5.2 Early development of the human central nervous system The brain and spinal cord begin as folding lips surrounding a fluid-filled canal. The stages shown occur at approximately age 2 to 3 weeks.

Figure 5.3 Human brain at four stages of development (3 weeks, 7 weeks, 11 weeks, and birth) Chemical processes develop the brain to an amazing degree even before the start of any experience with the world. Detailed changes in development continue to occur throughout life.

After cells have differentiated as neurons or glia, they migrate (move) toward their eventual destinations in the brain. Different kinds of cells originate in different locations at different times, and each must migrate substantial distances, following specific chemical paths, to reach its final destination (Marín & Rubenstein, 2001). Some move radially from the inside of the brain to the outside; others move tangentially along the surface of the brain; and some move tangentially and then radially (Nadarajah & Parnavelas, 2002). Chemicals in families known as immunoglobulins and chemokines help to guide neuron migration. A deficit in these chemicals can lead to impaired migration, decreased brain size, decreased axon growth, and mental retardation (Berger-Sweeney & Hohmann, 1997; Crossin & Krushel, 2000; Tran & Miller, 2003). On the other extreme, excesses of immunoglobulins have been linked to some cases of schizophrenia (Crossin & Krushel, 2000; Poltorak et al., 1997). The brain has many kinds of immunoglobulins and chemokines, presumably reflecting the complexity of brain development. The existence of so many chemicals implies that brain development can go wrong in many ways, but it also implies that if one system fails, another one can partly compensate.

At first, a primitive neuron looks like any other cell. Gradually, the neuron differentiates, forming the axon and dendrites that provide its distinctive shape. The axon grows first, sometimes while the neuron is migrating. In such cases, the neuron tows its growing axon along like a tail (Gilmour, Knaut, Maischein, & Nüsslein-Volhard, 2004), allowing its tip to remain at or near its target. In other cases, the axon needs to grow toward its target, requiring it to find its way through what would seem like a jungle of other cells and fibers. After the migrating neuron reaches its final location, dendrites begin to form, slowly at first.

Neurons in different parts of the brain differ from one another in their shapes and chemical components. When and how does a neuron “decide” which kind of neuron it is going to be? Evidently, it is not a sudden all-or-none decision. In some cases, immature neurons experimentally transplanted from one part of the developing cortex to another develop the properties characteristic of their new location (S. K. McConnell, 1992). However, immature neurons transplanted at a slightly later stage develop some new properties while retaining some old ones (Cohen-Tannoudji, Babinet, & Wassef, 1994). The result resembles the speech of immigrant children: Those who enter a country when very young master the correct pronunciation, whereas slightly older children retain an accent.

In one fascinating experiment, researchers explored what would happen to the immature auditory portions of the brain if they received input from the eyes instead of the ears. Ferrets, mammals in the weasel family, are born so immature that their optic nerves (from the eyes) have not yet reached the thalamus. On one side of the brain, researchers damaged the superior colliculus and the occipital cortex, the two main targets for the optic nerves. On that side, they also damaged the inferior colliculus, a major source of auditory input. Therefore, the optic nerve, unable to attach to its usual target, attached to the auditory area of the thalamus, which lacked its usual input. The result was that the parts of the thalamus and cortex that usually receive input from the ears now received input only from the eyes. Which would you guess happened?

The result, surprising to many, was this: What would have been auditory thalamus and cortex reorganized, developing some (but not all) of the characteristic appearance of a visual cortex (Sharma, Angelucci, & Sur, 2000). But how do we know whether the animals treated that activity as vision? Remember that the researchers performed these procedures on one side of the brain. They left the other side intact. The researchers presented stimuli to the normal side of the brain and trained the ferrets to turn one direction when they heard something and the other direction when they saw a light, as shown in Figure 5.4. After the ferrets learned this task well, the researchers presented a light that the rewired side could see. The result: The ferrets turned the way they had been taught to turn when they saw something. In short, the rewired temporal cortex, receiving input from the optic nerve, produced visual responses (von Melchner, Pallas, & Sur, 2000).

Figure 5.4 Behavior of a ferret with rewired temporal cortex First the normal (right) hemisphere is trained to respond to a red light by turning to the right. Then the rewired (left) hemisphere is tested with a red light. The fact that the ferret turns to the right indicates that it regards the stimulus as light, not sound. (Panels: Initial training: the ferret with the rewired left hemisphere learns to turn left when it hears a tone and to turn right when it sees a red light flashed briefly in the left visual field, stimulating the normally wired right hemisphere. Test: the red light is flashed so that the left, rewired hemisphere sees it. Result: the ferret turns right.)

A later and slower stage of neuronal development is myelination, the process by which glia produce the insulating fatty sheaths that accelerate transmission in many vertebrate axons. Myelin forms first in the spinal cord and then in the hindbrain, midbrain, and forebrain. Unlike the rapid proliferation and migration of
neurons, myelination continues gradually for decades (Benes, Turtle, Khan, & Farol, 1994). The final stage, synaptogenesis, or the formation of synapses, continues throughout life. Neurons are constantly forming new synapses and discarding old ones. This process does, however, slow down in most older people, as does the formation of new dendritic branches (Buell & Coleman, 1981; Jacobs & Scheibel, 1993).
New Neurons Later in Life Can the adult vertebrate brain generate any new neurons? The traditional belief, dating back to the work of Cajal in the late 1800s, was that vertebrate brains formed all their neurons during embryological development or during early infancy at the latest. Beyond that point, the brain could only lose neurons, never gain. Gradually, researchers found exceptions. The first were the olfactory receptors, which, because they are exposed to the outside world and its toxic chemicals, usually survive only a month or two. Certain neurons in the nose remain immature throughout life. Periodically, they divide, with one cell remaining immature while the other develops to replace a dying olfactory receptor. It grows its axon back to the appropriate site in the brain (Gogos, Osborne, Nemes, Mendelsohn, & Axel, 2000; Graziadei & deHan, 1973). Later researchers also found a population of undifferentiated cells, called stem cells, in the interior of the brain that sometimes generate “daughter” cells that migrate to the olfactory bulb and transform into glia cells or neurons (Gage, 2000). Still later researchers found evidence of new neuron formation in other brain areas. For example, songbirds have an area in their brain necessary for singing, and in this area, they have a steady replacement of a few kinds of neurons. Old neurons die and new ones take their place (Nottebohm, 2002). The black-capped chickadee, a small North American bird, hides seeds during the late summer and early fall and then finds them during the winter. It grows new neurons in its hippocampus (a brain area important for spatial memory) during the late summer (Smulders, Shiflett, Sperling, & deVoogd, 2000). Stem cells can also differentiate into new neurons in the adult hippocampus of mammals (Song, Stevens, & Gage, 2002; van Praag et al., 2002). Newly formed hippocampal neurons go through a period of actively forming and altering synapses, so they are likely to be important contributors to new learning (SchmidtHieber, Jonas, & Bischofberger, 2004). So far, most of this research has used rodents; whether humans and other primates also form new neurons in adulthood remains controversial (Eriksson et al., 1998; Gould, Reeves, Graziano, & Gross, 1999; Rakic, 2002). Except for olfactory neurons and the hippocampus, new neurons apparently do not form in other parts
of the adult mammalian brain. Why not? Apparently, stem cells are available elsewhere; the problem is that in nearly all mature brain areas, neurons are already covered with synapses and have no availability for new neurons to establish new synapses (Rakic, 2004).
STOP & CHECK
1. Which develops first, a neuron’s axon or its dendrites?
2. In the ferret study, how did the experimenters determine that visual input to the auditory portions of the brain actually produced a visual sensation?
3. In which brain areas do new neurons form in adults?
Check your answers on page 136.
Pathfinding by Axons

If you asked someone to run a cable from your desk to another desk in the same room, you wouldn’t have to give detailed directions. But imagine asking someone to run a cable to somewhere on the other side of the country. You would have to give detailed instructions about how to find the right city, the right building, and eventually, the right desk. The developing nervous system faces a similar challenge because it sends some of its axons over great distances. How do they find their way?
Chemical Pathfinding by Axons

A famous biologist, Paul Weiss (1924), conducted an experiment in which he grafted an extra leg to a salamander and then waited for axons to grow into it. (Unlike mammals, salamanders and other amphibians can accept transplants of extra limbs and generate new axon branches to the extra limbs. Research sometimes requires finding the right species for a given study.) After the axons reached the muscles, the extra leg moved in synchrony with the normal leg next to it. Weiss dismissed as unbelievable the idea that each axon had developed a branch that found its way to exactly the correct muscle in the extra limb. He suggested instead that the nerves attached to muscles at random and then sent a variety of messages, each one tuned to a different muscle. In other words, it did not matter which axon was attached to which muscle. The muscles were like radios, each tuned to a different station: Each muscle received many signals but responded to only one.
Specificity of Axon Connections

Weiss was mistaken. Later evidence supported the interpretation he had rejected: The salamander’s extra
Figure 5.5 Connections from eye to brain in a frog The optic tectum is a large structure in fish, amphibians, reptiles, and birds. Its location corresponds to the midbrain of mammals, but its function is more elaborate, analogous to what the cerebral cortex does in mammals. Note: Connections from eye to brain are different in humans, as described in Chapter 14. (Source: After Romer, 1962)
leg moved in synchrony with its neighbor because each axon had found exactly the correct muscle. Since the time of Weiss’s work, most of the research on axon growth has dealt with how sensory axons find their way to the correct targets in the brain. (The issues are the same as those for axons finding their way to muscles.) In one study, Roger Sperry, a former student of Weiss, cut the optic nerves of some newts. (See photo and quote on the pages inside the back cover.) The damaged optic nerve grew back and connected with the tectum, which is the main visual area of fish, amphibians, reptiles, and birds (Figure 5.5). Sperry found that when the new synapses formed, the newt regained normal vision. Then Sperry (1943) repeated the experiment, but this time, after he cut the optic nerve, he rotated the eye by 180 degrees. When the axons grew back to the tectum, which targets would they contact? Sperry found that the axons from what had originally been the dorsal portion of the retina (which was now ventral) grew back to the area responsible for vision in the dorsal retina. Axons from what had once been the ventral retina (now dorsal) also grew back to their original targets. The newt now saw the world upside down and backward, responding to stimuli in the sky as if they were on the ground and to stimuli on the left as if they were on the right (Figure 5.6). Each axon regenerated to the area of the tectum where it had originally been, presumably by following a chemical trail.
Chemical Gradients

The next question was: How specific is the axon’s aim? Must an axon from the retina find the tectal cell with exactly the right chemical marker on its surface, like a key finding the right lock? Does the body have to synthesize a separate chemical marker for each of the billions of axons in the nervous system?
No. The current estimate is that humans have only about 30,000 genes total—far too few to mark each of the brain’s many billions of neurons. Nevertheless, nearly all axons grow to almost exactly their correct targets (Kozloski, Hamzei-Sichani, & Yuste, 2001). Without an extensive system of individual markers, neurons still manage to find their way. But how? A growing axon follows a path of cell-surface molecules, attracted by some chemicals and repelled by others, in a process that steers the axon in the correct direction (Yu & Bargmann, 2001). Some axons follow a trail based on one attractive chemical until they reach an intermediate location where they become insensitive to that chemical and start following a different attractant (Shirasaki, Katsumata, & Murakami, 1998; H. Wang & Tessier-Lavigne, 1999). Eventually, axons sort themselves over the surface of their target area by following a gradient of chemicals. For example, one chemical in the amphibian tectum is a protein known as TOPDV (TOP for topography; DV for dorsoventral). This protein is 30 times more concentrated in the axons of the dorsal retina than of the ventral retina and 10 times more concentrated in the ventral tectum than in the dorsal tectum. As axons from the retina grow toward the tectum, the retinal axons with the greatest concentration of TOPDV connect to the tectal cells with the highest concentration of that chemical; the axons with the lowest concentration connect to the tectal cells with the lowest concentration. A similar gradient of another protein aligns the axons along the anterior–posterior axis (J. R. Sanes, 1993) (Figure 5.7). (By analogy, you could think of men lining up from tallest to shortest, pairing up with women who lined up from tallest to shortest, so the tallest man dated the tallest woman and so forth.)
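A toy version of gradient matching makes the idea concrete: if both the growing axons and the target cells carry graded amounts of a marker, pairing them by rank reproduces the orderly map. The marker values and names below are invented for illustration; they are not measurements of TOPDV.

```python
# Minimal sketch of gradient matching as described above: axons and target
# cells each carry a concentration of a marker, and axons pair with targets
# of matching rank. All numbers and labels are invented.
retinal_axons = {"axon_A": 0.9, "axon_B": 0.1, "axon_C": 0.5}   # marker level per axon
tectal_cells = {"cell_1": 0.2, "cell_2": 1.0, "cell_3": 0.6}    # marker level per cell

# Sort both populations by marker concentration and pair them in order,
# so the richest axon connects to the richest target cell, and so on.
axons_sorted = sorted(retinal_axons, key=retinal_axons.get, reverse=True)
cells_sorted = sorted(tectal_cells, key=tectal_cells.get, reverse=True)

for axon, cell in zip(axons_sorted, cells_sorted):
    print(f"{axon} (marker {retinal_axons[axon]:.1f}) -> {cell} (marker {tectal_cells[cell]:.1f})")
```

Note that the matching needs only a relative gradient, not a unique chemical label for every cell, which is the point the paragraph above makes about the limited number of genes.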
Figure 5.6 Summary of Sperry’s experiment on nerve connections in newts After he cut the optic nerve and inverted the eye, the optic nerve axons grew back to their original targets, not to the targets corresponding to the eye’s current position.
STOP & CHECK
4. What was Sperry’s evidence that axons grow to a specific target presumably by following a chemical gradient instead of attaching at random?
5. If all cells in the tectum of an amphibian produced the same amount of TOPDV, what would be the effect on the attachment of axons?
Check your answers on page 136.
Competition Among Axons as a General Principle

As you might guess from the experiments just described, when axons initially reach their targets, each one forms synapses onto several cells in approximately
Figure 5.7 Retinal axons match up with neurons in the tectum by following two gradients The protein TOPDV is concentrated mostly in the dorsal retina and the ventral tectum. Axons rich in TOPDV attach to tectal neurons that are also rich in that chemical. Similarly, a second protein directs axons from the posterior retina to the anterior portion of the tectum.
the correct location, and each target cell receives synapses from a large number of axons. At first, axons make “trial” connections with many postsynaptic cells, and then the postsynaptic cells strengthen some synapses and eliminate others (Hua & Smith, 2004). Even at the earliest stages, this fine-tuning depends on the pattern of input from incoming axons (Catalano & Shatz, 1998). For example, one part of the thalamus receives input from many retinal axons. During embryological development, long before the first exposure to light, repeated waves of spontaneous activity sweep over the retina from one side to the other. Consequently, axons from adjacent areas of the retina send almost simultaneous messages to the thalamus. Each thalamic neuron selects a group of axons that are simultaneously active; in this way, it finds a group of receptors from adjacent regions of the retina (Meister, Wong, Baylor, & Shatz, 1991). It then rejects synapses from other locations. To some theorists, these results suggest a general principle, called neural Darwinism (Edelman, 1987). In Darwinian evolution, gene mutations and recombinations produce variations in individuals’ appearance and actions; natural selection favors some variations and weeds out the rest. Similarly, in the development of the nervous system, we start with more neurons and synapses than we keep. Synapses form haphazardly, and then a selection process keeps some and rejects others. In this manner, the most successful axons and combinations survive; the others fail to sustain active synapses. The principle of competition among axons is an important one, although we should use the analogy with Darwinian evolution cautiously. Mutations in the genes are random events, but neurotrophins steer new axonal branches and synapses in the right direction.
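The selection rule described above (keep the axons whose activity is correlated) can be sketched in a few lines. The activity traces below are simulated, a shared sine wave plus noise, and the correlation threshold is arbitrary; this is only an illustration of the principle, not a model from the cited studies.

```python
import numpy as np

# Minimal sketch of activity-dependent selection: a target neuron keeps the
# incoming axons whose spontaneous activity is most strongly correlated with
# each other, as if they came from neighboring retinal points. Simulated data.
rng = np.random.default_rng(3)
wave = np.sin(np.linspace(0, 6 * np.pi, 200))          # shared retinal wave
axons = np.array([
    wave + rng.normal(0, 0.3, 200),                    # axon 0: follows the wave
    wave + rng.normal(0, 0.3, 200),                    # axon 1: follows the wave
    rng.normal(0, 1.0, 200),                           # axon 2: uncorrelated activity
])

# Score each axon by its average correlation with the other axons and keep
# the ones above threshold; the uncorrelated axon is the one to be pruned.
corr = np.corrcoef(axons)
score = (corr.sum(axis=1) - 1) / (len(axons) - 1)      # exclude self-correlation
kept = [i for i, s in enumerate(score) if s > 0.3]
print("mean correlation per axon:", np.round(score, 2), "-> keep axons", kept)
```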
STOP & CHECK
6. If axons from the retina were prevented from showing spontaneous activity during early development, what would be the probable effect on development of the lateral geniculate? Check your answer on page 136.
Determinants of Neuronal Survival

Getting just the right number of neurons for each area of the nervous system is more complicated than it might seem. To be of any use, each neuron must receive
axons from the right source and send its own axons to a cell in the right area. The various areas do not all develop at the same time, so in many cases, the neurons in an area develop before any incoming axons have arrived and before any receptive sites are available for its own axons. If we examine a healthy adult nervous system, we find no leftover neurons that failed to make appropriate connections. How does the nervous system get the numbers to come out right? Consider a specific example. The sympathetic nervous system sends axons to muscles and organs; each ganglion has enough axons to supply the muscles and organs in its area, with no axons left over. How does the match come out so exact? Long ago, one explanation was that the muscles sent chemical messages to tell the sympathetic ganglion how many neurons to form. Rita Levi-Montalcini was largely responsible for disconfirming this hypothesis. (See photo and quote on the pages inside the back cover.) Levi-Montalcini’s early life would seem most unfavorable for a scientific career. She was a young Italian Jewish woman during the Nazi era. World War II was destroying the Italian economy, and almost everyone discouraged women from scientific or medical careers. Furthermore, the research projects assigned to her as a young medical student were virtually impossible, as she described in her autobiography (LeviMontalcini, 1988). Nevertheless, she developed a love for research and eventually discovered that the muscles do not determine how many axons form; they determine how many survive. Initially, the sympathetic nervous system forms far more neurons than it needs. When one of its neurons forms a synapse onto a muscle, that muscle delivers a protein called nerve growth factor (NGF) that promotes the survival and growth of the axon (Levi-Montalcini, 1987). An axon that does not receive NGF degenerates, and its cell body dies. That is, each neuron starts life with a “suicide program”: If its axon does not make contact with an appropriate postsynaptic cell by a certain age, the neuron kills itself through a process called apoptosis,1 a programmed mechanism of cell death. (Apoptosis is distinct from necrosis, which is death caused by an injury or a toxic substance.) NGF cancels the program for apoptosis; it is the postsynaptic cell’s way of telling the incoming axon, “I’ll be your partner. Don’t kill yourself.” The brain’s system of overproducing neurons and then applying apoptosis enables the CNS to match the number of incoming axons to the number of receiving 1Apoptosis is based on the Greek root ptosis (meaning “dropping”), which is pronounced TOE-sis. Therefore, most scholars insist that the second p in apoptosis should be silent, a-po-TOE-sis. Others argue that helicopter is also derived from a root with a silent p (pteron), but we pronounce the p in helicopter, so we should also pronounce the p in apoptosis. Be prepared for either pronunciation.
cells. For example, when the sympathetic nervous system begins sending its axons toward the muscles and glands, it has no way to know the exact size of the muscles or glands. Therefore, it makes more neurons than necessary and later discards the excess. Nerve growth factor is a neurotrophin, a chemical that promotes the survival and activity of neurons. (The word trophin derives from a Greek word for “nourishment.”) In addition to NGF, the nervous system responds to brain-derived neurotrophic factor (BDNF) and several other neurotrophins (Airaksinen & Saarma, 2002). BDNF is the most abundant neurotrophin in the adult mammalian cerebral cortex. For an immature neuron to avoid apoptosis and survive, it needs to receive neurotrophins not only from its target cells but also from incoming axons bringing stimulation. In one study, researchers examined mice with a genetic defect that prevented all release of neurotransmitters. The brains initially assembled normal anatomies, but neurons then started dying rapidly (Verhage et al., 2000). When neurons release neurotransmitters, they simultaneously release neurotrophins, and neurons that fail to release the neurotransmitters withhold the neurotrophins as well, so their target cells die (Poo, 2001). All areas of the developing nervous system initially make far more neurons than will survive into adulthood. Each brain area has a period of massive cell
death, becoming littered with dead and dying cells (Figure 5.8). This loss of cells does not indicate that something is wrong; it is a natural part of development (Finlay & Pallas, 1989). In fact, loss of cells in a particular brain area can indicate development and maturation. For example, late teenagers have substantial cell loss in parts of the prefrontal cortex, while showing increased neuronal activity in those areas (Sowell, Thompson, Holmes, Jernigan, & Toga, 1999) and sharp improvements in the kinds of memory that depend on those areas (D. A. Lewis, 1997). Evidently, maturation of appropriate cells is linked to simultaneous loss of less successful ones.

After maturity, the apoptotic mechanisms become dormant, except under traumatic conditions such as stroke, and neurons no longer need neurotrophins for survival (Benn & Woolf, 2004; G. S. Walsh, Orike, Kaplan, & Miller, 2004). Although adults no longer need neurotrophins for neuron survival, they do use these chemicals for other functions. Neurotrophins increase the branching of both axons and dendrites throughout life (Baquet, Gorski, & Jones, 2004; Kesslak, So, Choi, Cotman, & Gomez-Pinilla, 1998; Kolb, Côté, Ribeiro-da-Silva, & Cuello, 1997). With a deficiency of neurotrophins, cortical neurons and especially their dendrites shrink (Gorski, Zeiler, Tamowski, & Jones, 2003). Deficits in adult neurotrophins have been linked to several brain diseases (Baquet et al., 2004; Benn & Woolf, 2004) and possibly to depression, as we shall discuss in Chapter 15.
STOP & CHECK

7. What process enables the nervous system to have only as many axons as necessary to innervate the target neurons?
8. What class of chemicals prevents apoptosis?
9. At what age does a person have the greatest number of neurons—as an embryo, newborn, child, adolescent, or adult?
Check your answers on page 136.

Figure 5.8 Cell loss during development of the nervous system
The graph shows the number of motor neurons in the ventral spinal cord of human fetuses. Note that the number of motor neurons is highest at 11 weeks and drops steadily until about 25 weeks, the age when motor neuron axons make synapses with muscles. Axons that fail to make synapses die. (Source: From N. G. Forger and S. M. Breedlove, "Motoneuronal death in the human fetus." Journal of Comparative Neurology, 264, 1987, 118–122. Copyright © 1987 Alan R. Liss, Inc. Reprinted by permission of N. G. Forger)
The Vulnerable Developing Brain

According to Lewis Wolpert (1991, p. 12), "It is not birth, marriage, or death, but gastrulation, which is truly the important time of your life." (Gastrulation is one of the early stages of embryological development.) Wolpert's point was that if you mess up in early development, you will have problems from then on. Actually, if you mess up badly during gastrulation, your life is over. In brain development especially, the early stages are critical.

Because development is such a precarious matter, chemical distortions that the mature brain could tolerate are more disruptive in early life. The developing brain is highly vulnerable to malnutrition, toxic chemicals, and infections that would produce only brief or mild interference at later ages. For example, impaired thyroid function produces temporary lethargy in adults but mental retardation in infants. (Thyroid deficiency was common in the past because of iodine deficiencies; it is rare today because table salt is almost always fortified with iodine.) A fever is a mere annoyance to an adult, but it impairs neuron proliferation in a fetus (Laburn, 1996). Low blood glucose in an adult leads to a lack of pep, whereas before birth, it seriously impairs brain development (C. A. Nelson et al., 2000).

The infant brain is highly vulnerable to damage by alcohol. Children of mothers who drink heavily during pregnancy are born with fetal alcohol syndrome, a condition marked by hyperactivity, impulsiveness, difficulty maintaining attention, varying degrees of mental retardation, motor problems, heart defects, and facial abnormalities (Figure 5.9). Most dendrites are short with few branches. Rats exposed to alcohol during fetal development show some of these same deficits, including attention deficits (Hausknecht et al., 2005). When children with fetal alcohol syndrome reach adulthood, they have a high risk of alcoholism, drug dependence, depression, and other psychiatric disorders (Famy, Streissguth, & Unis, 1998). Even in children of normal appearance, impulsive behavior and poor school performance correlate with the mother's alcohol consumption during pregnancy (Hunt, Streissguth, Kerr, & Carmichael-Olson, 1995). The mechanism of fetal alcohol syndrome probably relates to apoptosis: Remember that to prevent apoptosis, a neuron must receive neurotrophins from the incoming axons as well as from its own axon's target cell. Alcohol suppresses the release of glutamate, the brain's main excitatory transmitter, and enhances activity of GABA, the main inhibitory transmitter. Consequently, many neurons receive less excitation and neurotrophins than normal, and they undergo apoptosis (Ikonomidou et al., 2000).

Figure 5.9 Child with fetal alcohol syndrome
Note the facial pattern. Many children exposed to smaller amounts of alcohol before birth have behavioral deficits without facial signs.

Prenatal exposure to other substances can be harmful too. Children of mothers who use cocaine during pregnancy have a decrease in language skills compared to other children, a slight decrease in IQ scores, and impaired hearing (Fried, Watkinson, & Gray, 2003; Lester, LaGasse, & Seifer, 1998). Related deficits in neural plasticity have been found in rats (Kolb, Gorny, Li, Samaha, & Robinson, 2003). Children of mothers who smoked during pregnancy are at much increased risk of the following (Brennan, Grekin, & Mednick, 1999; Fergusson, Woodward, & Horwood, 1998; Finette, O'Neill, Vacek, & Albertini, 1998; Milberger, Biederman, Faraone, Chen, & Jones, 1996; Slotkin, 1998):

• Low weight at birth and many illnesses early in life
• Sudden infant death syndrome ("crib death")
• Long-term intellectual deficits
• Attention-deficit hyperactivity disorder (ADHD)
• Impairments of the immune system
• Delinquency and crime later in life (sons especially)

The overall message obviously is that pregnant women should minimize their use of drugs, even legal ones.

Finally, the immature brain is highly responsive to influences from the mother. For example, if a mother rat is exposed to stressful experiences, she becomes more fearful, she spends less than the usual amount of time licking and grooming her offspring, and her offspring become permanently more fearful in a variety of situations (Cameron et al., 2005). Analogously, the children of impoverished and abused women have, on the average, increased problems in both their academic and social life. The mechanisms in humans are not exactly the same as those in rats, but the overall principles are similar: Stress to the mother changes her behavior in ways that change her offspring's behavior.
STOP & CHECK

10. Anesthetic drugs increase inhibition of neurons, blocking most action potentials. Why would we predict that prolonged exposure to anesthetics would be dangerous to the brain of a fetus?
Check your answer on page 136.
Fine-Tuning by Experience

The blueprints for a house determine its overall plan, but because architects can't anticipate every detail, construction workers sometimes have to improvise. The same is true, only more so, for your nervous system. Because of the unpredictability of life, our brains have evolved the ability to remodel themselves (within limits) in response to our experience (Shatz, 1992).
Experience and Dendritic Branching

Decades ago, researchers believed that neurons never changed their shape. We now know that axons and dendrites continue to modify their structure throughout life. Dale Purves and R. D. Hadley (1985) developed a method of injecting a dye that enabled them to examine the structure of a living neuron at different times, days to weeks apart. They demonstrated that some dendritic branches extended between one viewing and another, whereas others retracted or disappeared (Figure 5.10). Later research found that when dendrites grow new spines, some spines last only days, whereas others last longer, perhaps forever (Trachtenberg et al., 2002). The gain or loss of spines means a gain or loss of synapses, and therefore potential information processing. As animals grow older, they continue altering the anatomy of their neurons but at progressively slower rates (Gan, Kwon, Feng, Sanes, & Lichtman, 2003; Grutzendler, Kasthuri, & Gan, 2002).

Experiences guide the amount and type of change in the structure of neurons. Let's start with a simple example. If you live in a complex and challenging environment, you need an elaborate nervous system. Ordinarily, a laboratory rat lives by itself in a small gray cage. Imagine by contrast ten rats living together in a larger cage with a few little pieces of junk to explore or play with. Researchers sometimes call this an enriched environment, but it is enriched only in contrast to the deprived experience of a typical rat cage. A rat in the more stimulating environment develops a thicker cortex, more dendritic branching, and improved performance on many tests of learning (Greenough, 1975; Rosenzweig & Bennett, 1996). Many of its neurons
become more finely tuned. For example, for a rat living in an isolated cage, each cell in part of its somatosensory cortex responds to a group of whiskers. For a rat in the group cage, each cell becomes more discriminating and responds to a smaller number of whiskers (Polley, Kvašňák, & Frostig, 2004). At the opposite extreme, rats that have little opportunity to use their whiskers early in life (because the experimenters keep trimming them) show less than normal brain response to whisker stimulation (Rema, Armstrong-James, & Ebner, 2003). An enriched environment enhances sprouting of axons and dendrites in a wide variety of other species also (Coss, Brandon, & Globus, 1980) (Figure 5.11). However, although an "enriched" environment promotes more neuronal development than a deprived environment, we have no evidence that a "superenriched" environment would be even better.

Figure 5.11 Effect of a stimulating environment on neuronal branching
(a) A jewel fish reared in isolation develops neurons with fewer branches. (b) A fish reared with others has more neuronal branches.

We might suppose that the neuronal changes in an enriched environment depend on new and interesting experiences, and no doubt, many of them do. For example, after practice of particular skills, the connections relevant to those skills proliferate, while other connections retract. Nevertheless, much of the enhancement seen in the overall "enriched environment" is due to the simple fact that rats in a group cage are more active. Using a running wheel also enhances growth of axons and dendrites, even for rats in an isolated cage (Rhodes et al., 2003; van Praag, Kempermann, & Gage, 1999). The exercise does not even have to be strenuous (Trejo, Carro, & Torres-Alemán, 2001; van Praag, Kempermann, & Gage, 2000). Measurable expansion of neurons has also been demonstrated in humans as a function of physical activity—such as daily practice of juggling balls, in one case (Draganski et al., 2004). The advice to exercise for your brain's sake is particularly important for older people. On the average, the thickness of the cerebral cortex declines in old age but much less in people who remain physically active (Colcombe et al., 2003).

People with extensive academic education also tend to have longer and more widely branched dendrites than people with less formal education (B. Jacobs, Schall, & Scheibel, 1993). It is tempting to assume that the increased learning led to brain changes, but we cannot draw cause-and-effect conclusions from correlational studies. Perhaps people who already had wider dendrites succeeded more in school and therefore stayed longer. Or of course, both explanations could be correct.

Effects of Special Experiences

The detailed structure and function of the brain can be modified by experiences. For example, neurons become more responsive and more finely tuned to stimuli that have been important or meaningful in the past (e.g., Fritz, Shamma, Elhilali, & Klein, 2003; L. I. Zhang, Bao, & Merzenich, 2001). How much plasticity might occur after experiences that are far different from the average?

Brain Adaptations in People Blind Since Infancy

One way to ask this question is to consider what happens to the brain if one sensory system is nearly or entirely absent. Recall the experiment on ferrets, in which axons of the visual system, unable to contact their normal targets, attached instead to the brain areas usually devoted to hearing and managed to convert them into more-or-less satisfactory visual areas (p. 124). Might anything similar happen in the brains of people born deaf or blind?

You will hear many people say that blind people become better than usual at touch and hearing, or that deaf people become especially acute at touch and vision. Those statements are true in a way, but we need to be more specific. Losing one sense does not affect the receptors of other sense organs. For example, being blind does not change the touch receptors in the fingers. However, losing one sense does increase attention to other senses, and after many years of altered attention, the brain shows measurable adaptations to reflect and increase that attention.

In several studies, investigators asked both sighted people and people blind since infancy to feel Braille letters, other kinds of symbols, or objects such as grooves in a board and say whether two items are same or different. On the average, blind people performed more accurately, to no one's surprise. What was more surprising was that PET and fMRI scans indicated substantial activity in the occipital cortex of the blind people while they performed these tasks (Burton et al., 2002; Sadato et al., 1996, 1998). Evidently, touch information had "invaded" this cortical area, which is ordinarily devoted to vision alone.

To double-check this conclusion, researchers asked blind and sighted people to perform the same kind of task during temporary inactivation of the occipital cortex. Remember from Chapter 4 (p. 110) that intense magnetic stimulation on the scalp can temporarily inactivate neurons beneath the magnet. Applying this procedure to the occipital cortex of blind people interferes with their ability to identify Braille symbols or to notice the difference between one tactile stimulus and another. The same procedure does not impair any aspect of touch for sighted people. In short, blind people use the occipital cortex to help identify what they feel; sighted people do not (L. G. Cohen et al., 1997). One conclusion is that touch information invades the occipital cortex of blind people. A broader conclusion is that brain areas modify their function based on the kinds of stimuli they receive.

On the average, blind people also outperform sighted people on many verbal skills. (If you can't see, you pay more attention to what you hear, including words.) One example is the task, "When you hear the name of an object (e.g., apple), say as quickly as possible the name of an appropriate action for that object (e.g., eat)." Again, performing this task activates parts of the occipital cortex in blind people but not in sighted people. Furthermore, the amount of activity in the occipital cortex (for blind people) correlates with their performance on the task (Amedi, Raz, Pianka, Malack, & Zohary, 2003). Also, inactivating the occipital cortex by intense transcranial magnetic stimulation interferes with verbal performance by blind people but not by sighted people (Amedi, Floel, Knecht, Zohary, & Cohen, 2004). So the occipital cortex of blind people serves certain verbal functions as well as touch.

As the occipital cortex increases its response to touch, sound, and verbal stimuli, does it decrease its response to visual stimuli? Applying brief transcranial magnetic stimulation (enough to stimulate, not enough to inactivate) over the occipital cortex causes sighted people to report seeing flashes of light. When the same procedure is applied to people who completely lost their sight because of eye injuries more than 10 years ago, most report seeing nothing or seeing flashes only rarely after stimulation in certain locations (Gothe et al., 2002). Note that for this experiment it is essential to test people who once had normal vision and then lost it because we can ask them whether they see anything in response to the magnetic stimulation. Someone blind since birth presumably would not understand the question.
Effects of Prolonged Practice

Cognitive psychologists have found that extensive practice of a skill, such as playing chess or working crossword puzzles, makes someone ever more adept at that skill but not necessarily at anything else (Ericsson & Charness, 1994). Presumably, developing expertise changes the brain in ways that improve the abilities required for a particular skill. In a few cases, researchers have identified some of the relevant brain changes.

One study used magnetoencephalography (MEG; see p. 108) to record responses of the auditory cortex to pure tones. The responses in professional musicians were about twice as strong as those for nonmusicians. Then an examination of their brains, using MRI, found that one area of the temporal cortex in the right hemisphere was about 30% larger in the professional musicians (Schneider et al., 2002). Of course, we do not know whether their brains were equal before they started practicing; perhaps those with certain predispositions are more likely than others to develop a passion for music.

Another study used a type of MRI scan to compare the entire brains of professional keyboard players, amateur keyboard players, and nonmusicians. Several areas showed that gray matter was thicker in the professionals than in the amateurs, and thicker in the amateurs than the nonmusicians, including the structures highlighted in Figure 5.12 (Gaser & Schlaug, 2003). The most strongly affected areas related to hand control and vision (presumably because of its importance for reading music). A related study on stringed instrument players found that a larger than normal section of the postcentral gyrus was devoted to representing the fingers of the left hand, which they use to control the strings (Elbert, Pantev, Wienbruch, Rockstroh, & Taub, 1995). The area devoted to the left fingers was largest in those who began their music practice early (and therefore also continued for more years). These results imply that practicing a skill reorganizes the brain, within limits, to maximize performance of that skill. Part of the mechanism is that sustained attention to anything releases dopamine, and dopamine acts on the cortex to expand the response to stimuli active during the dopamine release (Bao, Chan, & Merzenich, 2001).

Figure 5.12 Brain correlates of extensive music practice
Areas marked in red showed thicker gray matter among professional keyboard players than in amateurs and thicker gray matter among amateurs than in nonmusicians. Areas marked in yellow showed even stronger differences in that same direction. The labeled areas include the precentral and postcentral gyri (body sensations and motor control, including fingers), the left inferior frontal gyrus, and the inferior visual cortex (vision, such as reading music). (Source: From "Brain structures differ between musicians and nonmusicians," by C. Gaser & G. Schlaug, Journal of Neuroscience, 23. Copyright 2003 by the Society for Neuroscience. Reprinted with permission.)

When Brain Reorganization Goes Too Far

Ordinarily, the expanded cortical representation of personally important information is beneficial. However, in extreme cases, the reorganization creates problems. As mentioned, when people play string instruments many hours a day for years, the representation of the left hand increases in the somatosensory cortex. Similar processes occur in people who play piano and other instruments.
Imagine the normal representation of the fingers in the cortex:

[Diagram: precentral gyrus (primary motor cortex) and postcentral gyrus (primary somatosensory cortex), with fingers 1–5 each represented by a separate portion of the somatosensory cortex.]

With extensive musical practice, the expanding representations of the fingers might spread out like this:

[Diagram: the same cortical map, with each finger's representation enlarged but still distinct.]

Or the representations could grow from side to side without spreading out so that representation of each finger overlaps that of its neighbor:

[Diagram: the same cortical map, with the representations of neighboring fingers overlapping one another.]
In some cases, the latter process does occur, such that stimulation on one finger excites mostly or entirely the same cortical areas as another finger. Consequently, the person has trouble distinguishing one finger from the other. Someone who can’t clearly feel the difference between two fingers has trouble controlling them separately. This condition is “musician’s cramp”—known more formally as focal hand dystonia—in which the fingers become clumsy, fatigue easily, and make involuntary movements that interfere with the task. This long-lasting condition is a potential career-ender for a serious musician. Some people who spend all day writing develop the same problem, in which case it is known as “writer’s cramp.” Traditionally, physicians assumed that the musician’s cramp or writer’s cramp was an impairment in the hands, but later research indicated that the cause is extensive reorganization of the sensory thalamus and cortex so that touch responses to one finger overlap those of another (Byl, McKenzie, & Nagarajan, 2000; Elbert et al., 1998; Lenz & Byl, 1999; Sanger, Pascual-Leone, Tarsy, & Schlaug, 2001; Sanger, Tarsy, & Pascual-Leone, 2001).
STOP & CHECK

11. An "enriched" environment promotes growth of axons and dendrites. What is known to be one important reason for this action?
12. Name two kinds of evidence indicating that touch information from the fingers "invades" the occipital cortex of people blind since birth.
13. What change in the brain is responsible for musician's cramp?
Check your answers on page 136.
Module 5.1 In Closing: Brain Development

Considering the number of ways in which abnormal genes and chemicals can disrupt brain development, let alone the possible varieties of abnormal experience, it is a wonder that any of us develop normally. Evidently, the system has enough margin for error that we can function even if all of our connections do not develop quite perfectly. There are many ways for development to go wrong, but somehow the system usually manages to work.
Summary

1. In vertebrate embryos, the central nervous system begins as a tube surrounding a fluid-filled cavity. Developing neurons proliferate, migrate, differentiate, myelinate, and generate synapses. (p. 122)
2. Neuron proliferation varies among species mainly by the speed of cell division and the duration of this process in days. Therefore, differences in proliferation probably depend on very few genes. In contrast, migration depends on a large number of chemicals that guide immature neurons to their destinations. Many genes affect this process. (p. 123)
3. Even in adults, new neurons can form in the olfactory system, the hippocampus, and the song-producing brain areas of some bird species. (p. 125)
4. Growing axons manage to find their way close to the right locations by following chemicals. Then they array themselves over a target area by following chemical gradients. (p. 125)
5. After axons reach their targets based on chemical gradients, the postsynaptic cell fine-tunes the connections based on experience, accepting certain combinations of axons and rejecting others. This kind of competition among axons continues throughout life. (p. 128)
6. Initially, the nervous system develops far more neurons than will actually survive. Some axons make synaptic contacts with cells that release to them nerve growth factor or other neurotrophins. The neurons that receive neurotrophins survive; the others die in a process called apoptosis. (p. 128)
7. The developing brain is vulnerable to chemical insult. Many chemicals that produce only mild, temporary problems for adults can permanently impair early brain development. (p. 129)
8. Enriched experience leads to greater branching of axons and dendrites, partly because animals in enriched environments are more active than those in deprived environments. (p. 131)
9. Specialized experiences can alter brain development, especially early in life. For example, in people who are born blind, representation of touch, some aspects of language, and other information invades the occipital cortex, which is usually reserved for vision. (p. 132)
10. Extensive practice of a skill expands the brain's representation of sensory and motor information relevant to that skill. For example, the representation of fingers expands in people who regularly practice musical instruments. (p. 133)
Answers to STOP & CHECK Questions

1. The axon forms first, the dendrites later. (p. 125)
2. They trained the ferrets to respond to stimuli on the normal side, turning one direction in response to sounds and the other direction to lights. Then they presented light to the rewired side and saw that the ferret again turned in the direction it had associated with lights. (p. 125)
3. Olfactory receptors, neurons in the hippocampus, and neurons in the song-producing areas of some bird species (p. 125)
4. Sperry found that if he cut a newt's eye and inverted it, axons grew back to their original targets, even though they were inappropriate to their new position on the eye. (p. 127)
5. Axons would attach haphazardly instead of arranging themselves according to their dorsoventral position on the retina. (p. 127)
6. The axons would attach based on a chemical gradient but could not fine-tune their adjustment based on experience. Therefore, the connections would be less precise. (p. 128)
7. The nervous system builds far more neurons than it needs and discards through apoptosis those that do not make lasting synapses. (p. 129)
8. Neurotrophins, such as nerve growth factor (p. 129)
9. The embryo has the most neurons. (p. 129)
10. Prolonged exposure to anesthetics might produce effects similar to fetal alcohol syndrome. Fetal alcohol syndrome occurs because alcohol increases inhibition and therefore increases apoptosis of developing neurons. (p. 130)
11. Animals in an "enriched" environment are more active, and their exercise enhances growth of axons and dendrites. (p. 135)
12. First, brain scans indicate increased activity in the occipital cortex while blind people perform tasks such as feeling two objects and saying whether they are the same or different. Second, temporary inactivation of the occipital cortex blocks blind people's ability to perform that task, without affecting the ability of sighted people. (p. 135)
13. Extensive practice of violin, piano, or other instruments causes expanded representation of the fingers in the somatosensory cortex. In some cases, the representation of each finger invades the area representing other fingers. If the representation of two fingers overlaps too much, the person cannot feel them separately, and the result is musician's cramp. (p. 135)
Thought Questions

1. Biologists can develop antibodies against nerve growth factor (i.e., molecules that inactivate nerve growth factor). What would happen if someone injected such antibodies into a developing nervous system?
2. Based on material in this chapter, what is one reason a woman should avoid long-lasting anesthesia during delivery of a baby?
3. Decades ago, educators advocated teaching Latin and ancient Greek because the required mental discipline would promote overall intelligence and brain development in general. Occasionally, people today advance the same argument for studying calculus or other subjects. Do these arguments seem valid, considering modern research on expertise and brain development?
Module 5.2
Plasticity After Brain Damage
An American soldier who suffered a wound to the left hemisphere of his brain during the Korean War was at first unable to speak at all. Three months later, he could speak in short fragments. When he was asked to read the letterhead, "New York University College of Medicine," he replied, "Doctors—little doctors." Eight years later, when someone asked him again to read the letterhead, he replied, "Is there a catch? It says, 'New York University College of Medicine'" (Eidelberg & Stein, 1974).

Almost all survivors of brain damage show at least slight behavioral recovery and, in some cases, substantial recovery. Some of the mechanisms rely on the growth of new branches of axons and dendrites, quite similar to the mechanisms of brain development discussed in the first module. Understanding the process may lead to better therapies for people with brain damage and to insights into the functioning of the healthy brain.
Brain Damage and Short-Term Recovery

The possible causes of brain damage include tumors, infections, exposure to radiation or toxic substances, and degenerative conditions such as Parkinson's disease and Alzheimer's disease. In young people, the most common cause is closed head injury, a sharp blow to the head resulting from a fall, an automobile or other accident, an assault, or other sudden trauma that does not actually puncture the brain. The damage occurs partly because of rotational forces that drive brain tissue against the inside of the skull. Closed head injury also produces blood clots that interrupt blood flow to the brain (Kirkpatrick, Smielewski, Czosnyka, Menon, & Pickard, 1995). Many people, probably most, have suffered at least a mild closed head injury. Obviously, repeated or more severe blows increase the risk of long-term problems.
EXTENSIONS AND APPLICATIONS
How Woodpeckers Avoid Concussions

Speaking of blows to the head, have you ever wondered how woodpeckers manage to avoid giving themselves concussions or worse? If you repeatedly banged your head into a tree at 6 or 7 meters per second (about 15 miles per hour), you would almost certainly harm yourself. Using slow-motion photography, researchers found that woodpeckers usually start with a couple of quick preliminary taps against the wood, much like a carpenter lining up a nail with a hammer. Then the birds make a hard strike in a straight line, keeping a rigid neck. The result is a near absence of rotational forces and consequent whiplash. The fact that woodpeckers are so careful to avoid rotating their heads during impact supports the claim that rotational forces are a major factor in closed head injuries (May, Fuster, Haber, & Hirschman, 1979). The researchers suggested that football helmets, racecar helmets, and so forth would give more protection if they extended down to the shoulders to prevent rotation and whiplash. They also suggest that if you see a sudden automobile accident or something similar about to happen, you should tuck your chin to your chest and tighten your neck muscles.
Reducing the Harm from a Stroke

A common cause of brain damage in older people (more rarely in the young) is temporary loss of blood flow to a brain area during a stroke, also known as a cerebrovascular accident. The more common type of stroke is ischemia, the result of a blood clot or other obstruction in an artery; the less common type is hemorrhage, the result of a ruptured artery. Strokes vary in severity from barely noticeable to immediately fatal. Figure 5.13 shows the brains of three people: one who died immediately after a stroke, one who survived long after a stroke, and a bullet wound victim. For a good collection of information about stroke, visit this website: http://www.stroke.org/

Figure 5.13 Three damaged human brains
(a) Brain of a person who died immediately after a stroke. Note the swelling on the right side. (b) Brain of a person who survived for a long time after a stroke. Note the cavities on the left side, where many cells were lost. (c) Brain of a person who suffered a gunshot wound and died immediately.
In ischemia, neurons are deprived of blood and therefore lose much of their oxygen and glucose supplies. In hemorrhage, they are flooded with blood and therefore with excess oxygen, calcium, and other products. Both ischemia and hemorrhage lead to many of the same problems, including edema (the accumulation of fluid), which increases pressure on the brain and increases the probability of additional strokes (Unterberg, Stover, Kress, & Kiening, 2004). Both ischemia and hemorrhage also impair the sodium-potassium pump, leading to an accumulation of sodium inside neurons. The combination of edema and excess sodium provokes excess release of the transmitter glutamate (Rossi, Oshima, & Attwell, 2000), which overstimulates neurons: Sodium and other ions enter the neurons in excessive amounts, faster than the sodium-potassium pump can remove them. The excess positive ions block metabolism in the mitochondria and kill the neurons (Stout, Raphael, Kanterewicz, Klann, & Reynolds, 1998). As neurons die, glia cells proliferate, removing waste products and dead neurons. If someone had a stroke and you called a hospital, what advice would you probably get? As recently as the 1980s, the staff would have been in no great hurry to see the patient because they had little to offer anyway. Today, it is possible to reduce the effects of an ischemic stroke if physicians act quickly. A drug called tissue plasminogen activator (tPA) breaks up blood clots (Barinaga, 1996) and has a mixture of other effects on
damaged neurons (Kim, Park, Hong, & Koh, 1999). To get significant benefit, a patient should receive tPA within 3 hours after a stroke, although slight benefits are possible up to 6 hours after the stroke. Unfortunately, by the time a patient's family gets the patient to the hospital, and then waits through procedures in the emergency room, the delay is usually too long (Stahl, Furie, Gleason, & Gazelle, 2003).

Even when it is too late for tPA to save the cells in the immediate vicinity of the ischemia or hemorrhage, hope remains for cells in the penumbra (Latin for "almost shadow"), the region that surrounds the immediate damage (Hsu, Sik, Gallyas, Horváth, & Buzsáki, 1994; Jonas, 1995) (Figure 5.14). One idea is to prevent overstimulation by blocking glutamate synapses or preventing positive ions from entering neurons. However, most such methods have produced disappointing results (Lee, Zipfel, & Choi, 1999). A somewhat promising new drug opens potassium channels (Gribkoff et al., 2001). As calcium or other positive ions enter a neuron, potassium exits through these open channels, reducing overstimulation. One possible reason blocking overstimulation has not worked better is that stroke-damaged neurons also can die from understimulation (Colbourne, Sutherland, & Auer, 1999; Conti, Raghupathi, Trojanowski, & McIntosh, 1998). It is possible that a stroke reactivates the mechanisms of apoptosis. According to research with laboratory animals, injections of neurotrophins and other drugs that block apoptosis can improve recovery from a stroke (Barinaga, 1996; Choi-Lundberg et al., 1997; Levivier, Przedborski, Bencsics, & Kang, 1995; Schulz, Weller, & Moskowitz, 1999). However, because these drugs do not cross the blood-brain barrier, they must be injected directly into the brain. Researchers can do so with laboratory animals, but physicians understandably hesitate to try with humans.

Figure 5.14 The penumbra of a stroke
A stroke kills cells in the immediate vicinity of damage, but those in the surrounding area (the penumbra) survive at least temporarily. Therapies can be designed to promote better recovery in the penumbra.

So far, the most effective method of preventing brain damage after strokes in laboratory animals is to cool the brain. A cooled brain has less activity, lower energy needs, and less risk of overstimulation than does a brain at normal temperature (Barone, Feuerstein, & White, 1997; Colbourne & Corbett, 1995). Humans cannot be cooled safely to the same temperature that rats can, but cooling someone to about 33–36°C (91–97°F) for the first three days after a stroke significantly improves survival and long-term behavioral functioning (Steiner, Ringleb, & Hacke, 2001). Note that this approach goes contrary to most people's first impulse, which is to keep the patient warm and comfortable.

Among other approaches that have shown promise with stroke-damaged laboratory animals, one of the more interesting is to use cannabinoids—drugs related to marijuana (Nagayama et al., 1999). Cannabinoids have been shown to reduce cell loss after stroke, closed head injury, and other kinds of brain damage, although not in all experiments (van der Stelt et al., 2002). Recall from Chapter 3 (p. 74) that cannabinoids decrease the release of glutamate. One possible explanation for how they decrease brain damage is that they probably prevent excess glutamate from overexciting neurons. They may have other mechanisms as well (van der Stelt et al., 2002). So far, researchers have not tried cannabinoids with human stroke patients.

STOP & CHECK

1. What are the two kinds of stroke and what causes each kind?
2. Why is tPA not recommended in cases of hemorrhage?
3. If one of your relatives has a stroke and a well-meaning person offers a blanket, what should you do?
Check your answers on page 147.
Later Mechanisms of Recovery

After the first few hours or at most days following brain damage, no more cells are going to die, and no new cells will replace the lost ones. Nevertheless, a variety of other changes take place in the remaining neurons.
Diaschisis

A behavioral deficit after brain damage reflects more than just the functions of the cells that were destroyed. Activity in any brain area stimulates many other areas, so damage to any area deprives other areas of their normal stimulation and thus interferes with their healthy functioning. For example, after damage to part of the left frontal cortex, activity decreases in the temporal cortex and several other areas (Price, Warburton, Moore, Frackowiak, & Friston, 2001). Diaschisis (di-AS-ki-sis, from a Greek term meaning "to shock throughout") refers to the decreased activity of surviving neurons after damage to other neurons.

If diaschisis contributes to behavioral deficits following brain damage, then stimulant drugs should promote recovery by decreasing the effects of diaschisis. In a series of experiments, D. M. Feeney and colleagues measured the behavioral effects of cortical damage in rats and cats. Depending on the location of the damage, the animals showed impairments in either coordinated movement or depth perception. Injecting amphetamine (which increases release of dopamine and norepinephrine) significantly enhanced both behaviors, and animals that practiced the behaviors under the influence of amphetamine showed long-lasting benefits. Injecting a drug that blocks dopamine synapses impaired behavioral recovery (Feeney & Sutton, 1988; Feeney, Sutton, Boyeson, Hovda, & Dail, 1985; Hovda & Feeney, 1989; Sutton, Hovda, & Feeney, 1989). These results imply that people who have had a stroke should be given stimulant drugs, not immediately after the stroke, as with clot-busting drugs, but
during the next days and weeks. Although amphetamine has improved recovery from brain damage in many studies with laboratory animals, it increases the risk of heart attack, and physicians rarely recommend it. In the few cases in which stroke patients have received it, the results have been disappointing (Treig, Werner, Sachse, & Hesse, 2003). However, other stimulant drugs that do not endanger the heart appear more promising (Whyte et al., 2005). Using stimulants violates many people’s impulse to calm a stroke patient with tranquilizers. Tranquilizers decrease the release of dopamine and impair recovery after brain damage (L. B. Goldstein, 1993).
STOP & CHECK

4. Following damage to someone's brain, would it be best (if possible) to direct amphetamine to the cells that were damaged or somewhere else?
Check your answer on page 147.
The Regrowth of Axons

Although a destroyed cell body cannot be replaced, damaged axons do grow back under certain circumstances. A neuron of the peripheral nervous system has its cell body in the spinal cord and an axon that extends into one of the limbs. If the axon is crushed,
the degenerated portion grows back toward the periphery at a rate of about 1 mm per day, following its myelin sheath back to the original target. If the axon is cut instead of crushed, the myelin on the two sides of the cut may not line up correctly, and the regenerating axon may not have a sure path to follow. Sometimes a motor nerve attaches to the wrong muscle, as Figure 5.15 illustrates.

Images not available due to copyright restrictions

Within a mature mammalian brain or spinal cord, damaged axons regenerate only a millimeter or two, if at all (Schwab, 1998). Therefore, paralysis caused by spinal cord injury is permanent. However, in many kinds of fish, axons do regenerate across a cut spinal cord far enough to restore nearly normal functioning (Bernstein & Gelderd, 1970; Rovainen, 1976; Scherer, 1986; Selzer, 1978). Why do damaged CNS axons regenerate so much better in fish than in mammals? If we could answer this question, we might be able to develop new therapies for patients with spinal cord damage.

One possibility is that a cut through the adult mammalian spinal cord forms more scar tissue than it does in fish. The scar tissue not only creates a mechanical barrier to axon growth, but it also synthesizes chemicals that inhibit axon growth. We might imagine a therapy for spinal cord damage based on breaking up the scar tissue or blocking the chemicals it releases. However, despite many attempts, so far none of the methods based on breaking up scar tissue has produced much benefit (Filbin, 2003). Another explanation for the failure of axon growth in mammals and birds is that the myelin in their central nervous system secretes proteins that inhibit axon
growth (Fields, Schwab, & Silver, 1999; McClellan, 1998). However, axons also fail to regenerate in areas of the CNS that lack myelin (Raisman, 2004). So the question remains unanswered as to why axons regenerate better in fish than in mammals.
Sprouting

The brain is constantly adding new branches of axons and dendrites while withdrawing old ones (Cotman & Nieto-Sampedro, 1982). That process accelerates in response to damage. After loss of a set of axons, the cells that have lost their source of innervation react by secreting neurotrophins to induce other axons to form new branches, or collateral sprouts, that attach to the vacant synapses (Ramirez, 2001). Gradually over several months, the sprouts fill in most of the vacated synapses (Figure 5.16).

Most of the research has concerned the hippocampus, where two types of sprouting are known to occur. First, damage to a set of axons can induce sprouting by similar axons. For example, the hippocampus receives input from a nearby cortical area called the entorhinal cortex, and damage to axons from the entorhinal cortex of one hemisphere induces sprouting by axons of the other hemisphere. Those sprouts form gradually over weeks, simultaneous with improvements in memory task performance, and several kinds of evidence indicate that the sprouting is essential for the improvement (Ramirez, Bulsara, Moore, Ruch, & Abrams, 1999; Ramirez, McQuilkin, Carrigan, MacDonald, & Kelley, 1996). Second, damage sometimes induces sprouting by unrelated axons. For example, after damage to the entorhinal cortex of both hemispheres, axons from other areas form sprouts into the vacant synapses of the hippocampus. The information they bring is certainly not
the same as what was lost. In some cases, this kind of sprouting appears useful, in some cases virtually worthless, and in other cases harmful (Ramirez, 2001). Several studies indicate that gangliosides (a class of glycolipids—that is, combined carbohydrate and fat molecules) promote the restoration of damaged brains. How they do so is unclear, but one possibility is that they increase sprouting or aid in the formation of appropriate synapses. The fact that gangliosides adhere to neuron membranes suggests that they contribute to the recognition of one neuron by another. Daily injections of gangliosides aid the recovery of behavior after several kinds of brain damage in laboratory animals (Cahn, Borziex, Aldinio, Toffano, & Cahn, 1989; Ramirez et al., 1987a, 1987b; Sabel, Slavin, & Stein, 1984), and in limited trials, they have shown some promise in helping people regain function after partial damage to the spinal cord (Geisler et al., 2001). In several studies of laboratory mammals, females have recovered better than males from frontal cortex damage, especially if the damage occurred when they had high levels of the hormone progesterone (D. G. Stein & Fulop, 1998). Progesterone apparently exerts its benefits by increasing release of the neurotrophin BDNF, which promotes axon sprouting and the formation of new synapses (Gonzalez et al., 2004).
Denervation Supersensitivity
A postsynaptic cell that is deprived of most of its synaptic inputs develops increased sensitivity to the neurotransmitters that it still receives. For example, a normal muscle cell responds to the neurotransmitter acetylcholine only at the neuromuscular junction. If the axon is cut or if it is inactive for days, the muscle cell builds additional receptors, becoming sensitive to acetylcholine over a wider area of its surface (Johns & Thesleff, 1961; Levitt-Gilmour & Salpeter, 1986). The same process occurs in neurons. Heightened sensitivity to a neurotransmitter after the destruction of an incoming axon is known as denervation supersensitivity (Glick, 1974). Heightened sensitivity as a result of inactivity by an incoming axon is called disuse supersensitivity. The mechanisms of supersensitivity include an increased number of receptors (Kostrzewa, 1995) and increased effectiveness of receptors, perhaps by changes in second-messenger systems.

Figure 5.16 Collateral sprouting
A surviving axon grows a new branch to replace the synapses left vacant by a damaged axon.

Denervation supersensitivity is a way of compensating for decreased input. In some cases, it enables people to maintain nearly
normal behavior even after losing most of the axons in some pathway (Sabel, 1997). However, it can also have unpleasant consequences. For example, spinal cord injury often results in prolonged pain, and research with rats supports an explanation in terms of denervation supersensitivity: Because the injury damages many of the axons, postsynaptic neurons develop increased sensitivity to the remaining ones. Therefore, even normal inputs from the remaining axons produce intense responses (Hains, Everhart, Fullwood, & Hulsebosch, 2002).
STOP & CHECK

5. Is collateral sprouting a change in axons or dendritic receptors?
6. Is denervation supersensitivity a change in axons or dendritic receptors?
Check your answers on page 147.
Reorganized Sensory Representations and the Phantom Limb

As described in the first module of this chapter, experiences can modify the connections within the cerebral cortex to increase the representation of personally important information. Recall that after someone has played a string instrument for many years, the somatosensory cortex has an enlarged representation of
the fingers of the left hand. Such changes represent either collateral sprouting of axons or increased receptor sensitivity by the postsynaptic neurons. Similar processes occur after nervous system damage, with sometimes surprising results.

Consider how the cortex reorganizes after an amputation. Reexamine Figure 4.24 (p. 99): Each section along the somatosensory cortex receives input from a different part of the body. Within the area marked "fingers" in that figure, a closer examination reveals that each subarea responds more to one finger than to another. Figure 5.17 shows the arrangement for a monkey brain. In one study, experimenters amputated finger 3 of an owl monkey. The cortical cells that previously responded to information from that finger now had no input. Soon they became more responsive to finger 2, finger 4, or part of the palm, until the cortex developed the pattern of responsiveness shown in Figure 5.17b (Kaas, Merzenich, & Killackey, 1983; Merzenich et al., 1984).

Image not available due to copyright restrictions

What happens if an entire arm is amputated? For many years, neuroscientists assumed that the cortical area corresponding to that arm would remain permanently silent because axons from other cortical areas could not sprout far enough to reach the area representing the arm. Then came a surprise. Investigators recorded from the cerebral cortices of monkeys that had the sensory nerves cut from one forelimb 12 years previously. They found that the large stretch of cortex previously responsive to the limb had become responsive to the face (Pons et al., 1991). How did such connections form? After loss of sensory input from the forelimb, the axons representing the forelimb degenerated, leaving vacant synaptic sites at several levels of the CNS. Axons representing the face sprouted into those sites in the spinal cord, brainstem, and thalamus (Florence & Kaas, 1995; E. G. Jones & Pons, 1998). (Or perhaps axons from the face had already innervated those sites but were overwhelmed by the normal input. After removal of the normal input, the weaker synapses became stronger.) Also, lateral connections sprouted from the face-sensitive cortical areas into the previously hand-sensitive areas of the cortex, according to results from histochemistry (Florence, Taub, & Kaas, 1998) (see Methods 5.1). Brain scan studies confirm that the same processes occur with humans.

METHODS 5.1
Histochemistry

Histology is the study of the structure of tissues. Histochemistry deals with the chemical components of tissues. One example is as follows: Investigators inject a chemical called horseradish peroxidase (HRP) into the brain of a laboratory animal. The axon terminals in that area absorb the chemical and transport it back to their cell bodies. Later, investigators treat slices of the brain with a second chemical that reacts with HRP to form granules that are visible in a microscope. By finding those granules, investigators can determine the point of origin for the axons that terminated at the spot where investigators had injected the HRP.

Now consider what happens when cells in a reorganized cortex become activated. Previously, those neurons responded to an arm, and now they receive information from the face. Does the response feel like stimulation on the face or on the arm? The answer: It feels like the arm (Davis et al., 1998).

Physicians have long noted that many people with amputations experience a phantom limb, a continuing sensation of an amputated body part. That experience can range from occasional tingling to intense pain. It is possible to have a phantom hand, foot, intestines, breast, penis, or anything else that has been amputated. Sometimes the phantom sensation fades within days or weeks, but sometimes it lasts a lifetime (Ramachandran & Hirstein, 1998). Until the 1990s, no one knew what caused phantom pains, and most believed that the sensations were coming from the stump of the amputated limb. Some physicians even performed additional amputations, removing more and more of the limb in a futile attempt to eliminate the phantom sensations. But modern methods have demonstrated that phantom limbs develop only if the relevant portion of the somatosensory cortex reorganizes and becomes responsive to alternative inputs (Flor et al., 1995). For example, axons representing the face may come to activate the cortical area previously devoted to an amputated hand. Whenever the face is touched, the person still feels it on the face but
also feels a sensation in the phantom hand. It is possible to map out which part of the face stimulates sensation in which part of the phantom hand, as shown in Figure 5.18 (Aglioti, Smania, Atzei, & Berlucchi, 1997).

Image not available due to copyright restrictions

The relationship between phantom sensations and brain reorganization enables us to understand some otherwise puzzling observations. Note in Figure 4.24 (p. 99) that the part of the cortex responsive to the feet is immediately next to the part responsive to the genitals. Two patients, after amputation, felt a phantom foot during sexual arousal! One in fact reported feeling orgasm not only in the genitals but also in the phantom foot—and intensely enjoying it (Ramachandran & Blakeslee, 1998). Evidently, the representation of the genitals had spread into the cortical area responsible for foot sensation.

If a phantom sensation is painful, is there any way to relieve it? In some cases, yes. Amputees who learn to use an artificial arm report that their phantom sensations gradually disappear (Lotze et al., 1999). Apparently, they start attributing sensations to the artificial arm, and in doing so, they displace abnormal connections from the face. Similarly, a study of one man found that after his hands were amputated, the area of his cortex that usually responds to the hands partly shifted to face sensitivity, but after he received hand transplants, his cortex gradually shifted back to hand sensitivity (Giraux, Sirigu, Schneider, & Dubernard, 2001). One important message from these studies is that connections in the brain remain plastic throughout life. There are limits on the plasticity, certainly, but they are less strict than neuroscientists once supposed.

Amputees who feel a phantom limb are likely to lose those phantom sensations if they learn to use an artificial limb.

STOP & CHECK

7. Cite an example in which reorganization of the brain is helpful and one in which it is harmful.
Check your answer on page 147.

Learned Adjustments in Behavior
So far, the discussion has focused on changes in the wiring of the brain. In fact, much recovery from brain damage is based on learning, not any structural adjustments. If you can’t find your keys, perhaps you accidentally dropped them while hiking through the forest (so you will never find them), or perhaps you absentmindedly put them in an unusual place (where you will find them if you keep looking). Similarly, an individual with brain damage who seems to have lost some ability may indeed have lost it forever or may be able to find it with enough effort. Much, probably most, recovery from brain damage depends on learning to make better use of the abilities that were spared. For example, if you lost your peripheral vision, you would learn to move your head from side to side to see better (Marshall, 1985). Sometimes a person or animal with brain damage appears unable to do something but is in fact not trying. For example, consider an animal that has incurred damage to the sensory nerves linking one forelimb to the spinal cord, as in Figure 5.19. The animal can no longer feel the limb, although the motor nerves still connect to the muscles. We say the limb is deafferented because it has lost its afferent (sensory) input. A monkey with a deafferented limb does not spontaneously use it for walking, picking up objects, or any other voluntary behaviors (Taub & Berman, 1968). Investigators initially assumed that the monkey could not use a limb that it didn’t feel. In a later experiment, however, they cut the afferent nerves of both forelimbs; despite this more extensive damage, the monkey regained use of both deafferented limbs. It could walk, climb the walls of metal cages, and even pick up a raisin between its thumb and forefinger. Apparently, a monkey fails to use a deafferented forelimb only because walking on three limbs is easier than using the impaired
Figure 5.19 Cross-section through the spinal cord (labeled structures include the dorsal root (sensory), ventral root (motor), gray matter, white matter, and central canal). A cut through the dorsal root (as shown) deprives the animal of touch sensations from part of the body but leaves the motor nerves intact.
For another example, consider a rat with damage to its visual cortex. Prior to the damage, it had learned to approach a white card instead of a black card for food, but just after the damage, it approaches one card or the other randomly. Has it completely forgotten the discrimination? Evidently not, because it can more easily relearn to approach the white card than learn to approach the black card (T. E. LeVere & Morlock, 1973) (Figure 5.20). Thomas LeVere (1975) proposed that a lesion to the visual cortex does not destroy the memory trace but merely impairs the rat's ability to find it. As the animal recovers, it regains access to misplaced memories.

Similarly, many people with brain damage are capable of more than they realize. Often, they find ways of getting through the tasks of their day without relying on their impaired skills; for example, someone with impaired language will rely on a spouse to do the talking, or someone with impaired face recognition will learn to recognize people by their voices. Therapy for people with brain damage focuses on showing them how much they already can do and encouraging them to practice those skills. For example, someone with damage to the motor cortex of one hemisphere learns to use the pathways from the intact hemisphere more efficiently (Chollet & Weiller, 1994).

Treatment begins with careful evaluation of a patient's abilities and disabilities. Such evaluations are the specialty of neuropsychologists (see Table 1.1, p. 9), who use standardized tests and sometimes improvise new tests to try to pinpoint the problems. For example, someone who has trouble carrying out spoken instructions might be impaired in hearing, memory, language, muscle control, or alertness. Often, neuropsychologists need to test someone repeatedly under different conditions because performance fluctuates, especially when a person with brain damage becomes fatigued.
Figure 5.20 Memory impairment after cortical damage (panel sequence: the rat learns to approach the white card; after damage to the visual cortex, at first the rat does not choose correctly; however, it relearns the original discrimination easily; but if retrained on the opposite discrimination, it learns slowly). Brain damage impairs retrieval of a memory but does not destroy it completely. (Source: Based on T. E. LeVere & Morlock, 1973)
After identifying the problem, a neuropsychologist might refer a patient to a physical therapist or occupational therapist, who will help the patient practice the impaired skills. Therapists often remark that they get their best results if they start soon after a patient's stroke, and animal research confirms this impression.
In one study, rats received damage to the parietal cortex of one hemisphere, resulting in poor coordination of the contralateral forepaw. Some of the rats received experiences designed to encourage them to practice with the impaired limb. Those who began practice 5 days after the damage recovered better than those who started after 14 days, who in turn recovered better than those who started after 30 days (Biernaskie, Chernenko, & Corbett, 2004). Evidently, the brain goes through an era of plasticity during the first days after damage.

One important generalization is that behavior recovered after brain damage is effortful, and the recovery is precarious. Someone who appears to be functioning normally is working harder than usual to achieve the same end. Behavior deteriorates markedly after a couple of beers, a physically tiring day, or other kinds of stresses that would minimally affect a normal, healthy person (Fleet & Heilman, 1986). It also deteriorates more than average in old age (Corkin, Rosen, Sullivan, & Clegg, 1989).
STOP & CHECK
8. Suppose someone has suffered a spinal cord injury that interrupts all sensation from the left arm. Now he or she uses only the right arm. Of the following, which is the most promising therapy: electrically stimulate the skin of the left arm, tie the right arm behind the person’s back, or blindfold the person? Check your answer on page 147.
Module 5.2 In Closing: Brain Damage and Recovery

The mammalian body is well equipped to replace lost blood cells or skin cells but poorly prepared to deal with lost brain cells. Even the responses that do occur after brain damage, such as collateral sprouting of axons or reorganization of sensory representations, are not always helpful. It is tempting to speculate that we did not evolve many mechanisms of recovery from brain damage because, through most of our evolutionary history, an individual with brain damage was not likely to survive long enough to recover. Today, many people with brain and spinal cord damage survive for years, and we need continuing research on how to improve their lives.
For decades now, researchers have been optimistic about developing new drug therapies or surgeries for people with brain or spinal cord injury. Many kinds of treatment are in the experimental phase, although so far none has shown a good benefit-to-risk ratio. In Chapter 8, we shall focus on one of these approaches, the idea of transplanting healthy cells, preferably stem cells from an embryo, to replace dying brain cells. Greater success may be possible for replacing glial cells, such as those that produce myelin sheaths (Keirstead et al., 2005). We can and should remain optimistic about therapies of the future, but unfortunately, we don't know when that future will arrive. Still, in the process of doing the research, we learn much about the nervous system. We can now explain some phenomena, such as the phantom limb, that previously were completely mysterious.
Summary
1. Brain damage has many causes, including blows to the head, obstruction of blood flow to the brain, or a ruptured blood vessel in the brain. Strokes kill neurons largely by overexcitation. (p. 137)
2. During the first 3 hours after an ischemic stroke, tissue plasminogen activator (tPA) can reduce cell loss by breaking up the blood clot. Theoretically, it should also be possible to minimize cell loss by preventing overexcitation of neurons, but so far none of the methods based on this idea have produced demonstrable benefits in humans. (p. 138)
3. When one brain area is damaged, other areas become less active than usual because of their loss of input. Stimulant drugs can help restore normal function of these undamaged areas. (p. 139)
4. A cut axon regrows in the peripheral nervous system of mammals and in either the central or peripheral nervous system of many fish. Researchers have tried to promote axon growth in the mammalian CNS as well, but so far without much success. (p. 140)
5. After an area of the CNS loses its usual input, other axons begin to excite it as a result of either sprouting or denervation supersensitivity. In some cases, this abnormal input produces odd sensations such as the phantom limb. (p. 141)
6. Most recovery of function after brain damage relies on learning to make better use of spared functions. Many individuals with brain damage are capable of more than they show because they avoid using skills that have become impaired or difficult. (p. 144)
Answers to STOP & CHECK Questions
1. The more common form, ischemia, is the result of an occlusion of an artery. The other form, hemorrhage, is the result of a ruptured artery. (p. 139)
2. The drug tPA breaks up blood clots, and the problem in hemorrhage is a ruptured blood vessel, not a blood clot. (p. 139)
3. Refuse the blanket. Recovery will be best if the stroke victim remains cold for the first 3 days. (p. 139)
4. It is best to direct the amphetamine to the cells that had been receiving input from the damaged cells. Presumably, the loss of input has produced diaschisis. (p. 140)
5. Axons (p. 142)
6. Dendritic receptors (p. 142)
7. The small-scale reorganization that enables increased representation of a violinist's or Braille reader's fingers is helpful. The larger-scale reorganization that occurs after amputation is harmful. (p. 144)
8. Tie the right arm behind the back to force the person to use the impaired arm instead of only the normal arm. Stimulating the skin of the left arm would accomplish nothing, as the sensory receptors have no input to the CNS. Blindfolding would be either irrelevant or harmful (by decreasing the visual feedback from left-hand movements). (p. 146)
Thought Questions
1. Ordinarily, patients with advanced Parkinson's disease (who have damage to dopamine-releasing axons) move very slowly if at all. However, during an emergency (e.g., a fire in the building), they may move rapidly and vigorously. Suggest a possible explanation.
2. Drugs that block dopamine synapses tend to impair or slow limb movements. However, after people have taken such drugs for a long time, some experience involuntary twitches or tremors in their muscles. Based on something in this chapter, propose a possible explanation.
Chapter Ending
Key Terms and Activities

Terms
apoptosis (p. 128)
closed head injury (p. 137)
collateral sprout (p. 141)
deafferent (p. 144)
denervation supersensitivity (p. 141)
diaschisis (p. 139)
differentiation (p. 124)
disuse supersensitivity (p. 141)
edema (p. 138)
fetal alcohol syndrome (p. 130)
focal hand dystonia (p. 135)
ganglioside (p. 141)
hemorrhage (p. 137)
ischemia (p. 137)
migration (p. 123)
myelination (p. 124)
nerve growth factor (NGF) (p. 128)
neural Darwinism (p. 128)
neurotrophin (p. 129)
penumbra (p. 138)
phantom limb (p. 143)
proliferation (p. 123)
stem cells (p. 125)
stroke (or cerebrovascular accident) (p. 137)
synaptogenesis (p. 125)
tissue plasminogen activator (tPA) (p. 138)
Suggestions for Further Reading
Azari, N. P., & Seitz, R. J. (2000). Brain plasticity and recovery from stroke. American Scientist, 88, 426–431. Good review of some of the mechanisms of recovery from brain damage.
Levi-Montalcini, R. (1988). In praise of imperfection. New York: Basic Books. Autobiography by the discoverer of nerve growth factor.
Ramachandran, V. S., & Blakeslee, S. (1998). Phantoms in the brain. New York: Morrow. One of the most thought-provoking books ever written about human brain damage, including the phantom limb phenomenon.
Websites to Explore
You can go to the Biological Psychology Study Center and click this link. While there, you can also check for suggested articles available on InfoTrac College Edition. The Biological Psychology Internet address is:
http://psychology.wadsworth.com/book/kalatbiopsych9e/
National Stroke Association Home Page http://www.stroke.org/
Neurobehavioral Teratology Society Home Page A society dedicated to understanding the effects of prenatal alcohol and other drugs on development of the brain and behavior http://www.nbts.org
Exploring Biological Psychology CD
Sperry Experiment (animation)
Phantom Limb (animation)
Critical Thinking (essay questions)
Chapter Quiz (multiple-choice questions)
http://www.thomsonedu.com
Go to this site for the link to ThomsonNOW, your one-stop study shop. Take a Pre-Test for this chapter, and ThomsonNOW will generate a Personalized Study Plan based on your test results. The Study Plan will identify the topics you need to review and direct you to online resources to help you master these topics. You can then take a Post-Test to help you determine which concepts you have mastered and what you still need to work on.
View this animation of studies of a phantom limb.
This animation explains Sperry’s results showing that axons grow back to their original targets.
6
Vision
Chapter Outline
Module 6.1 Visual Coding and the Retinal Receptors
General Principles of Perception
The Eye and Its Connections to the Brain
Visual Receptors: Rods and Cones
Color Vision
In Closing: Visual Receptors
Summary
Answers to Stop & Check Questions
Thought Question
Module 6.2 The Neural Basis of Visual Perception
An Overview of the Mammalian Visual System
Processing in the Retina
Pathways to the Lateral Geniculate and Beyond
Pattern Recognition in the Cerebral Cortex
Disorders of Object Recognition
The Color, Motion, and Depth Pathways
Visual Attention
In Closing: From Single Cells to Vision
Summary
Answers to Stop & Check Questions
Thought Question
Module 6.3 Development of Vision
Infant Vision
Early Experience and Visual Development
In Closing: The Nature and Nurture of Vision
Summary
Answers to Stop & Check Questions
Thought Questions
Terms
Suggestions for Further Reading
Websites to Explore
Exploring Biological Psychology CD
ThomsonNOW

Main Ideas
1. Each sensory neuron conveys a particular type of experience; for example, anything that stimulates the optic nerve is perceived as light.
2. Vertebrate vision depends on two kinds of receptors: cones, which contribute to color vision, and rods, which do not.
3. Every cell in the visual system has a receptive field, an area of the visual world that can excite or inhibit it.
4. After visual information reaches the brain, concurrent pathways analyze different aspects, such as shape, color, and movement.
5. Neurons of the visual system establish approximately correct connections and properties through chemical gradients that are present before birth. However, visual experience can fine-tune or alter those properties, especially early in life.

Opposite: Later in this chapter, you will understand why this prairie falcon has tilted its head. Source: © Tom McHugh/Photo Researchers
Several decades ago, a graduate student taking his final oral exam for a PhD in psychology was asked, "How far can an ant see?" He suddenly turned pale. He did not know the answer, and evidently, he was supposed to. He mentally reviewed everything he had read about the compound eye of insects. Finally, he gave up and admitted he did not know. With an impish grin the professor told him, "Presumably, an ant can see 93 million miles—the distance to the sun."

Yes, this was a trick question. However, it illustrates an important point: How far an ant can see, or how far you or I can see, depends on how far the light travels. We see because light strikes our eyes, not because we send out "sight rays." But that principle is far from intuitive. In fact, it was not known until the Arab philosopher Ibn al-Haytham (965–1040) demonstrated that light rays bounce off any object in all directions, but we see only those rays that strike the retina perpendicularly (Gross, 1999). Even today, a distressingly large number of college students believe that energy comes out of their eyes when they see (Winer, Cottrell, Gregg, Fournier, & Bica, 2002). The sensory systems, especially vision, are quite complex and do not match our commonsense notions.
Module 6.1
Visual Coding and the Retinal Receptors
Imagine that you are a piece of iron. So there you are, sitting around doing nothing, as usual, when along comes a drop of water. What will be your perception of the water? You will have the experience of rust. From your point of view, water is above all rustish.

Now return to your perspective as a human. You know that rustishness is not really a property of water itself but of how it reacts with iron. The same is true of human perception. In vision, for example, when you look at a tree's leaves, you perceive them as green. But green is no more a property of the leaves than rustish is a property of water. Greenness is what happens when the light bouncing off the leaves reacts with the neurons in your brain. The greenness is in us—just as the rust is really in the piece of iron.
General Principles of Perception

Each receptor is specialized to absorb one kind of energy and transduce (convert) it into an electrochemical pattern in the brain. For example, visual receptors can absorb and sometimes respond to as little as one photon of light and transduce it into a receptor potential, a local depolarization or hyperpolarization of a receptor membrane. The strength of the receptor potential determines the amount of excitation or inhibition the receptor delivers to the next neuron on the way to the brain. After all the information from millions of receptors reaches the brain, how does the brain make sense of it?

From Neuronal Activity to Perception

Let us consider what is not an answer. The 17th-century philosopher René Descartes believed that the brain's representation of a physical stimulus had to resemble the stimulus itself. That is, when you look at something, the nerves from the eye would project a pattern of impulses arranged as a picture on your visual cortex—right-side up. The problem with this theory is that it assumes a little person in the head who can look at the picture. Even if there were a little person in the head, we would have to explain how he or she perceives the picture. (Maybe there is an even littler person inside the little person's head?) The early scientists and philosophers might have avoided this error if they had started by studying olfaction instead; we are less tempted to imagine that we create a little flower for a little person in the head to smell.

The main point is that your brain's activity does not duplicate the objects that you see. For example, when you see a table, the representation of the top of the table does not have to be on the top of your retina or on the top of your head. Consider an analogy to computers: When a computer stores a photograph, the top of the photograph does not have to be toward the top of the computer's memory bank.

Law of Specific Nerve Energies
An important aspect of all sensory coding is which neurons are active. Impulses in one neuron indicate light, whereas impulses in another neuron indicate sound. In 1838, Johannes Müller described this insight as the law of specific nerve energies. Müller held that whatever excites a particular nerve establishes a special kind of energy unique to that nerve. In modern terms, any activity by a particular nerve always conveys the same kind of information to the brain.

We can state the law of specific nerve energies another way: No nerve has the option of sending the message "high C note" at one time, "bright yellow" at another time, and "lemony scent" at yet another. It sends only one kind of message—action potentials. The brain somehow interprets the action potentials from the auditory nerve as sounds, those from the olfactory nerve as odors, and those from the optic nerve as light. Admittedly, the word "somehow" glosses over a deep mystery, but the idea is that some experiences are given. You don't have to learn how to perceive green; a certain pattern of activity in particular neurons produces that experience automatically.

Here is a demonstration: If you rub your eyes, you may see spots or flashes of light even in a totally dark room. You have applied mechanical pressure, but that mechanical pressure excited visual receptors in your
eye; anything that excites those receptors is perceived as light. (If you try this demonstration, first remove any contact lenses. Then shut your eyes and rub gently.)
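As a purely schematic illustration (not a biological model), this "labeled line" logic can be restated in a few lines of code; the nerve names, the mapping, and the perceive function below are invented for the example and are not drawn from the text.

# A schematic restatement of the law of specific nerve energies:
# the perceived quality depends on WHICH nerve is active,
# not on what kind of stimulus activated it.
# The names and values here are invented for illustration only.

PERCEPT_BY_NERVE = {
    "optic": "light",
    "auditory": "sound",
    "olfactory": "odor",
}

def perceive(nerve, stimulus):
    """Return the experience produced by activity in a given nerve.

    The stimulus argument is deliberately ignored: pressure on the eye,
    light, or electrical stimulation all read out as that nerve's own quality.
    """
    return PERCEPT_BY_NERVE[nerve]

print(perceive("optic", stimulus="mechanical pressure from rubbing the eye"))   # light
print(perceive("olfactory", stimulus="electrical stimulation of the nerve"))    # odor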
STOP & CHECK
1. If it were possible to flip your entire brain upside down, without breaking any of its connections to the sense organs, what would happen to your perceptions of what you see, hear, and so forth? 2. If someone electrically stimulated the receptors in your ear, how would you perceive it? Check your answers on page 164.
The Eye and Its Connections to the Brain

Light enters the eye through an opening in the center of the iris called the pupil (Figure 6.1). It is focused by the lens (adjustable) and cornea (not adjustable) and projected onto the retina, the rear surface of the eye, which is lined with visual receptors. Light from the left side of the world strikes the right half of the retina, and vice versa. Light from above strikes the bottom half of the retina, and light from below strikes the top half. As in a camera, the image is reversed. However, the inversion of the image poses no problems for the nervous system. Remember, the visual system does not simply duplicate the image; it represents it by a code of various kinds of neuronal activity.
The Route Within the Retina
In a sense, the retina is built inside out. If you or I were designing an eye, we would probably send the receptors' messages directly back to the brain. In the vertebrate retina, however, the receptors, located on the back of the eye, send their messages not toward the brain but to bipolar cells, neurons located closer to the center of the eye. The bipolar cells send their messages to ganglion cells, located still closer to the center of the eye. The ganglion cells' axons join one another, loop around, and travel back to the brain (Figures 6.1 and 6.2).
Figure 6.1 Cross-section of the vertebrate eye (labeled structures include the cornea, pupil, iris, lens, ciliary muscle, retina, fovea, blind spot, rods and cones, and optic nerve). Note how an object in the visual field produces an inverted image on the retina. Also note that the optic nerve exits the eyeball on the nasal side (the side closer to the nose).
Figure 6.2 Visual path within the eyeball (labeled elements include the receptors, bipolar cells, horizontal cells, amacrine cells, ganglion cells, axons from ganglion cells, blood vessels, and the optic nerve). The receptors send their messages to bipolar and horizontal cells, which in turn send messages to the amacrine and ganglion cells. The axons of the ganglion cells loop together to exit the eye at the blind spot. They form the optic nerve, which continues to the brain.
Additional cells called amacrine cells get information from bipolar cells (Figure 6.3) and send it to other bipolar cells, other amacrine cells, or ganglion cells.
Image not available due to copyright restrictions
Amacrine cells are numerous and diverse; about 50 types have been identified so far (Wässle, 2004). Different types control the ability of ganglion cells to respond to shapes, movement, or other specific aspects of visual stimuli (Fried, Münch, & Werblin, 2002; Sinclair, Jacobs, & Nirenberg, 2004).

One consequence of this anatomy is that light has to pass through the ganglion cells and bipolar cells before it reaches the receptors. However, these cells are transparent, and light passes through them without distortion. A more important consequence of the eye's anatomy is the blind spot. The ganglion cell axons band together to form the optic nerve (or optic tract), an axon bundle that exits through the back of the eye. The point at which it leaves (which is also where the major blood vessels leave) is called the blind spot because it has no receptors. Every person is therefore blind in part of each eye.¹

You can demonstrate your own blind spot using Figure 6.4. Close your left eye and focus your right eye on the o at the top. Then move the page toward you and away, noticing what happens to the x. When the page is about 25 cm (10 in.) away, the x disappears because its image has struck the blind spot of your retina.

¹ This statement applies to all people, without qualification. Can you think of any other absolutes in psychology? Most statements are true "on the average," "in general," or "with certain exceptions."
Figure 6.4 Two demonstrations of the blind spot of the retina. Close your left eye and focus your right eye on the o in the top part. Move the page toward you and away, noticing what happens to the x. At a distance of about 25 cm (10 in.), the x disappears. Now repeat this procedure with the bottom part. At that same distance, what do you see?
Now repeat the procedure with the lower part of the figure. When the page is about 25 cm away from your eyes, what do you see? The gap disappears! Note how your brain in effect fills in the gap. When the blind spot interrupts a straight line or other regular pattern, you tend to perceive that pattern in the blind area, as if you actually saw it. For another interesting demonstration of this phenomenon, see the Online Try It Yourself exercise "Filling in the Blind Spot."

Some people have a much larger blind spot because glaucoma has destroyed parts of the optic nerve. Generally, they do not notice it any more than you notice your smaller one. Why not? Mainly, they do not see a black spot in their blind areas, but simply no sensation at all, the same as you see in your blind spot or out the back of your head.
STOP & CHECK
3. What makes the blind spot of the retina blind? Check your answer on page 164.
Fovea and Periphery of the Retina

When you look at any detail, such as a letter on this page, you fixate it on the central portion of your retina, especially the fovea (meaning "pit"), a tiny area specialized for acute, detailed vision (see Figure 6.1). Because blood vessels and ganglion cell axons are almost absent near the fovea, it has the least impeded vision available. The tight packing of receptors also aids perception of detail. Furthermore, each receptor in the fovea connects to a single bipolar cell, which in turn connects to a single ganglion cell, which then extends its axon to the brain. The ganglion cells in the fovea of humans and other primates are called midget ganglion cells because each is small and responds to just a single cone. As a result, each cone in the fovea has in effect a direct line to the brain, which can register the exact location of the input.

Toward the periphery, more and more receptors converge onto bipolar and ganglion cells. As a result, the brain cannot detect the exact location or shape of a peripheral light source. However, the summation enables perception of much fainter lights in the periphery. In short, foveal vision has better acuity (sensitivity to detail), and peripheral vision has better sensitivity to dim light.

Peripheral vision can identify a shape by itself more easily than one surrounded by other objects (Parkes, Lund, Angelucci, Solomon, & Morgan, 2001). For example, fixate your right eye on the x in each of the following displays. With the upper display, you can probably detect the direction of the sloping lines at the right. With the lower display, you probably cannot, at least not clearly, because the surrounding elements interfere with detail perception.

x                        ///

x                        #######
                         ##///##
                         #######
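To make the convergence trade-off described above concrete, here is a toy numerical sketch. The threshold and receptor values are invented, and the two functions are only caricatures of a midget-style pathway and a pooling pathway, not models of real retinal circuits.

# A toy illustration (invented numbers) of the trade-off the text describes:
# a pathway that reads one receptor preserves location but misses faint light,
# while a pathway that sums many receptors detects faint light at the cost of
# knowing exactly where it fell.

THRESHOLD = 1.0  # arbitrary response needed before a ganglion cell "fires"

def foveal_style(receptors):
    """One receptor per ganglion cell: report which positions crossed threshold."""
    return [i for i, r in enumerate(receptors) if r >= THRESHOLD]

def peripheral_style(receptors):
    """Many receptors per ganglion cell: sum them and report detect / no detect."""
    return sum(receptors) >= THRESHOLD

faint_spot = [0.0, 0.0, 0.4, 0.4, 0.4, 0.0]  # dim light spread over a few receptors

print(foveal_style(faint_spot))      # [] : no single receptor reaches threshold
print(peripheral_style(faint_spot))  # True: the pooled signal does, but position is lost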
You have heard the expression "eyes like a hawk." In many bird species, the eyes occupy most of the head, compared to only 5% of the head in humans. Furthermore, many bird species have two foveas per eye, one pointing ahead and one pointing to the side (Wallman & Pettigrew, 1985). The extra foveas enable perception of detail in the periphery. Hawks and other predatory birds have a greater density of visual receptors on the top half of their retinas (looking down) than on the bottom half (looking up). That arrangement is highly adaptive because predatory birds spend most of their day soaring high in the air looking down. However, when the bird lands and needs to see above it, it must turn its head, as Figure 6.5 shows (Waldvogel, 1990).
Conversely, in many prey species such as rats, the greater density of receptors is on the bottom half of the retina (Lund, Lund, & Wise, 1974). As a result, they can see above them better than they can below.
Visual Receptors: Rods and Cones
The vertebrate retina contains two types of receptors: rods and cones (Figure 6.6). The rods, which are most abundant in the periphery of the human retina, respond to faint light but are not useful in bright daylight because bright light bleaches them. Cones, which are most abundant in and around the fovea, are less active in dim light, more useful in bright light, and essential for color vision. Because of the distribution of rods and cones, you have good color vision in the fovea but not in the periphery. The differences between foveal and peripheral vision are summarized in Table 6.1.

Although rods outnumber cones about 20 to 1 in the human retina, cones have a much more direct route to the brain. Remember the midget ganglion cells: In the fovea (all cones), each receptor has its own line to the brain. In the periphery (mostly rods), each receptor shares a line with tens or hundreds of others. A typical count shows about 10 cone-driven responses in the brain for every rod-driven response (Masland, 2001). The ratio of rods to cones varies among species. A 20-to-1 ratio may sound high, but in fact, the ratio is much higher in rodents and other species that are active at night. An extreme case is that of South American oilbirds, which live in caves and emerge only at night to search for fruits. They have about 15,000 rods per cone. Furthermore, as an adaptation to detect faint lights at night, their rods are packed three deep throughout the retina (Martin, Rojas, Ramírez, & McNeil, 2004).

Both rods and cones contain photopigments, chemicals that release energy when struck by light. Photopigments consist of 11-cis-retinal (a derivative of vitamin A) bound to proteins called opsins, which modify the photopigments' sensitivity to different wavelengths of light. The 11-cis-retinal is stable in the dark; light energy converts it suddenly to another form, all-trans-retinal, in the process releasing energy that activates second messengers within the cell (Q. Wang, Schoenlein, Peteanu, Mathies, & Shank, 1994). (The light is absorbed in this process; it does not continue to bounce around in the eye.) Just as neurotransmitters are virtually the same across species, so are photopigments and opsins. The opsin found in human eyes is similar to that of other vertebrates and many invertebrates (Arendt, Tessmar-Raible, Snyman, Dorresteijn, & Wittbrodt, 2004).

Figure 6.5 A behavioral consequence of how receptors are arranged on the retina. One owlet has turned its head almost upside down to see above itself. Birds of prey have a great density of receptors on the upper half of the retina, enabling them to see below them in great detail during flight. But they see objects above themselves poorly, unless they turn their heads. Take another look at the prairie falcon at the start of this chapter. It is not a one-eyed bird; it is a bird that has tilted its head. Do you now understand why?

Figure 6.6 Structure of rod and cone. (a) Diagram of a rod and a cone. (b) Photo of rods and a cone, produced with a scanning electron microscope. Magnification x 7000.
Table 6.1 Human Foveal Vision and Peripheral Vision

Receptors
Foveal vision: Cones in the fovea itself; cones and rods mix in the surrounding area.
Peripheral vision: Proportion of rods increases toward the periphery; the extreme periphery has only rods.

Convergence of receptors
Foveal vision: Just a few receptors send their input to each postsynaptic cell.
Peripheral vision: Increasing numbers of receptors send input to each postsynaptic cell.

Brightness sensitivity
Foveal vision: Useful for distinguishing among bright lights; responds poorly to faint lights.
Peripheral vision: Responds well to faint lights; less useful for making distinctions in bright light.

Sensitivity to detail
Foveal vision: Good detail vision because few receptors funnel their input to a postsynaptic cell.
Peripheral vision: Poor detail vision because so many receptors send their input to the same postsynaptic cell.

Color vision
Foveal vision: Good (many cones).
Peripheral vision: Poor (few cones).
STOP & CHECK
4. You sometimes find that you can see a faint star on a dark night better if you look slightly to the side of the star instead of straight at it. Why?
5. If you found a species with a high ratio of cones to rods in its retina, what would you predict about its way of life?
Check your answers on page 164.
Color Vision
In the human visual system, the shortest visible wavelengths, about 350 nm (1 nm = nanometer, or 10⁻⁹ m), are perceived as violet; progressively longer wavelengths are perceived as blue, green, yellow, orange, and red, near 700 nm (Figure 6.7). The "visible" wavelengths vary depending on a species' receptors. For example, birds' receptors enable them to see shorter wavelengths than humans can. That is, wavelengths that we describe as "ultraviolet" are simply violet to birds. (Of course, we don't know what their experience looks like.) In general, small songbirds see further into the ultraviolet range than do predatory birds such as hawks and falcons. Many songbird species have taken advantage of that tendency by evolving feathers that strongly reflect very short wavelengths, which can be seen more easily by their own species than by predators (Håstad, Victorsson, & Ödeen, 2005).

Figure 6.7 A beam of light separated into its wavelengths (the scale runs from gamma rays and X-rays through ultraviolet, the visible range from about 350 nm (violet) to 700 nm (red), infrared, radar, and radio frequencies). Although the wavelengths vary over a continuum, we perceive them as several distinct colors.

Discrimination among colors poses a special coding problem for the nervous system. A cell in the visual system, like any other neuron, can vary only its frequency of action potentials or, in a cell with graded potentials, its membrane polarization. If the cell's response indicates brightness, then it cannot simultaneously signal color. Conversely, if each response indicates a different color, the cell cannot signal brightness. The inevitable conclusion is that no single neuron can simultaneously indicate brightness and color; our perceptions must depend on a combination of responses by different neurons. Scientists of the 1800s proposed two major interpretations of color vision: the trichromatic theory and the opponent-process theory.
The Trichromatic (Young-Helmholtz) Theory

People can distinguish red, green, yellow, blue, orange, pink, purple, greenish-blue, and so forth. If we don't have a separate receptor for every distinguishable color, how many types do we have? The first person to approach this question fruitfully was an amazingly productive man named Thomas Young (1773–1829). Young was the first to decipher the Rosetta stone, although his version was incomplete. He also founded the modern wave theory of light, defined energy in its modern form, founded the calculation of annuities, introduced the coefficient of elasticity, discovered much about the anatomy of the eye, and made major contributions to many other fields (Martindale, 2001). Previous scientists thought they could explain color by understanding the physics of light. Young was among the first to recognize that color required a biological explanation. He proposed that we perceive color by comparing the responses across a few types of receptors, each of which was sensitive to a different range of wavelengths.

This theory, later modified by Hermann von Helmholtz, is now known as the trichromatic theory of color vision, or the Young-Helmholtz theory. According to this theory, we perceive color through the relative rates of response by three kinds of cones, each kind maximally sensitive to a different set of wavelengths. (Trichromatic means "three colors.") How did Helmholtz decide on the number three? He collected psychophysical observations, reports by observers concerning their perceptions of various stimuli. He found that people could match any color by mixing appropriate amounts of just three wavelengths. Therefore, he concluded that three kinds of receptors—we now call them cones—are sufficient to account for human color vision.

Figure 6.8 shows wavelength-sensitivity functions for the three cone types: short-wavelength, medium-wavelength, and long-wavelength. Note that each cone responds to a broad band of wavelengths but to some more than others.

Figure 6.8 Response of rods and three kinds of cones to various wavelengths of light (percentage of maximum response plotted against wavelength, roughly 400 to 650 nm, for the rods and the short-, medium-, and long-wavelength cones). Note that each kind responds somewhat to a wide range of wavelengths but best to wavelengths in a particular range. (Source: From J. K. Bowmaker and H. J. A. Dartnell, "Visual pigments of rods and cones in a human retina," Journal of Physiology, 298, 501–511. Copyright © 1980. Reprinted by permission of the author.)

According to the trichromatic theory, we discriminate among wavelengths by the ratio of activity across the three types of cones. For example, light at 550 nm excites the medium-wavelength and long-wavelength receptors about equally and the short-wavelength receptor almost not at all. This ratio of responses among the three cones determines a perception of yellow. More intense light increases the activity of all three cones without much change in their ratio of responses. As a result, the color appears brighter but still yellow. When all three types of cones are equally active, we see white or gray.

Note that the response of any one cone is ambiguous. For example, a low response rate by a middle-wavelength cone might indicate low-intensity 540-nm light or brighter 500-nm light or still brighter 460-nm light. A high response rate could indicate either bright light at 540 nm or bright white light, which includes 540 nm. The nervous system can determine the color and brightness of the light only by comparing the responses of the three types of cones.

Given the desirability of seeing all colors in all locations, we might suppose that the three kinds of cones would be equally abundant and evenly distributed. In fact, they are not. Long- and medium-wavelength cones are far more abundant than short-wavelength (blue) cones, and consequently, it is easier to see tiny red, yellow, or green dots than blue dots (Roorda & Williams, 1999). Try this: Look at the dots in the following display, first from close and then from greater distances. You probably will notice that the blue dots look blue when close but appear black from a greater distance. The other colors are still visible when the blue is not.
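The ratio-of-responses principle can be illustrated with a toy calculation. In the sketch below, the bell-shaped sensitivity curves, peak wavelengths, and widths are invented stand-ins rather than the measured functions of Figure 6.8; the point is only that multiplying the light's intensity scales all three cone responses together, so the ratios, and with them the signaled hue, stay essentially the same.

# A toy illustration (not a physiological model) of the trichromatic idea:
# hue is carried by the RATIO of responses across three cone types, so
# scaling the light's intensity changes all three responses together but
# leaves the ratios roughly unchanged. The Gaussian curves and peak
# wavelengths below are made-up approximations.

import math

CONE_PEAKS_NM = {"short": 440, "medium": 530, "long": 560}  # assumed peaks

def cone_response(wavelength_nm, peak_nm, width_nm=60.0, intensity=1.0):
    """Rough bell-shaped sensitivity around the cone's preferred wavelength."""
    return intensity * math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

def cone_pattern(wavelength_nm, intensity=1.0):
    responses = {name: cone_response(wavelength_nm, peak, intensity=intensity)
                 for name, peak in CONE_PEAKS_NM.items()}
    total = sum(responses.values())
    ratios = {name: r / total for name, r in responses.items()}
    return responses, ratios

dim, dim_ratios = cone_pattern(550, intensity=1.0)
bright, bright_ratios = cone_pattern(550, intensity=5.0)
# The absolute responses differ by a factor of 5, but the ratios match,
# which is why brighter 550-nm light still looks yellow rather than a new hue.
print(dim_ratios)
print(bright_ratios)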
Images not available due to copyright restrictions
Furthermore, the three kinds of cones are distributed randomly within the retina (Roorda, Metha, Lennie, & Williams, 2001; Roorda & Williams, 1999). Figure 6.9 shows the distribution of short-, medium-, and long-wavelength cones in two people's retinas, with colors artificially added to distinguish the three cone types. Note how few short-wavelength cones are present. Note also the patches of all medium- or all long-wavelength cones, which vary from one person to another. Some people have more than 10 times as many of one kind as the other over the whole retina. Surprisingly, these variations do not produce any easily measured difference in people's ability to discriminate one color from another (Miyahara, Pokorny, Smith, Baron, & Baron, 1998). Evidently, the brain can compensate for variations in input over a wide range.

That compensation breaks down, however, in the periphery of the retina, where cones are scarce and their connections are somewhat haphazard (Diller et al., 2004; Martin, Lee, White, Solomon, & Rütiger, 2001). Try this: Get someone to mark a colored dot on the tip of your finger, or attach a colored sticker, without telling you the color. Slowly move it from behind your head into your field of vision and then gradually toward your fovea. At what point do you see the color? You will see your finger long before you can identify the color. As a rule, the smaller the dot, the farther you will have to move it into your visual field—that is, the part of the world that you see—before you can identify the color. The Online Try It Yourself exercise "Color Blindness in the Visual Periphery" provides a further illustration.
The Opponent-Process Theory

The trichromatic theory correctly predicted the discovery of three kinds of cones, but it is incomplete as a theory of color vision. For example, try the following demonstration: Pick any point in the top portion of Figure 6.10—such as the tip of the nose—and stare at it under a bright light, without moving your eyes, for a full minute. (The brighter the light and the longer you stare, the stronger the effect.) Then look at a plain white surface, such as a wall or a blank sheet of paper. Keep your eyes steady.
Figure 6.10 Stimulus for demonstrating negative color afterimages. Stare at any point on the face under bright light for about a minute and then look at a white field. You should see a negative afterimage.
You will now see a negative color afterimage, a replacement of the red you had been staring at with green, green with red, yellow and blue with each other, and black and white with each other. To explain this and related phenomena, Ewald Hering, a 19th-century physiologist, proposed the opponent-process theory: We perceive color in terms of paired opposites: red versus green, yellow versus blue, and white versus black (Hurvich & Jameson, 1957). That is, there is no such thing as reddish green, greenish red, or yellowish blue. The brain has some mechanism that perceives color on a continuum from red to green and another from yellow to blue.

Here is one possible mechanism for opponent processes: Consider the bipolar cell diagrammed in Figure 6.11. It is excited by short-wavelength (blue) light and inhibited by both long-wavelength and medium-wavelength light, and especially by a mixture of both, which we see as yellow. An increase in this bipolar cell's activity produces the experience blue and a decrease produces the experience yellow. If short-wavelength (blue) light stimulates this cell long enough, the cell becomes fatigued. If we now substitute white light, the cell is more inhibited than excited, responds less than its baseline level, and therefore produces an experience of yellow. Many neurons from the bipolar cells through the cerebral cortex are excited by one set of wavelengths and inhibited by another (DeValois & Jacobs, 1968; Engel, 1999).
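Here is a minimal sketch of that kind of wiring, with arbitrary made-up weights rather than real synaptic strengths: the cell gets direct excitation from the short-wavelength cone and, through the horizontal cell, inhibition driven by the long- and medium-wavelength cones, and a simple fatigue term stands in for prolonged stimulation.

# A toy sketch of the opponent wiring described for Figure 6.11, using
# invented numbers. Positive output stands for a "blue" signal, negative
# output for a "yellow" signal.

def bipolar_net_response(s_cone, m_cone, l_cone, fatigue=0.0):
    """Net response of the bipolar cell.

    fatigue models prolonged stimulation weakening the direct excitatory
    input, which is one way to restate the negative-afterimage account.
    """
    excitation = (1.0 - fatigue) * s_cone   # direct input from the S cone
    inhibition = 0.5 * (m_cone + l_cone)    # indirect, via the horizontal cell
    return excitation - inhibition

print(bipolar_net_response(s_cone=1.0, m_cone=0.1, l_cone=0.1))  # blue light: net excitation
print(bipolar_net_response(s_cone=0.1, m_cone=1.0, l_cone=1.0))  # yellow light: net inhibition
print(bipolar_net_response(s_cone=1.0, m_cone=1.0, l_cone=1.0))  # white light: about zero
# After staring at blue long enough for fatigue to build up, white light now
# drives the cell below baseline, corresponding to a yellow afterimage.
print(bipolar_net_response(s_cone=1.0, m_cone=1.0, l_cone=1.0, fatigue=0.6))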
Figure 6.11 Possible wiring for one bipolar cell (the diagram shows long-, medium-, and short-wavelength cones making excitatory synapses, a horizontal cell making an inhibitory synapse onto the bipolar cell, and the bipolar cell's output to ganglion cells). Short-wavelength light (which we see as blue) excites the bipolar cell and (by way of the intermediate horizontal cell) also inhibits it. However, the excitation predominates, so blue light produces net excitation. Red, green, or yellow light inhibits this bipolar cell because they produce inhibition (through the horizontal cell) without any excitation. The strongest inhibition is from yellow light, which stimulates both the long- and medium-wavelength cones. Therefore, we can describe this bipolar cell as excited by blue and inhibited by yellow. White light produces as much inhibition as excitation and therefore no net effect.

That explanation of negative color afterimages is appealing because of its simplicity. However, it cannot be the whole story. First, try this: Stare at the x in the following diagram for at least a minute under the brightest light you can find and then look at a brightly lit white page.

X
For the afterimage of the surrounding box, you saw red, as expected from the theory. But what about the circle inside? Theoretically, you should have seen a gray or black afterimage (the opposite of white), but
in fact, if you used a bright enough light, you saw a green afterimage.

Here is another demonstration: First, look at Figure 6.12. Note that although it shows four red quarter-circles, you have the illusion of a whole red square.
Image not available due to copyright restrictions
(Look carefully to convince yourself that it is an illusion.) Now stare at the tiny x in Figure 6.12. Again, to get good results, stare for at least a minute under bright lights. Then look at any white surface. People usually report that the afterimage fluctuates. Sometimes they see four green quarter circles:
And sometimes they see a whole green square (Shimojo, Kamitani, & Nishida, 2001).
Note that a whole green square is the afterimage of an illusion! The red square you “saw” wasn’t really there. This demonstration suggests that afterimages depend at least partly on whatever area of the brain produces the illusion—presumably the cerebral cortex, not the retina itself.
The Retinex Theory

The trichromatic theory and the opponent-process theory have limitations, especially in explaining color constancy. Color constancy is the ability to recognize the color of an object despite changes in lighting (Kennard, Lawden, Morland, & Ruddock, 1995; Zeki, 1980, 1983). If you put on green-tinted glasses or replace your white light bulb with a green one, you will notice the tint, but you still identify bananas as yellow, paper as white, walls as brown (or whatever), and so forth. You do so by comparing the color of one object with the color of another, in effect subtracting a fixed amount of green from each.

Color constancy requires you to compare a variety of objects. To illustrate, examine the top part of Figure 6.13 (Purves & Lotto, 2003). Although different colors of light illuminate the two scenes, you can easily identify which of the little squares are red, yellow, blue, and so forth. Note the effects of removing context. The bottom part shows the squares that looked red in the top part. Note that they no longer look red. Without the context that indicated yellow light or blue light, those on the left look orange and those on the right look purple. (For this reason, we should avoid talking about "red light" or any other color of light. A certain wavelength of light that looks red under normal circumstances can appear to be some other color, or even white or black, depending on the background.)

Similarly, our perception of the brightness of an object requires comparing it with other objects. Examine Figure 6.14 (Purves, Shimpi, & Lotto, 1999). You see what appears to have a gray top and a white bottom. Now cover the center (the border between the top and the bottom) with your fingers. You will notice that the top of the object has exactly the same brightness as the bottom! For additional examples like this, visit this website: http://www.purveslab.net You can also try the Online Try It Yourself exercise "Brightness Contrast."

To account for color and brightness constancy, Edwin Land proposed the retinex theory (a combination of the words retina and cortex): The cortex compares information from various parts of the retina to determine the brightness and color for each area (Land, Hubel, Livingstone, Perry, & Burns, 1983). For example, if the cortex notes a constant amount of green throughout a scene, it subtracts some green from each object to determine its true color. Dale Purves and colleagues have expressed a similar idea in more general terms: Whenever we see anything, we make an inference or construction. For example, when you look at the objects in Figures 6.13 and 6.14, you ask yourself, "On occasions when I have seen something that looked like this, what did it turn out to be?" You go through the same process for perceiving shapes, motion, or anything else: You calculate what objects probably produced the pattern of stimulation you just had (Lotto & Purves, 2002; Purves & Lotto, 2003). That is, visual perception requires a kind of reasoning process, not just retinal stimulation.
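The idea of discounting a shared tint can be illustrated with a toy calculation. The sketch below is not Land's actual retinex algorithm; it simply divides each patch by the scene-wide average of each color channel (a gray-world-style assumption) to show that comparing a patch with the rest of the scene can remove an overall color cast. The RGB values and the scene are invented.

# A toy "discount the shared tint" calculation in the spirit of the color
# constancy discussion, not an implementation of Land's retinex algorithm.

def discount_illuminant(scene_rgb):
    """Divide each patch by the scene-wide average of each channel."""
    n = len(scene_rgb)
    avg = [sum(patch[c] for patch in scene_rgb) / n for c in range(3)]
    return [tuple(patch[c] / avg[c] for c in range(3)) for patch in scene_rgb]

# The same three surfaces seen under neutral light and under greenish light.
neutral_light = [(0.8, 0.8, 0.8), (0.8, 0.2, 0.2), (0.2, 0.2, 0.8)]
greenish_light = [(p[0] * 0.7, p[1] * 1.3, p[2] * 0.7) for p in neutral_light]

# After discounting, corresponding patches come out the same, even though
# the raw inputs under the greenish illuminant were tinted.
print(discount_illuminant(neutral_light))
print(discount_illuminant(greenish_light))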
Figure 6.13 Effects of context on color perception (panels a, b, and c). In each block, we identify certain tiles as looking red. However, after removal of the context, those that appeared red on the left now look orange; those on the right appear purple. (Source: Why We See What We Do, by D. Purves and R. B. Lotto, figure 6.10, p. 134. Copyright 2003 Sinauer Associates, Inc. Reprinted by permission.)
STOP & CHECK
6. Suppose a bipolar cell received excitatory input from medium-wavelength cones and inhibitory input from all three kinds of cones. When it is highly excited, what color would one see? When it is inhibited, what color perception would result?
7. When a television set is off, its screen appears gray. When you watch a program, parts of the screen appear black, even though more light is actually showing on the screen than when the set was off and the screen appeared gray. What accounts for the black perception?
8. Figure 6.8 shows 500 nm light as blue and 550 nm light as yellow. Why should we nevertheless not call them "blue light" and "yellow light"?
Check your answers on page 164.
Image not available due to copyright restrictions
Color Vision Deficiency

Encyclopedias describe many examples of discoveries in astronomy, biology, chemistry, and physics, but what are psychologists' discoveries? You might give that question some thought. One of my nominations is color blindness, better described as color vision deficiency, an impairment in perceiving color differences compared to other people. (Complete color blindness, the inability to perceive anything but shades of black and white, is rare.) Before color vision deficiency was discovered in the 1600s, people assumed that vision copies the objects we see (Fletcher & Voke, 1985). That is, if an object is round, we see the roundness; if it is yellow, we see the yellowness. Investigators discovered that it is possible to have otherwise satisfactory vision without seeing color. That is, color depends on what our brains do with incoming light; it is not a property of the light by itself.

Several types of color vision deficiency occur. For genetic reasons, some people lack one or two of the types of cones. Some have all three kinds, but one kind has abnormal properties (Nathans et al., 1989). In the most common form of color vision deficiency, people have trouble distinguishing red from green because their long- and medium-wavelength cones have the same photopigment instead of different ones. The gene causing this deficiency is on the X chromosome. About 8% of men are red-green color blind compared with less than 1% of women (Bowmaker, 1998).
EXTENSIONS AND APPLICATIONS
People with Four Cone Types

Might anyone have more than three kinds of cones? Apparently, some women do. People vary in the gene controlling the long-wavelength (LW) cone receptor (sensitive mainly in the red end of the spectrum). Consequently, some people's LW receptors have a peak response to a slightly longer wavelength than other people's receptors (Stockman & Sharpe, 1998). The gene controlling this receptor is on the X chromosome, so—because men have only one X chromosome—men have only one of these LW receptor types or the other. However, women have two X chromosomes. In each cell, one X chromosome is activated and the other is inhibited, apparently at random.² Therefore, nearly half of women have both kinds of long-wavelength genes and two kinds of red receptors (Neitz, Kraft, & Neitz, 1998). Several studies have found that such women draw slightly finer color distinctions than other people do. That is, sometimes two objects that seem the same color to other people look different to women with two kinds of LW receptors (Jameson, Highnote, & Wasserman, 2001). This effect is small, however, and emerges only with careful testing.

For more information about the retina and vision in general, this site provides an excellent treatment: http://www.webvision.med.utah.edu

STOP & CHECK

9. As mentioned on page 158, most people can use varying amounts of three colors to match any other color that they see. Who would be an exception to this rule, and how many would they need?
Check your answer on page 164.
² There is a good reason one X chromosome per cell is inhibited. The genes on any chromosome produce proteins. If both X chromosomes were active in women, then either women would be getting an overdose of the X-related proteins or men would be getting an underdose. Because only one X chromosome is active per cell in women, both men and women get the same dose (Arnold, 2004).
Module 6.1 In Closing: Visual Receptors

I remember once explaining to my then-teenage son a newly discovered detail about the visual system, only to have him reply, "I didn't realize it would be so complicated. I thought the light strikes your eyes and then you see it." As you should now be starting to realize—and if not, the next module should convince you—vision requires extremely complicated processing. If you tried to build a robot with vision, you would quickly discover that shining light into its eyes accomplishes nothing unless its visual detectors are connected to devices that identify the useful information and use it to select the proper action. We have such devices in our brains, although we are still far from fully understanding them.
Summary
1. Sensory information is coded so that the brain can process it. The coded information bears no physical similarity to the stimuli it describes. (p. 152)
2. According to the law of specific nerve energies, the brain interprets any activity of a given sensory neuron as representing the sensory information to which that neuron is tuned. (p. 152)
3. Light passes through the pupil of a vertebrate eye and stimulates the receptors lining the retina at the back of the eye. (p. 153)
4. The axons from the retina loop around to form the optic nerve, which exits from the eye at a point called the blind spot. (p. 154)
5. Visual acuity is greatest in the fovea, the central area of the retina. (p. 155)
6. Because so many receptors in the periphery converge their messages to their bipolar cells, our peripheral vision is highly sensitive to faint light but poorly sensitive to detail. (p. 155)
7. The retina has two kinds of receptors: rods and cones. Rods are more sensitive to faint light; cones are more useful in bright light. Rods are more numerous in the periphery of the eye; cones are more numerous in the fovea. (p. 156)
8. Light stimulates the receptors by triggering a molecular change in 11-cis-retinal, releasing energy, and thereby activating second messengers within the cell. (p. 156)
9. According to the trichromatic (or Young-Helmholtz) theory of color vision, color perception begins with a given wavelength of light stimulating a distinctive ratio of responses by the three types of cones. (p. 158)
10. According to the opponent-process theory of color vision, visual system neurons beyond the receptors themselves respond with an increase in activity to indicate one color of light and a decrease to indicate the opposite color. The three pairs of opposites are red–green, yellow–blue, and white–black. (p. 159)
11. According to the retinex theory, the cortex compares the responses representing different parts of the retina to determine the brightness and color of each area. (p. 161)
12. For genetic reasons, certain people are unable to distinguish one color from another. Red-green color blindness is the most common type. (p. 163)
Answers to STOP & CHECK Questions
1. Your perceptions would not change. The way visual or auditory information is coded in the brain does not depend on the physical location within the brain. That is, seeing something as "on top" or "to the left" depends on which neurons are active but does not depend on the physical location of those neurons. (p. 153)
2. Because of the law of specific nerve energies, you would perceive it as sound, not as shock. (p. 153)
3. The blind spot has no receptors because it is occupied by exiting axons and blood vessels. (p. 155)
4. If you look slightly to the side, the light falls on an area of the retina that has rods, which are more sensitive to faint light. That portion of the retina also has more convergence of input, which magnifies sensitivity to faint light. (p. 157)
5. We should expect this species to be highly active during the day and seldom active at night. (p. 157)
6. Excitation of this cell should yield a perception of green under normal circumstances. Inhibition would produce the opposite sensation, red. (p. 162)
7. The black experience arises by contrast with the other brighter areas. The contrast occurs by comparison within the cerebral cortex, as in the retinex theory of color vision. (p. 162)
8. Color perception depends not just on the wavelength of light from a given spot but also the light from surrounding areas. As in Figure 6.13, the context can change the color perception. (p. 162)
9. Red-green color-blind people would need only two colors. People with four kinds of cones might need four. (p. 163)
Thought Question
How could you test for the presence of color vision in a bee? Examining the retina does not help; invertebrate receptors resemble neither rods nor cones. It is possible to train bees to approach one visual stimulus and not another. The difficulty is that if you trained some bees to approach, say, a yellow card and not a green card, you do not know whether they solved the problem by color or by brightness. Because brightness is different from physical intensity, you cannot assume that two colors equally bright to humans are also equally bright to bees. How might you get around the problem of brightness to test color vision in bees?
Module 6.2
The Neural Basis of Visual Perception
Long ago, people assumed that anyone who saw an object at all saw everything about it, including its shape, color, and movement. The discovery of color blindness was a huge surprise in its time, although not today. However, you may be surprised—as were late 20th-century psychologists—by the analogous phenomenon of motion blindness: Some people with otherwise satisfactory vision fail to see that an object is moving, or at least have great trouble determining its direction and speed. "How could anyone not see the movement?" you might ask. Your question is not very different from the question raised in the 1600s: "How could anyone see something without seeing the color?" The fundamental fact about the visual cortex takes a little getting used to and therefore bears repeating: You have no central processor that sees every aspect of a visual stimulus at once. Different parts of the cortex process separate aspects somewhat independently of one another.
An Overview of the Mammalian Visual System
Let's begin with a general outline of the anatomy of the mammalian visual system and then examine certain stages in more detail. The structure and organization are much the same across individuals and even across species, differing mainly in quantitative ways, although those quantitative differences are larger than we might have supposed. Some people have two or three times as many axons in their optic nerve as others do, and correspondingly more cells in their visual cortex (Andrews, Halpern, & Purves, 1997; Stevens, 2001; Sur & Leamey, 2001), and greater ability to detect brief, faint, or rapidly changing visual stimuli (Halpern, Andrews, & Purves, 1999). The rods and cones of the retina make synaptic contact with horizontal cells and bipolar cells (see Figures 6.2 and 6.15). The horizontal cells make inhibitory contact onto bipolar cells, which in turn make synapses onto amacrine cells and ganglion cells.
All these cells are within the eyeball. Even at this beginning stage, different cells are specialized for different visual functions, and the same is true at each succeeding stage. The axons of the ganglion cells form the optic nerve, which leaves the retina and travels along the lower surface of the brain. The optic nerves from the two eyes meet at the optic chiasm (Figure 6.16a on page 168), where, in humans, half of the axons from each eye cross to the opposite side of the brain. As shown in Figure 6.16b, information from the nasal half of each eye crosses to the contralateral hemisphere. Information from the other half goes to the ipsilateral hemisphere. The percentage of crossover varies from one species to another depending on the location of the eyes. In species with eyes far to the sides of the head, such as rabbits and guinea pigs, nearly all the axons cross to the opposite side. Most of the ganglion cell axons go to the lateral geniculate nucleus, a nucleus of the thalamus specialized for visual perception. (The term geniculate comes from the Latin root genu, meaning “knee.” To genuflect is to bend the knee. The lateral geniculate looks a little like a knee, if you use some imagination.) A smaller number of axons go to the superior colliculus, and even fewer go to several other areas, including part of the hypothalamus that controls the waking–sleeping schedule (see Chapter 9). At any rate, most of the optic nerve goes to the lateral geniculate, which in turn sends axons to other parts of the thalamus and to the visual areas of the occipital cortex. The cortex returns many axons to the thalamus, so the thalamus and cortex constantly feed back and forth to modify each other’s activity (Guillery, Feig, & van Lieshout, 2001).
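The crossing rule at the optic chiasm can be restated compactly. The short Python sketch below is not from the text; it simply encodes the routing just described for the human case (axons from the nasal half of each retina cross, axons from the temporal half stay on the same side). The function and its names are hypothetical conveniences for illustration.

```python
# Routing at the optic chiasm in humans: axons from the nasal half of each retina
# cross to the opposite hemisphere; axons from the temporal half stay on the same
# side. (In species with laterally placed eyes, such as rabbits, nearly all axons
# cross, so this rule is specific to the human case described in the text.)

def hemisphere_receiving(eye, retinal_half):
    """eye: 'left' or 'right'; retinal_half: 'nasal' or 'temporal'."""
    if retinal_half == "nasal":
        return "right" if eye == "left" else "left"   # crosses at the chiasm
    return eye                                         # stays ipsilateral

for eye in ("left", "right"):
    for half in ("nasal", "temporal"):
        print(f"{eye} eye, {half} retina -> {hemisphere_receiving(eye, half)} hemisphere")
```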
STOP & CHECK
1. Where does the optic nerve start and where does it end? Check your answer on page 183.
Figure 6.15 The vertebrate retina. (b) Photo of a cross-section through the retina. This section from the periphery of the retina has relatively few ganglion cells; a slice closer to the fovea would have a greater density.

Processing in the Retina
The human retina contains roughly 120 million rods and 6 million cones. We cannot intelligently process that many independent messages; we need to extract the meaningful patterns, such as the edge between one object and the next. To do so, we have cells that respond to particular patterns of visual information. The responses of any cell in the visual system depend on the excitatory and inhibitory messages it receives. To understand this idea, let's explore one example in detail. A good, well-understood example is lateral inhibition, which occurs in the first cells of the retina.
Lateral inhibition is the retina's way of sharpening contrasts to emphasize the border between one object and the next. We begin with the rods and cones. They have spontaneous levels of activity, and light striking them actually decreases their output. They have inhibitory synapses onto the bipolar cells, and therefore, light decreases their inhibitory output. Because you and I have trouble thinking in double negatives, for simplicity's sake, let's think of their output as excitation of the bipolar cells. In the fovea, each cone attaches to just one bipolar cell. Outside the fovea, larger numbers connect to each bipolar cell, as shown in Figure 6.2, p. 154. We'll consider the simplest case of a cone in the fovea connected to just one bipolar. In the following diagram, the green arrows represent excitation.
Now let’s add the next element, the horizontal cells. Each receptor excites a horizontal cell, which inhibits the bipolar cells. Because the horizontal cell spreads widely, excitation of any receptor inhibits many bipolar cells. However, because the horizontal cell is a local cell, with no axon and no action potentials, its depolarization decays with distance. Mild excitation of, say, receptor 8 excites the bipolar cell to which it connects, bipolar cell 8. Receptor 8 also stimulates the horizontal cell to inhibit bipolars 7 and 9 strongly, bipolars 6 and 10 a bit less, and so on. The result is that bipolar cell 8 shows net excitation; the excitatory synapse here outweighs the effect of the horizontal cell’s inhibition. However, the bipolar cells to either side (laterally) get no excitation but some inhibition by the horizontal cell. Bipolars 7 and 9 are strongly inhibited, so their activity falls well below their spontaneous level. Bipolars 6 and 10 are inhibited somewhat less, so their activity decreases a bit less. In this diagram, green arrows represent excitation from bipolar cells; red arrows represent inhibition from the horizontal cell.
Figure 6.16 Major connections in the visual system of the brain (a) Part of the visual input goes to the thalamus and from there to the visual cortex. Another part of the visual input goes to the superior colliculus. (b) Axons from the retina maintain their relationship to one another—what we call their retinotopic organization—throughout their journey from the retina to the lateral geniculate and then from the lateral geniculate to the cortex.
Now imagine what happens if light excites receptors 6–10. These receptors excite bipolar cells 6–10 and the horizontal cell. So bipolar cells 6–10 receive both excitation and inhibition. The excitation from the receptors is stronger than the inhibition from the horizontal cell, so bipolars 6–10 receive net excitation. However, they do not all receive the same amount of inhibition. Remember, the response of the horizontal cell decays over distance. Bipolar cells 7, 8, and 9 are inhibited because of input on both their sides, but bipolar cells 6 and 10 are each inhibited from one side and not the other. That is, the bipolar cells on the edge of the excitation are inhibited less than those in the middle. Therefore, the overall result is that bipolar cells 6 and 10 respond more than bipolars 7–9. Now think about bipolar cell 5. What excitation does it receive? None. However, it is inhibited by the horizontal cell because of the excitation of receptors 6 and 7. Therefore, bipolar 5, receiving inhibition but no excitation, responds even less than bipolars 1–4.
These results illustrate lateral inhibition, the reduction of activity in one neuron by activity in neighboring neurons (Hartline, 1949). The main function of lateral inhibition is to heighten the contrasts. When light falls on a surface, as shown here, the bipolars just inside the border are most excited, and those outside the border are least responsive.
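The arithmetic behind lateral inhibition is easy to simulate. The Python sketch below is not from the text; the numbers (10 units of excitation per lit receptor, inhibition that starts at 2 units and halves with each step of distance) are arbitrary illustrative assumptions, chosen only to reproduce the pattern described above when receptors 6–10 are illuminated.

```python
# Toy simulation of lateral inhibition among receptors 1-15 (illustrative numbers only).
# Each lit receptor excites "its own" bipolar cell; every lit receptor also drives the
# horizontal cell, whose inhibition spreads to neighboring bipolar cells and decays
# with distance. Negative values mean activity below the spontaneous baseline.

N = 15
lit = set(range(6, 11))      # light falls on receptors 6-10
EXCITATION = 10.0            # excitation a lit receptor sends to its own bipolar cell
INHIBITION = 2.0             # horizontal-cell inhibition at distance zero
DECAY = 0.5                  # inhibition halves with each step of distance

def bipolar_response(i):
    """Net response of bipolar cell i: direct excitation minus spreading inhibition."""
    excitation = EXCITATION if i in lit else 0.0
    inhibition = sum(INHIBITION * DECAY ** abs(i - j) for j in lit)
    return excitation - inhibition

for i in range(1, N + 1):
    print(f"bipolar {i:2d}: {bipolar_response(i):7.3f}")

# Output pattern: bipolar cells 6 and 10 (just inside the lit border) respond most,
# 7-9 respond a bit less, and 5 and 11 (just outside the border) respond least of all.
```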
STOP & CHECK
2. When light strikes a receptor, what effect does the receptor have on the bipolar cells (excitatory or inhibitory)? What effect does it have on horizontal cells? What effect does the horizontal cell have on bipolar cells?
3. If light strikes only one receptor, what is the net effect (excitatory or inhibitory) on the nearest bipolar cell that is directly connected to that receptor? What is the effect on bipolar cells off to the sides? What causes that effect?
4. Examine Figure 6.17. You should see grayish diamonds at the crossroads among the black squares. Explain why.
Check your answers on page 183.
Figure 6.17 An illustration of lateral inhibition Do you see dark diamonds at the “crossroads”?
Pathways to the Lateral Geniculate and Beyond
Look out your window. Perhaps you see someone walking by. Although your perception of that person seems to be an integrated whole, different parts of your brain are analyzing different aspects. One set of neurons identifies the person's shape, another set concentrates on the colors, and another sees the speed and direction of movement (Livingstone, 1988; Livingstone & Hubel, 1988; Zeki & Shipp, 1988). Your visual pathway begins its division of labor long before it reaches the cerebral cortex. Even at the level of the ganglion cells in the retina, different cells react differently to the same input. Each cell in the visual system of the brain has what we call a receptive field, which is the part of the visual field that either excites it or inhibits it. For a receptor, the receptive field is simply the point in space from which light strikes the receptor. Other visual cells derive their receptive fields from the pattern of excitatory and inhibitory connections to them. For example, a ganglion cell is connected to a group of bipolar cells, which in turn are connected to receptors, so the receptive field of the ganglion cell is the combined receptive fields of those receptors, as shown in Figure 6.18.
Figure 6.18 Receptive fields The receptive field of a receptor is simply the area of the visual field from which light strikes that receptor. For any other cell in the visual system, the receptive field is determined by which receptors connect to the cell in question.
The receptive fields of the ganglion cells converge to form the receptive fields of the next level of cells and so on. To find a receptive field, an investigator can shine light in various locations while recording from a neuron. If light from a particular spot excites the neuron, then that location is part of the neuron's excitatory receptive field. If it inhibits activity, the location is in the inhibitory receptive field. The receptive field of a ganglion cell can be described as a circular center with an antagonistic doughnut-shaped surround. That is, light in the center of the receptive field might be excitatory, with the surround inhibitory, or the opposite.
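As a rough sketch (not from the text), an on-center ganglion cell can be caricatured as adding up the light falling on a small central region and subtracting the light falling on the surrounding ring. The positions and weights below are arbitrary; the point is only that a small spot confined to the center drives the cell strongly, whereas uniform light covering both center and surround largely cancels out.

```python
# Caricature of an on-center, off-surround ganglion cell on a 1-D strip of retina.
# Positions 4-6 form the excitatory center; positions 1-3 and 7-9 form the surround.
# The weights (1.0 and 0.5) are illustrative values, not measured quantities.

CENTER = {4, 5, 6}
SURROUND = {1, 2, 3, 7, 8, 9}

def ganglion_response(lit_positions):
    """Light in the center adds to the response; light in the surround subtracts."""
    excitation = sum(1.0 for p in lit_positions if p in CENTER)
    inhibition = sum(0.5 for p in lit_positions if p in SURROUND)
    return excitation - inhibition

print("small spot in the center:     ", ganglion_response({5}))             # 1.0, strong
print("uniform light over the field: ", ganglion_response(set(range(10))))  # 0.0, cancels out
print("ring of light on the surround:", ganglion_response(SURROUND))        # -3.0, suppressed
```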
Nearly all primate ganglion cells fall into three major categories: parvocellular, magnocellular, and koniocellular (Shapley, 1995). The parvocellular neurons, with smaller cell bodies and small receptive fields, are located mostly in or near the fovea. (Parvocellular means "small celled," from the Latin root parv, meaning "small.") The magnocellular neurons, with larger cell bodies and receptive fields, are distributed fairly evenly throughout the retina. (Magnocellular means "large celled," from the Latin root magn, meaning "large." The same root appears in magnify and magnificent.) The koniocellular neurons have small cell bodies, similar to the parvocellular neurons, but they occur throughout the retina instead of being clustered near the fovea. (Koniocellular means "dust celled," from the Greek root meaning "dust." They got this name because of their granular appearance.) The parvocellular neurons, with their small receptive fields, are well suited to detect visual details. They are also responsive to color, each neuron being excited by some wavelengths and inhibited by others. The high sensitivity to detail and color reflects the fact that parvocellular cells are located mostly in and near the fovea, where we have many cones. Parvocellular neurons connect only to the lateral geniculate nucleus of the thalamus. The magnocellular neurons, in contrast, have larger receptive fields and are not color sensitive. They respond strongly to moving stimuli and to large overall patterns but not to details. Magnocellular neurons are found throughout the retina, including the periphery, where we are sensitive to movement but not to color or details. Most magnocellular neurons connect to the lateral geniculate nucleus, but a few have connections to other visual areas of the thalamus. Koniocellular neurons have several kinds of functions, and their axons terminate in several locations (Hendry & Reid, 2000). Various types of koniocellular neurons connect to the lateral geniculate nucleus, other parts of the thalamus, and the superior colliculus. The existence of so many kinds of ganglion cells implies that the visual system analyzes information in several ways from the start. Table 6.2 summarizes the three kinds of primate ganglion cells. The axons from the ganglion cells form the optic nerve, which exits the eye at the blind spot and proceeds to the optic chiasm, where half of the axons (in humans) cross to the opposite hemisphere. Whereas the retina has more than 120 million receptors, their responses converge onto about 1 million axons in each optic nerve. Although that may sound like a loss of information, a million axons per eye still conveys an enormous amount of information—estimated to be more than one-third of all the information reaching the brain from all sources combined (Bruesch & Arey, 1942). Most axons of the optic nerve go to the lateral geniculate nucleus of the thalamus.
Table 6.2 Three Kinds of Primate Ganglion Cells
Characteristic | Parvocellular Neurons | Magnocellular Neurons | Koniocellular Neurons
Cell bodies | Smaller | Larger | Small
Receptive fields | Smaller | Larger | Mostly small; variable
Retinal location | In and near fovea | Throughout the retina | Throughout the retina
Color sensitive | Yes | No | Some are
Respond to | Detailed analysis of stationary objects | Movement and broad outlines of shape | Varied and not yet fully described
Cells of the lateral geniculate have receptive fields that resemble those of the ganglion cells—either an excitatory or an inhibitory central portion and a surrounding ring of the opposite effect. Again, some have large (magnocellular) receptive fields, and others have small (parvocellular) fields. However, after the information reaches the cerebral cortex, the receptive fields begin to become more specialized and complicated.
Pattern Recognition in the Cerebral Cortex
Most visual information from the lateral geniculate nucleus of the thalamus goes to the primary visual cortex in the occipital cortex, also known as area V1 or the striate cortex because of its striped appearance. It is the area of the cortex responsible for the first stage of visual processing. Area V1 is apparently essential for most, if not all, conscious vision. If you close your eyes and imagine a visual scene, activity increases in area V1 (Kosslyn & Thompson, 2003). People who become blind because of eye damage continue having visual dreams if their visual cortex is intact. However, people with extensive damage to the visual cortex report no conscious vision, no visual imagination, and no visual images in their dreams (Hurovitz, Dunn, Domhoff, & Fiss, 1999). Nevertheless, some people with extensive area V1 damage show a surprising phenomenon called blindsight, an ability to respond in some ways to visual information that they report not seeing. If a light flashes within an area where they report no vision, they can nevertheless point to it or turn their eyes toward it, while insisting that they saw nothing and are only guessing (Bridgeman & Staggs, 1982; Weiskrantz, Warrington, Sanders, & Marshall, 1974). The explanation remains controversial. After damage to area V1, other branches of the optic nerve deliver visual information to the superior colliculus (in the midbrain) and several other areas, including parts of the cerebral cortex (see Figure 6.16a). Perhaps those areas control the blindsight responses (Cowey & Stoerig, 1995; Moore, Rodman, Repp, & Gross, 1995). However,
many people with area V1 damage do not show blindsight or show it for stimuli only in certain locations (Schärli, Harman, & Hogben, 1999; Wessinger, Fendrich, & Gazzaniga, 1997). An alternative explanation is that tiny islands of healthy tissue remain within an otherwise damaged visual cortex, not large enough to provide conscious perception but nevertheless enough for blindsight (Fendrich, Wessinger, & Gazzaniga, 1992). Some patients experience blindsight after extensive (but not total) damage to the optic nerve, so the "surviving island" theory does appear valid for some cases (Wüst, Kasten, & Sabel, 2002). Perhaps both hypotheses are correct. In some patients, a small amount of recordable activity in area V1 accompanies blindsight, supporting the "islands" explanation. In other patients, no activity in V1 is apparent, and blindsight evidently depends on other brain areas (Morland, Lê, Carroll, Hofmann, & Pambakian, 2004). In one study, experimenters temporarily suppressed a wide area of the visual cortex by transcranial magnetic stimulation (see page 110). Although people were not aware of a spot flashed on the screen during the period of suppression, the spot influenced their eye movements (Ro, Shelton, Lee, & Chang, 2004). That result also indicates that activity outside V1 can produce visually guided behavior. Still, all these blindsight responses occur without consciousness. The conclusion remains that conscious visual perception requires strong activity in area V1.
Pathways in the Visual Cortex
The primary visual cortex sends information to the secondary visual cortex (area V2), which processes the information further and transmits it to additional areas, as shown in Figure 6.19. The connections in the visual cortex are reciprocal; for example, V1 sends information to V2 and V2 returns information to V1. Neuroscientists have distinguished 30 to 40 visual areas in the brain of a macaque monkey (Van Essen & Deyoe, 1995) and believe that the human brain has even more.
Figure 6.19 Three visual pathways in the monkey cerebral cortex (a) A pathway originating mainly from magnocellular neurons. (b) A mixed magnocellular/parvocellular pathway. (c) A mainly parvocellular pathway. Neurons are only sparsely connected with neurons of other pathways. (Sources: Based on DeYoe, Felleman, Van Essen, & McClendon, 1994; Ts’o & Roe, 1995; Van Essen & DeYoe, 1995)
Within the cerebral cortex, a ventral pathway, with mostly parvocellular input, is sensitive to details of shape. Another ventral pathway, with mostly magnocellular input, is sensitive to movement. Still another, with mixed input, is sensitive mainly to brightness and color, with partial sensitivity to shape (E. N. Johnson, Hawken, & Shapley, 2001). Note in Figure 6.19 that although the shape, movement, and color/brightness pathways are separate, they all lead to the temporal cortex. The path into the parietal cortex, with mostly magnocellular input, integrates vision with movement. Researchers refer collectively to the visual paths in the temporal cortex as
the ventral stream, or the “what” pathway, because it is specialized for identifying and recognizing objects. The visual path in the parietal cortex is the dorsal stream, or the “where” or “how” pathway, because it helps the motor system find objects and determine how to move toward them, grasp them, and so forth. Don’t imagine a 100% division of labor; cells in both streams have some responsiveness to shape (“what”), for example (Denys et al., 2004). People who have damage to the ventral stream (temporal cortex) cannot fully describe the size or shape of what they see. They are also impaired in their ability to imagine shapes and faces—for example, to recall from memory whether George Washington had a beard (Kosslyn, Ganis, & Thompson, 2001). Activity in the ventral stream correlates highly with people’s ability to report conscious visual perception. However, even when the ventral stream is damaged or inactive, the dorsal “where” stream can still respond strongly (Fang & He, 2005). Sometimes people with ventral stream damage reach toward objects, or walk around objects in their path, without being able to name or describe the objects. In short, they see “where” but not “what.” In contrast, people with damage to the dorsal stream (parietal cortex) can accurately describe what they see, but they cannot convert their vision into action. They cannot accurately reach out to grasp an object, even after describing its size, shape, and color (Goodale, 1996; Goodale, Milner, Jakobson, & Carey, 1991). They are also unable to imagine locations or describe them from memory—for example, to describe the rooms of a house or the arrangement of furniture in any room (Kosslyn et al., 2001).
STOP & CHECK
5. As we progress from bipolar cells to ganglion cells to later cells in the visual system, are receptive fields ordinarily larger, smaller, or the same size? Why?
6. What are the differences between the magnocellular and parvocellular systems?
7. If you were in a darkened room and researchers wanted to "read your mind" just enough to know whether you were having visual fantasies, what could they do?
8. What is an example of an "unconscious" visually guided behavior?
9. Suppose someone can describe an object in detail but stumbles and fumbles when trying to walk toward it and pick it up. Which path is probably damaged, the dorsal path or the ventral path?
Check your answers on page 183.
For additional detail about the visual cortex, check this website: http://www.webvision.med.utah.edu/VisualCortex.html#introduction
The Shape Pathway
In the 1950s, David Hubel and Torsten Wiesel (1959) began a research project in which they shone light patterns on the retina while recording from cells in a cat's or monkey's brain (Methods 6.1). At first, they presented just dots of light, using a slide projector and a screen, and found little response by cortical cells. The first time they got a big response was when they were moving a slide into place. They quickly realized that the cell was responding to the edge of the slide and had a bar-shaped receptive field (Hubel & Wiesel, 1998). Their research, for which they received a Nobel Prize, has often been called "the research that launched a thousand microelectrodes" because it inspired so much further research. By now, it has probably launched a million microelectrodes. Hubel and Wiesel distinguished several types of cells in the visual cortex. The receptive fields shown in Figure 6.20 are typical of simple cells, which are found exclusively in the primary visual cortex. The receptive field of a simple cell has fixed excitatory and inhibitory zones. The more light shines in the excitatory zone, the more the cell responds. The more light shines in the inhibitory zone, the less the cell responds. For example, Figure 6.20c shows a vertical receptive field for a simple cell. The cell's response decreases sharply if the bar of light is moved to the left or right or tilted from the vertical because light then strikes the inhibitory regions as well (Figure 6.21). Most simple cells have bar-shaped or edge-shaped receptive fields, which may be at vertical, horizontal, or intermediate orientations. The vertical and horizontal orientations outnumber the diagonals, and that disparity probably makes sense, considering the importance of horizontal and vertical objects in our world (Coppola, Purves, McCoy, & Purves, 1998).
Figure 6.20 Typical receptive fields for simple visual cortex cells of cats and monkeys Areas marked with a plus (+) are the excitatory receptive fields; areas marked with a minus (–) are the inhibitory receptive fields. (Source: Based on Hubel & Wiesel, 1959)
Unlike simple cells, complex cells, located in either area V1 or V2, have receptive fields that cannot be mapped into fixed excitatory and inhibitory zones. A complex cell responds to a pattern of light in a particular orientation (e.g., a vertical bar) anywhere within its large receptive field, regardless of the exact location of the stimulus (Figure 6.22). It responds most strongly to a stimulus moving perpendicular to its axis—for example, a vertical bar moving horizontally or a horizontal bar moving vertically. If a cell in the visual cortex responds to a bar-shaped pattern of light, the best way to classify the cell is to move the bar slightly in different directions. A cell that responds to the light in only one location is a simple cell; one that responds strongly to the light throughout a large area is a complex cell. Researchers for decades assumed that complex cells receive input from a combination of simple cells, and eventually, it became possible to demonstrate this point.
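A common way to picture this distinction (a textbook caricature rather than a claim about the real circuitry, and not part of the original text) is that a complex cell behaves as if it pooled several simple cells that prefer the same orientation at different positions. The sketch below encodes a tiny image as 0s and 1s; the "simple cell" matches a vertical bar at one fixed location, while the "complex cell" takes the best match over all locations. The inhibitory flanks of real simple cells are omitted for brevity.

```python
# Toy simple vs. complex cell: both prefer a vertical bar, but the simple cell
# cares where the bar falls and the complex cell does not.

BAR = [(0, 0), (1, 0), (2, 0)]   # a 3-pixel vertical bar template (row, column offsets)

def simple_cell(image, top_row, left_col):
    """Response of a simple cell whose excitatory zone sits at one fixed location."""
    return sum(image[top_row + r][left_col + c] for r, c in BAR)

def complex_cell(image):
    """Response of a complex cell: the best simple-cell response over all positions."""
    rows, cols = len(image), len(image[0])
    return max(simple_cell(image, r, c)
               for r in range(rows - 2) for c in range(cols))

# A 5x5 image containing a vertical bar in column 3.
image = [[0, 0, 0, 1, 0],
         [0, 0, 0, 1, 0],
         [0, 0, 0, 1, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]]

print("simple cell tuned to column 1:", simple_cell(image, 0, 1))  # 0: bar is in the wrong place
print("simple cell tuned to column 3:", simple_cell(image, 0, 3))  # 3: bar falls on its zone
print("complex cell:", complex_cell(image))                        # 3: responds wherever the bar is
```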
METHODS 6.1
Microelectrode Recordings
David Hubel and Torsten Wiesel pioneered the use of microelectrode recordings to study the properties of individual neurons in the cerebral cortex. In this method, investigators begin by anesthetizing an animal and drilling a small hole in the skull. Then they insert a thin electrode—either a fine metal wire insulated except at the tip or a narrow glass tube containing a salt solution and a metal wire. They direct the electrode either next to or into a single cell and then record its activity while they present various stimuli, such as patterns of light. Researchers use the results to determine what kinds of stimuli do and do not excite the cell.
Researchers used the inhibitory transmitter GABA to block input from the lateral geniculate to the simple cells, and they found that as soon as the simple cells stopped responding, the complex cells stopped too (Martinez & Alonso, 2001).
End-stopped, or hypercomplex, cells resemble complex cells with one additional feature: An end-stopped cell has a strong inhibitory area at one end of its bar-shaped receptive field. The cell responds to a bar-shaped pattern of light anywhere in its broad receptive field provided that the bar does not extend beyond a certain point (Figure 6.23). Table 6.3 summarizes the properties of simple, complex, and end-stopped cells.
Figure 6.22 The receptive field of a complex cell in the visual cortex Like a simple cell, its response depends on a bar of light's angle of orientation. However, a complex cell responds the same for a bar in any position within the receptive field.
Figure 6.23 The receptive field of an end-stopped cell The cell responds to a bar in a particular orientation (in this case, horizontal) anywhere in its receptive field provided that the bar does not extend into a strongly inhibitory area.
Table 6.3 Summary of Cells in the Primary Visual Cortex
Characteristic | Simple Cells | Complex Cells | End-Stopped Cells
Location | V1 | V1 and V2 | V1 and V2
Binocular input | Yes | Yes | Yes
Size of receptive field | Smallest | Medium | Largest
Receptive field | Bar- or edge-shaped, with fixed excitatory and inhibitory zones | Bar- or edge-shaped, without fixed excitatory or inhibitory zones; responds to stimulus anywhere in receptive field, especially if moving perpendicular to its axis | Same as complex cell but with strong inhibitory zone at one end
The Columnar Organization of the Visual Cortex
Cells having various properties are grouped together in the visual cortex in columns perpendicular to the surface (Hubel & Wiesel, 1977) (see Figure 4.22, p. 98). For example, cells within a given column respond either mostly to the left eye, mostly to the right eye, or to both eyes about equally. Also, cells within a given column respond best to lines of a single orientation. Figure 6.24 shows what happens when an investigator lowers an electrode into the visual cortex and records from each cell that it reaches. Each red line represents a neuron and shows the angle of orientation of its receptive field. In electrode path A, the first series of cells are all in one column and show the same orientation preferences. However, after passing through the white matter, the end of path A invades two columns with different preferred orientations. Electrode path B, which is not perpendicular to the surface of the cortex, crosses through three columns and encounters cells with different properties. In short, the cells within a given column process similar information.
Are Visual Cortex Cells Feature Detectors?
Given that neurons in area V1 respond strongly to bar- or edge-shaped patterns, it seems natural to suppose that the activity of such a cell is (or at least is necessary for) the perception of a bar, line, or edge. That is, such cells might be feature detectors—neurons whose responses indicate the presence of a particular feature. Cells in later areas of the cortex respond to more complex shapes, and perhaps they are square detectors, circle detectors, and so forth. Supporting the concept of feature detectors is the fact that prolonged exposure to a given visual feature decreases sensitivity to that feature, as if one has fatigued the relevant detectors. For example, if you stare at a waterfall for a minute or more and then look away, the rocks and trees next to the waterfall appear to be flowing upward. This effect, the waterfall illusion, suggests that you have fatigued the neurons that detect downward motion, leaving unopposed the detectors that detect the opposite motion. You can see the same effect if you watch your computer screen scroll slowly for about a minute and then examine an unmoving display. However, just as a medium-wavelength cone responds somewhat to the whole range of wavelengths, a cortical cell that responds best to one stimulus also responds to many others. Any object stimulates a large population of cells, and any cell in the visual system responds somewhat to many stimuli (Tsunoda, Yamane, Nishizaki, & Tanifuji, 2001). The response of any cell is ambiguous unless it is compared to the responses of other cells.
Figure 6.24 Columns of neurons in the visual cortex When an electrode passes perpendicular to the surface of the cortex (first part of A), it encounters a sequence of neurons responsive to the same orientation of a stimulus. (The colored lines show the preferred stimulus orientation for each cell.) When an electrode passes across columns (B, or second part of A), it encounters neurons responsive to different orientations. Column borders are shown here to make the point clear; no such borders are visible in the real cortex. (Source: From “The visual cortex of the brain,” by David H. Hubel, November 1963, Scientific American, 209, 5, p. 62. Copyright © Scientific American.)
Furthermore, Hubel and Wiesel tested only a limited range of stimuli. Later researchers have tried other kinds of stimuli and found that a cortical cell that responds well to a single bar or line also responds, generally even more strongly, to a sine wave grating of bars or lines. Different cortical neurons respond best to gratings of different spatial frequencies (i.e., wide bars or narrow bars), and many are very precisely tuned; that is, they respond strongly to one frequency and hardly at all to a slightly higher or lower frequency (DeValois, Albrecht, & Thorell, 1982). Most visual researchers therefore believe that neurons in area V1 respond to spatial frequencies rather than to bars or edges.
How do we translate a series of spatial frequencies into perception? From a mathematical standpoint, sine wave spatial frequencies are easy to work with. In a branch of mathematics called Fourier analysis, it can be demonstrated that a combination of sine waves can produce an unlimited variety of other more complicated patterns. For example, a fairly complicated waveform can be constructed as the sum of just five sine waves. Therefore, a series of spatial frequency detectors, some sensitive to horizontal patterns and others to vertical patterns, could represent anything anyone could see. Still, we obviously do not perceive the world as an assembly of sine waves, and the emergence of object perception remains a puzzle (Hughes, Nozawa, & Kitterle, 1996). Indeed, the activities of areas V1 and V2 are probably preliminary steps that organize visual material and send it to more specialized areas that actually identify objects (Lennie, 1998).
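The Fourier idea is easy to demonstrate numerically. The sketch below is not from the text; it assumes a particular, standard choice of five sine waves (the first five odd harmonics, each weighted by the reciprocal of its frequency), whose sum already approximates a square-wave pattern of alternating light and dark bars.

```python
# Summing five sine waves to approximate a square-wave grating (Fourier idea).
import math

def five_term_square_wave(x):
    """Sum of the first five odd harmonics of sin(x), each weighted 1/k."""
    return sum(math.sin(k * x) / k for k in (1, 3, 5, 7, 9))

# Crude text plot of one cycle of the summed waveform.
for step in range(32):
    x = 2 * math.pi * step / 32
    y = five_term_square_wave(x)          # stays roughly between -0.9 and +0.9
    bar = "#" * int((y + 1.0) * 20)       # shift and scale so the bar length is positive
    print(f"{x:5.2f} {y:+.2f} {bar}")
```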
Shape Analysis Beyond Area V1
As visual information goes from the simple cells to the complex cells and then on to later areas of visual processing, the receptive fields become larger and more specialized. For example, in area V2 (next to V1), many cells still respond best to lines, edges, and sine wave gratings, but some cells respond selectively to circles, lines that meet at a right angle, or other complex patterns (Hegdé & Van Essen, 2000). In area V4, many cells respond selectively to a particular slant of a line in three-dimensional space (Hinkle & Connor, 2002). Response patterns are even more complex in the inferior temporal cortex (see Figure 6.19). Because cells in this area have huge receptive fields, always including the foveal field of vision, their responses provide almost no information about stimulus location. However, many of them respond selectively to complex shapes and are insensitive to many distinctions that are critical for other cells. For example, some cells in the inferior temporal cortex respond about equally to a black square on a white background, a white square on a black background, and a square-shaped pattern of dots moving across a stationary pattern of dots (Sáry, Vogels, & Orban, 1993). On the other hand, a cell that responds about equally to two particular shapes may hardly respond at all to a third, similar shape (Vogels, Biederman, Bar, & Lorincz, 2001). Evidently, cortical neurons can respond specifically to certain details of shape. Most inferior temporal neurons that respond strongly to a particular shape respond almost equally to its mirror image (Rollenhagen & Olson, 2000). For example, a cell might respond equally to a given shape and to its left–right reversal. Cells also respond equally after a reversal of contrast, where white becomes black and black becomes white. However, they do not respond the same after a figure–ground reversal. Examine Figure 6.25. Researchers measured responses in monkeys' inferior temporal cortex to a number of stimuli and then to three kinds of transformations. The response of a neuron to each original stimulus correlated highly with its response to the contrast reversal and mirror image but correlated poorly with its response to the figure–ground reversal (Baylis & Driver, 2001). That is, cells in this area detect an object, no matter how it is displayed, and not the amount of light or darkness in any location on the retina. The ability of inferior temporal neurons to ignore changes in size and direction probably contributes to our capacity for shape constancy—the ability to recognize an object's shape even as it changes location or direction. However, although shape constancy helps us recognize an object from different angles, it interferes in reading, where we need to treat mirror-image letters as different. (Consider the difficulty some children have in learning the difference between b and d or p and q.)
Figure 6.25 Three transformations of an original drawing (original, contrast reversal, mirror image, and figure–ground reversal) In the inferior temporal cortex, cells that respond strongly to the original respond about the same to the contrast reversal and mirror image but not to the figure–ground reversal. Note that the figure–ground reversal resembles the original very strongly in terms of the pattern of light and darkness; however, it is not perceived as the same object. (Source: Based on Baylis & Driver, 2001)

Disorders of Object Recognition

Figure 6.26 Faces made from other objects One man, after a closed head injury, could recognize these as faces and could point out the eyes, nose, and so forth, but could not identify any of the component objects. He was not even aware that the faces were composed of objects.
Damage to the shape pathway of the cortex should lead to specialized deficits in the ability to recognize objects. Neurologists have reported such cases for decades, although they frequently met with skepticism. Now that we understand how such specialized defects might arise, we find them easier to accept. An inability to recognize objects despite otherwise satisfactory vision is called visual agnosia (meaning “visual lack of knowledge”). It usually results from damage somewhere in the temporal cortex. A person with brain damage might be able to point to visual objects and slowly describe them but fail to recognize what they are or mean. For example, one patient, when shown a key, said, “I don’t know what that is; perhaps a file or a tool of some sort.” When shown a stethoscope, he said that it was “a long cord with a round thing at the end.” When he could not identify a pipe, the examiner told him what it was. He then replied, “Yes, I can see it now,” and pointed out the stem and bowl of the pipe. Then the examiner asked, “Suppose I told you that the last object was not really a pipe?” The patient replied, “I would take your word for it. Perhaps it’s not really a pipe” (Rubens & Benson, 1971). Many other types of agnosia occur. One closed head injury patient could recognize faces of all kinds, including cartoons and face pictures made from objects (Figure 6.26). However, he could not recognize any of
the individual objects that composed the face (Moscovitch, Winocur, & Behrmann, 1997). The opposite disorder—inability to recognize faces—is known as prosopagnosia (PROSS-oh-pagNOH-see-ah). People with prosopagnosia can recognize other objects, including letters and words, and they can recognize familiar people from their voices or other cues, so their problem is neither vision in general nor memory in general but just face recognition (Farah, Wilson, Drain, & Tanaka, 1998). Furthermore, if they feel clay models of faces, they are worse than other people at determining whether two clay models are the same or different (Kilgour, de Gelder, & Lederman, 2004). Again, the conclusion is that the problem is not with vision but something special about faces. When people with prosopagnosia look at a face, they can describe whether the person is old or young, male or female, but they cannot identify the person. (You would perform about the same if you viewed faces quickly, upside-down.) One patient was shown 34 photographs of famous people and was offered a choice of two identifications for each. By chance alone he should have identified 17 correctly; in fact, he got 18. He remarked that he seldom enjoyed watching movies or television programs because he had trouble keeping track of the characters. Curiously, his favorite movie was Batman, in which the main characters wore masks much of the time (Laeng & Caviness, 2001). Prosopagnosia occurs after damage to the fusiform gyrus of the inferior temporal cortex, especially after such damage in the right hemisphere (Figure 6.27). According to fMRI scans, recognizing a face depends on increased activity in the fusiform gyrus and part of the prefrontal cortex (McCarthy, Puce, Gore, & Allison, 1997; Ó Scalaidhe, Wilson, & Goldman-Rakic, 1997). The fusiform gyrus also increases activity when people look at the faces of dogs (Blonder et al., 2004) or a blurry area on a picture at the top of a body where a
face should be (Cox, Meyers, & Sinha, 2004). That is, it responds to something about the idea of a face, not to a particular pattern of visual stimulation. A controversy has developed about how narrowly these areas are specialized for face recognition. Is the fusiform gyrus, or part of it, a built-in face perception “module”? Or is it something a bit broader, pertaining to expert visual recognition in whatever field we might achieve expertise? When people develop enough expertise to recognize brands of cars at a glance or species of birds or types of flowers, looking at those objects activates the fusiform gyrus, and the greater the expertise, the greater the level of activation (Tarr & Gauthier, 2000). People with damage to the fusiform gyrus have trouble recognizing cars, bird species, and so forth (Farah, 1990). Furthermore, when people with intact brains are shown “greebles,” the unfamiliar-looking objects shown in Figure 6.28, at first they have trouble recognizing them individually, and the fusiform gyrus responds to them only weakly. As people gain familiarity and learn to recognize them, the fusiform gyrus becomes more active and reacts to them more like faces (Gauthier, Tarr, Anderson, Skudlarski, & Gore, 1999; Rossion, Gauthier, Goffaux, Tarr, & Crommelinck, 2002). These results imply that the fusiform gyrus pertains to visual expertise of any kind. However, even in people with extreme levels of expertise, the fusiform gyrus cells that respond most vigorously to faces do not respond equally well to anything else (Grill-Spector,
Knouf, & Kanwisher, 2004; Kanwisher, 2000). Also, the fusiform gyrus continues responding strongly to faces, regardless of whether the viewer is paying attention to the face as a whole or just one part, such as the mouth (Yovel & Kanwisher, 2004). So face recognition may indeed be special, not quite like any other kind of expert pattern recognition.

Figure 6.27 The fusiform gyrus Many cells here are especially active during recognition of faces.
STOP & CHECK
10. How could a researcher determine whether a given neuron in the visual cortex was simple or complex?
11. What is prosopagnosia and what does its existence tell us about separate shape recognition systems in the visual cortex?
Check your answers on page 183.
The Color, Motion, and Depth Pathways
Color perception depends on both the parvocellular and koniocellular paths, whereas brightness depends mostly on the magnocellular path. Particular clusters of neurons in cortical areas V1 and V2 respond selectively to color (Xiao, Wang, & Felleman, 2003); they then send their output through particular parts of area V4 to the posterior inferior temporal cortex, as shown in Figure 6.19b. Several investigators have found that either area V4 or a nearby area is particularly important for color constancy (Hadjikhani, Liu, Dale, Cavanagh, & Tootell, 1998; Zeki, McKeefry, Bartels, & Frackowiak, 1998). Recall from the discussion of the retinex theory that color constancy is the ability to recognize the color of an object even if the lighting changes. Monkeys with damage to area V4 can learn to pick up a yellow object to get food but cannot find it if the overhead lighting is changed from white to blue (Wild, Butler, Carden, & Kulikowski, 1985). That is, they retain color vision but lose color constancy. In humans also, after damage to an area that straddles the temporal and parietal cortices, perhaps corresponding to monkey area V4, people recognize and remember colors but lose their color constancy (Rüttiger et al., 1999). In addition to a role in color vision, area V4 has cells that contribute to visual attention (Leopold & Logothetis, 1996). Animals with damage to V4 have trouble shifting their attention from the larger, brighter stimulus to any less prominent stimulus. Many of the cells of the magnocellular pathway are specialized for stereoscopic depth perception, the
ability to detect depth by differences in what the two eyes see. To illustrate, hold a finger in front of your eyes and look at it, first with just the left eye and then just the right eye. Try again, holding your finger at different distances. Note that the two eyes see your finger differently and that the closer your finger is to your face, the greater the difference between the two views. Certain cells in the magnocellular pathway detect the discrepancy between the two views, presumably mediating stereoscopic depth perception. When you look at something with just one eye, the same cells are almost unresponsive.
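The geometry behind the finger demonstration can be stated in one line: under a simple pinhole approximation of the eye, the difference between the two eyes' images (the disparity) is inversely proportional to the object's distance. The sketch below is not from the text; the focal length and interocular spacing are rough illustrative values, chosen only to show numerically that nearer objects produce larger disparities.

```python
# Binocular disparity under a simple pinhole approximation: d = f * b / Z,
# so disparity grows as the object gets closer (Z shrinks).

F = 0.017   # rough focal length of the eye in meters (about 17 mm); illustrative value
B = 0.063   # typical separation between the two pupils in meters (about 63 mm)

def disparity(distance_m):
    """Approximate difference (in meters on the retina) between the two eyes' images."""
    return F * B / distance_m

for z in (0.25, 0.50, 1.0, 2.0):
    print(f"object at {z:4.2f} m -> disparity {disparity(z) * 1000:.2f} mm")
```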
Structures Important for Motion Perception
Moving objects grab our attention, for good reasons. A moving object might be alive, and if so it might be dangerous, it might be good to eat, or it might be a potential mate. If nonliving things such as rocks are moving, they may be a sign of an earthquake or landslide. Almost any moving object calls for a decision of whether to chase it, ignore it, or run away from it. Several brain areas are specialized to detect motion. Imagine yourself sitting in a small boat on a river. The waves are all flowing one direction, and in the distance, you see rapids. Meanwhile, a duck is swimming slowly against the current, the clouds are moving yet another direction, and your perspective alters as the boat rocks back and forth. In short, you simultaneously see several kinds of motion. Viewing a moving pattern activates many brain areas spread among all four lobes of the cerebral cortex (Sunaert, Van Hecke, Marchal, & Orban, 1999; Vanduffel et al., 2001). Two temporal lobe areas that are consistently and strongly activated by any kind of visual motion are area MT (for middle temporal cortex), also known as area V5, and an adjacent region, area MST (medial superior temporal cortex) (see Figure 6.19). Areas MT and MST receive their direct input from a branch of the magnocellular path, although they also receive some parvocellular input (Yabuta, Sawatari, & Callaway, 2001). The magnocellular path detects overall patterns, including movement over large areas of the visual field. The parvocellular path includes cells that detect the disparity between the views of the left and right eyes, an important cue to distance (Kasai & Morotomi, 2001). Most cells in area MT respond selectively to a stimulus moving in a particular direction, almost independently of the size, shape, brightness, or color of the object (Perrone & Thiele, 2001). They also respond somewhat to a still photograph that implies movement, such as a photo of people running or cars racing (Kourtzi & Kanwisher, 2000). To other kinds of stationary stimuli they show little response. Many cells in area MT respond best to moving borders within their receptive fields.
Figure 6.29 Stimuli that excite the dorsal part of area MST Cells here respond if a whole scene expands, contracts, or rotates. That is, such cells respond if the observer moves forward or backward or tilts his or her head.
Cells in the dorsal part of area MST respond best to the expansion, contraction, or rotation of a large visual scene, as illustrated in Figure 6.29. That kind of experience occurs when you move forward or backward or tilt your head. These two kinds of cells—the ones that record movement of single objects and the ones that record movement of the entire background—converge their messages onto neurons in the ventral part of area MST, where cells respond whenever an object moves in a certain direction relative to its background (K. Tanaka, Sugita, Moriya, & Saito, 1993) (Figure 6.30). A cell with such properties is enormously useful in determining the motion of objects. When you move your head or eyes from left to right, all the objects in your visual field move across your retina as if the world itself had moved right to left. (Go ahead and try it.) Yet when you do so, the world looks stationary because the objects are stationary with respect to one another. Many neurons in area MST are silent during
eye movements (Thiele, Henning, Kubischik, & Hoffmann, 2002). However, MST neurons respond briskly if an object really is moving during the eye movement—that is, if it is moving relative to the background. In short, MST neurons enable you to distinguish between the result of eye movements and the result of object movements. Several other brain areas have specialized roles for particular types of motion perception. For example, the brain is particularly adept at detecting biological motion—the kinds of motion produced by people and animals. If you attach glow-in-the-dark dots to someone’s elbows, knees, hips, shoulders, and a few other points, then when that person moves in an otherwise dark room, you perceive a moving person, even though you are actually seeing only a few spots of light. Perceiving biological motion activates an area near, but not identical with, area MT (Grossman & Blake, 2001; Grossman et al., 2000). Other areas become active when people pay close attention to the speed or direction of movement (Cornette et al., 1998; Sunaert, Van Hecke, Marchal, & Orban, 2000).
EXTENSIONS AND APPLICATIONS
Suppressed Vision During Eye Movements
Figure 6.30 Stimuli that excite the ventral part of area MST Cells here respond when an object moves relative to its background. They therefore react either when the object moves or when the object is steady and the background moves.
The temporal cortex has cells that distinguish between moving objects and visual changes due to head movements. An additional mechanism prevents confusion or blurring during eye movements. Before the explanation, try this demonstration: Look at yourself in a mirror and focus on your left eye. Then shift your focus
to your right eye. (Please do this now.) Did you see your eyes move? No, you did not. (I said to try this. I bet you didn't. None of this is going to make any sense unless you try the demonstration!) Why didn't you see your eyes move? Your first impulse is to say that the movement was too small or too fast. Wrong. Try looking at someone else's eyes while he or she focuses first on your left eye and then on your right. You do see the other person's eyes move. So an eye movement is neither too small nor too fast for you to see. One reason you do not see your own eyes move is that your brain decreases the activity in its visual cortex during quick eye movements, known as saccades. In effect, the brain areas that monitor saccades tell the visual cortex, "We're about to move the eye muscles, so take a rest for the next split second, or you will see nothing but a blur anyway." Consequently, neural activity and blood flow in the visual cortex decrease shortly before and during eye movements (Burr, Morrone, & Ross, 1994; Paus, Marrett, Worsley, & Evans, 1995). Even direct electrical stimulation of the retina produces less effect than usual during a saccade (Thilo, Santoro, Walsh, & Blakemore, 2004). However, neural activity does not cease altogether, and so, for example, you would detect a sudden flash of light during your saccade (García-Pérez & Peli, 2001). Nevertheless, processing by the visual cortex is slowed during a saccade (Irwin & Brockmole, 2004). For example, if two stimuli flash on the screen during a saccade, 100 ms apart, the delay seems shorter than if the same stimuli flashed while no saccade was occurring (Morrone, Ross, & Burr, 2005). This finding fits with many other reports that people underestimate the duration of less-attended stimuli.
Motion Blindness
Some people with brain damage become motion blind, able to see objects but unable to determine whether they are moving or, if so, in which direction or how fast. People with motion blindness have trouble with the same tasks as monkeys with damage in area MT (Marcar, Zihl, & Cowey, 1997) and probably have damage in that same area (Greenlee, Lang, Mergner, & Seeger, 1995). One patient with motion blindness reported that she felt uncomfortable with people walking around because "people were suddenly here or there but I have not seen them moving." She could not cross a street without help: "When I'm looking at the car first, it seems far away. But then, when I want to cross the road, suddenly the car is very near." Even such a routine task as pouring coffee became difficult; the flowing liquid appeared to be frozen and unmoving, so she did not
stop pouring until the cup overfilled (Zihl, von Cramon, & Mai, 1983). The opposite of motion blindness also occurs: Some people are blind except for the ability to detect which direction something is moving. You might wonder how someone could see something moving without seeing the object itself. A possible explanation is that area MT gets some visual input directly from the lateral geniculate nucleus of the thalamus. Therefore, even after extensive damage to area V1 (enough to produce blindness), area MT still has enough input to permit motion detection (Sincich, Park, Wohlgemuth, & Horton, 2004).
STOP & CHECK
12. Why is it useful to suppress activity in the visual cortex during saccadic eye movements?
13. What symptoms occur after damage limited to area MT? What may occur if MT is intact but area V1 is damaged?
Check your answers on page 183.
Visual Attention
Of all the stimuli striking your retina at any moment, you attend to only a few. A stimulus can grab your attention by its size, brightness, or movement, but you can also voluntarily direct your attention to one stimulus or another in what is called a "top-down" process—that is, one governed by other cortical areas, principally the frontal and parietal cortex. To illustrate, keep your eyes fixated on the central x in the following display. Then attend to the G at the right, and step by step shift your attention clockwise around the circle. Notice how you can indeed see different parts of the circle without moving your eyes.
(Display: the letters A, Z, V, R, W, B, G, N, K, F, J, and P arranged in a circle around a central x, with G at the right.)
The difference between attended and unattended stimuli pertains to the amount and duration of activity in a cortical area. For example, suppose the letter Q flashes for a split second on a screen you are watching. That stimulus automatically produces a response in your area V1, but only a brief one if you are attending to something else on the screen. However, if you pay attention to the Q (perhaps because someone told you to count the times a letter flashes on the screen), then the brief response in V1 excites V2 and other areas, which feed back onto the V1 cells to enhance and prolong their responses (Kanwisher & Wojciulik, 2000; Supér, Spekreijse, & Lamme, 2001). Activity in V1 then feeds back to enhance activity in the corresponding portions of the lateral geniculate (O’Connor, Fukui, Pinsk, & Kastner, 2002). While you are increasing your brain’s response to the attended stimulus, the responses to other stimuli decrease (Wegener, Freiwald, & Kreiter, 2004). Similarly, suppose you are looking at a screen that shows a plaid grating, a pattern that combines lines of two different orientations. When you are instructed to attend to the lines in one direction or the other, activity increases in the V1 neurons that respond to lines in that orientation. As your attention shifts from one orientation to the other, so does activity in the two sets of neurons. In fact, someone who is monitoring your visual cortex with fMRI could tell you which set of lines you were attending to (Kamitani & Tong, 2005). Also, if you are told to pay attention to color or motion, activity increases in the areas of your visual cortex responsible for color or motion perception (Chawla, Rees, & Friston, 1999). In fact, activity increases in those areas even before the stimulus (Driver & Frith, 2000). Somehow the instructions prime those areas, so that they can magnify their responses to any appropriate stimulus. They in turn feed back to area V1, enhancing that area’s response to the stimulus. Again, it appears that the feedback increase in V1 responses is necessary for attention or conscious awareness of a stimulus (Pascual-Leone & Walsh, 2001).

Module 6.2 In Closing: From Single Cells to Vision
In this module, you have read about single cells that respond to shape, movement, and other aspects of vision. Does any single cell identify what you see? Several decades ago, the early computers used to crash frequently. Some of the pioneers of computer science were puzzled. A single neuron in the brain, they realized, was surely no more reliable than a single computer chip. Individual neurons must make mistakes all the time, but your brain as a whole continues functioning well. It might make some stupid decisions, but it doesn’t “crash.” Why not? The computer scientists surmised, correctly, that your brain has a great deal of redundancy so that the system as a whole works well even when individual units fail. The visual system offers many examples of this point. In the retina, each ganglion cell shares information with its neighbors, such that adjoining ganglion cells respond similarly (Puchalla, Schneidman, Harris, & Berry, 2005). In area MT (or V5), no one neuron consistently detects a moving dot within its receptive field, but a population of cells almost always detects the movement within a tenth of a second (Osborne, Bialek, & Lisberger, 2004). In short, each individual neuron contributes to vision, but no neuron is ever indispensable for any perception. Vision arises from the simultaneous activity of many cells in many brain areas.
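The arithmetic behind that redundancy is easy to check with a toy calculation. In the sketch below (illustrative only; the per-neuron detection probability is an assumed number, not a value reported in the studies cited above), each neuron independently signals a brief movement with probability p, so the chance that at least one neuron in a population of N signals it is 1 - (1 - p)^N:

# Illustrative sketch: a population of unreliable detectors is itself reliable.
# The per-neuron probability is an assumption for illustration, not a measurement.

def population_detects(p_single, n_neurons):
    """Probability that at least one of n independent neurons signals the event."""
    return 1 - (1 - p_single) ** n_neurons

p = 0.05  # assumed chance that any single neuron signals the movement in a brief window
for n in (1, 10, 100):
    print(n, "neurons:", round(population_detects(p, n), 3))

With p = 0.05, a single neuron usually misses the movement, but 100 such neurons detect it with a probability of about 0.99, which is the sense in which no individual cell is indispensable.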
Summary
1. The optic nerves of the two eyes join at the optic chiasm, where half of the axons from each eye cross to the opposite side of the brain. Most of the axons then travel to the lateral geniculate nucleus of the thalamus, which communicates with the visual cortex. (p. 166) 2. Lateral inhibition is a mechanism by which stimulation in any area of the retina suppresses the responses in neighboring areas, thereby enhancing the contrast at light–dark borders. (p. 167) 3. Lateral inhibition in the vertebrate retina occurs because receptors stimulate bipolar cells and also stimulate the much wider horizontal cells, which inhibit both the stimulated bipolar cells and those to the sides. (p. 168) 4. Each neuron in the visual system has a receptive field, an area of the visual field to which it is connected. Light in the receptive field excites or inhibits the neuron depending on the light’s location, wavelength, movement, and so forth. (p. 169) 5. The mammalian vertebrate visual system has a partial division of labor. In general, the parvocellular system is specialized for perception of color and fine details; the magnocellular system is specialized for perception of depth, movement, and overall patterns. (p. 170) 6. Area V1 is apparently essential to conscious visual perception. Without it, people report no vision, even in dreams. However, some kinds of response to light (blindsight) can occur after damage to V1, despite the lack of conscious perception. (p. 171)
7. The ventral stream in the cortex is important for shape perception (“what”), and the dorsal stream is specialized for localizing visual perceptions and integrating them with action (“where”). (p. 172) 8. Within the primary visual cortex, neuroscientists distinguish simple cells, which have fixed excitatory and inhibitory fields, and complex cells, which respond to a light pattern of a particular shape regardless of its exact location. (p. 173) 9. Neurons sensitive to shapes or other visual aspects may or may not act as feature detectors. In particular, cells of area V1 are highly responsive to spatial frequencies, even though we are not subjectively aware of spatial frequencies in our visual perception. (p. 175) 10. Specialized kinds of visual loss can follow brain damage. For example, after damage to the fusiform gyrus of the temporal cortex, people have trouble recognizing faces. (p. 176) 11. The visual cortex is specialized to detect visual motion and to distinguish it from apparent changes due to head movement. The visual cortex becomes less responsive during quick eye movements. (p. 179) 12. When two or more objects are present in the visual field, attention to one of them is related to increased cortical response to that object and decreased response to other objects. (p. 181)
Answers to STOP & CHECK
Questions 1. It starts with the ganglion cells in the eye. Most of its axons go to the lateral geniculate nucleus of the thalamus; some go to the hypothalamus, superior colliculus, and elsewhere. (p. 166) 2. The receptor excites both the bipolar cells and the horizontal cell. The horizontal cell inhibits the same bipolar cell that was excited plus additional bipolar cells in the surround. (p. 169) 3. It produces more excitation than inhibition for the nearest bipolar cell. For surrounding bipolar cells, it produces only inhibition. The reason is that the receptor excites a horizontal cell, which inhibits all bipolar cells in the area. (p. 169) 4. In the parts of your retina that look at the long white arms, each neuron is maximally inhibited by input on two of its sides (either above and below or left and right). In the crossroads, each neuron is maximally inhibited by input on all four sides.
Therefore, the response in the crossroads is decreased compared to that in the arms. (p. 169) 5. They become larger because each cell’s receptive field is made by inputs converging at an earlier level. (p. 172) 6. Neurons of the parvocellular system have small cell bodies with small receptive fields, are located mostly in and near the fovea, and are specialized for detailed and color vision. Neurons of the magnocellular system have large cell bodies with large receptive fields, are located in all parts of the retina, and are specialized for perception of large patterns and movement. (p. 172) 7. Researchers could use fMRI, EEG, or other recording methods to see whether activity was high in your primary visual cortex. (p. 172) 8. In blindsight, someone can point toward an object or move the eyes toward the object, despite insisting that he or she sees nothing. (p. 172) 9. The inability to guide movement based on vision implies damage to the dorsal path. (p. 172) 10. First identify a stimulus, such as a horizontal line, that stimulates the cell. Then shine the stimulus at several points in the cell’s receptive field. If the cell responds only in one location, it is a simple cell. If it responds in several locations, it is a complex cell. (p. 179) 11. Prosopagnosia is the inability to recognize faces. Its existence implies that the cortical mechanism for identifying faces is different from the mechanism for identifying other complex stimuli. (p. 179) 12. Vision during quick eye movements is sure to be blurry. Besides, suppressing vision during saccades may help us distinguish between changes due to eye movements and those due to movements of object. (p. 181) 13. Damage in area MT can produce motion blindness. If area MT is intact but area V1 is damaged, the person may be able to report motion direction, despite no conscious identification of the moving object. (p. 181)
Thought Question After a receptor cell is stimulated, the bipolar cell receiving input from it shows an immediate strong response. A fraction of a second later, the bipolar’s response decreases, even though the stimulation from the receptor cell remains constant. How can you account for that decrease? (Hint: What does the horizontal cell do?)
Module 6.3
Development of Vision
Suppose you had lived all your life in the dark. Then today, for the first time, you came out into the light and looked around. Would you understand anything? Unless you were born blind, you did have this experience—on the day you were born. At first, presumably you had no idea what you were looking at. Within months, however, you were beginning to recognize faces and crawl toward your favorite toys. How did you learn to make sense of what you saw?
Infant Vision When cartoonists show an infant character, they draw the eyes large in proportion to the head. Infant eyes approach full size sooner than the rest of the head does. Even a newborn has some functional vision, although much remains to develop.
Attention to Faces and Face Recognition Human newborns come into the world predisposed to pay more attention to some stimuli than others. Even in the first 2 days, they spend more time looking at faces than at other stationary displays (Figure 6.31). That tendency is interesting, as it supports the idea of a built-in “face recognition module,” presumably centered in the fusiform gyrus (see p. 178). However, the infant’s concept of “face” is not clearly developed. Experimenters recorded infants’ times of gazing at one face or the other, as shown in Figure 6.32. Newborns showed a strong preference for a right-side-up face over an upside-down face, regardless of whether the face was realistic (left pair) or badly distorted (central pair). When confronted with two right-side-up faces (right pair), they showed no significant preference between a realistic one and a distorted one (Cassia, Turati, & Simion, 2004). Evidently, a newborn’s concept of “face” is not well developed, except that it requires most of its content (e.g., eyes) to be on top. Of course, a person gradually becomes more and more adept at recognizing faces. Some day you may go to a high school reunion and quickly recognize people
you haven’t seen in decades, in spite of the fact that they have grown older, gained or lost weight (mostly gained), changed their hair style or gone bald, and so forth. Artificial intelligence specialists have been trying to build machines that would be equally good at recognizing faces but have found the task to be stunningly difficult. Face recognition improves with practice early in life. For example, most adults are poor at recognizing monkey faces, but infants who get frequent practice at it between ages 6 and 9 months develop much better ability to recognize monkey faces (Pascalis et al., 2005).
Visual Attention and Motor Control
The ability to control visual attention develops gradually. A highly attractive display, such as twirling dots on a computer screen, can capture an infant’s gaze and hold it, sometimes until the infant starts to cry (M. H. Johnson, Posner, & Rothbart, 1991). From about 4 to 6 months, infants can look away from the display briefly but then they shift back to it (Clohessy, Posner, Rothbart, & Vecera, 1991). Not until about age 6 months can an infant shift visual attention from one object to another.
(Graph: percent of fixation time for six stimuli: a face, concentric circles, newsprint, and plain white, yellow, and red fields.)
Figure 6.31 Amount of time infants spend looking at various patterns Even in the first 2 days after birth, infants look more at faces than at most other stimuli. (Source: Based on Fantz, 1963)
Image not available due to copyright restrictions
Here is a fascinating demonstration that combines visual attention with motor control: Hold one hand to the left of a child’s head and the other hand to the right. Tell the child that when you wiggle a finger of one hand, he or she should look at the other hand. Before about age 5 or 6 years, most children find it almost impossible to ignore the wiggling finger and to look at the other one. Ability to perform this task smoothly develops gradually all the way to age 18, requiring areas of the prefrontal cortex that mature slowly. Even some adults—especially those with neurological or psychiatric disorders—have trouble on this task (Munoz & Everling, 2004). To examine visual development in more detail, investigators turn to studies of animals. The research in this area has greatly expanded our understanding of brain development and has helped alleviate certain human abnormalities.
Early Experience and Visual Development Developing axons of the visual system approach their targets by following chemical gradients, as discussed in Chapter 5. In a newborn mammal, the lateral geniculate and visual cortex already resemble an adult’s
(Gödecke & Bonhoeffer, 1996; Horton & Hocking, 1996), and many of the normal properties develop even for animals with retinal damage (Rakic & Lidow, 1995; Shatz, 1996) or those reared in complete darkness (Lein & Shatz, 2001; White, Coppola, & Fitzpatrick, 2001). However, without normal visual experience, those properties deteriorate. The visual system can mature to a certain point without experience, but it needs visual experience to maintain and fine-tune its connections.
Early Lack of Stimulation of One Eye
What would happen if a young animal could see with one eye but not the other? For cats and primates—which have both eyes pointed in the same direction—most neurons in the visual cortex receive binocular input (stimulation from both eyes). As soon as a kitten opens its eyes at about age 9 days, each neuron responds to approximately corresponding areas in the two retinas—that is, areas that focus on about the same point in space (Figure 6.33). If an experimenter sutures one eyelid shut for a kitten’s first 4 to 6 weeks of life, synapses in the visual cortex gradually become unresponsive to input from the deprived eye (Rittenhouse, Shouval, Paradiso, & Bear, 1999). Consequently, even after the deprived eye is opened, the kitten does not respond to it (Wiesel, 1982; Wiesel & Hubel, 1963).
Figure 6.33 The anatomical basis for binocular vision in cats and primates
Light from a point in the visual field strikes corresponding points in the two retinas. Those two retinal areas send their axons to separate layers of the lateral geniculate (some layers are for the left eye and some for the right), which in turn send axons to the visual cortex, where input from both eyes converges onto a single neuron. That cell is thus connected (via the lateral geniculate) to corresponding areas of the two retinas.

Early Lack of Stimulation of Both Eyes
If both eyes are kept shut for the first few weeks, we might expect the kitten to become blind in both eyes, but it does not. Evidently, when one eye remains shut during early development, the active synapses from the open eye displace the inactive synapses from the closed eye. If neither eye is active, no axon displaces any other. For at least 3 weeks, the kitten’s cortex remains normally responsive to both eyes. If the eyes remain shut still longer, the cortical responses become sluggish and lose their crisp, sharp receptive fields (Crair, Gillespie, & Stryker, 1998). That is, they respond to visual stimuli, but not much more strongly to one orientation than to another.
Because the effects of abnormal experiences on cortical development depend on age, researchers identify a sensitive period or critical period, when experiences have a particularly strong and long-lasting influence. The length of the sensitive period varies from one species to another and from one aspect of vision to another (Crair & Malenka, 1995; T. L. Lewis & Maurer, 2005). It lasts longer during complete visual deprivation—for example, if a kitten is kept in total darkness—than in the presence of limited experience (Kirkwood, Lee, & Bear, 1995). The critical period begins when GABA, the brain’s main inhibitory transmitter, becomes widely available in the cortex (Fagiolini & Hensch, 2000). Evidently, the changes that occur during the sensitive period require excitation of some synapses and inhibition of others. The critical period ends with the onset of some chemicals that inhibit axonal sprouting. Rearing an animal in the dark delays the spread of these chemicals and therefore prolongs the sensitive period (Pizzorusso et al., 2002). One reason the sensitive period is longer for some visual functions and shorter for others is that some changes require only local rearrangements of axons instead of axon growth over greater distances (Tagawa, Kanold, Majdan, & Shatz, 2005).

Uncorrelated Stimulation in the Two Eyes
Almost every neuron in the human visual cortex responds to approximately corresponding areas of both eyes. (The exception: A few cortical neurons respond to only what the left eye sees at the extreme left or what the right eye sees at the extreme right.) By comparing the slightly different inputs from the two eyes, you achieve stereoscopic depth perception, a powerful method of perceiving distance. Stereoscopic depth perception requires the brain to detect retinal disparity, the discrepancy between what the left eye sees and what the right eye sees. But how do cortical neurons adjust their connections to detect retinal disparity? Genetic instructions could not provide enough information; different individuals have slightly different head sizes, and the genes cannot predict how far apart the two eyes will be. The fine-tuning of binocular vision must depend on experience.
And indeed, it does. Suppose an experimenter sets up a procedure in which a kitten can see with the left eye one day, the right eye the next day, and so forth. The kitten therefore receives the same amount of stimulation in both eyes, but it never sees with both eyes at the same time. After several weeks, almost every neuron in the visual cortex responds to one eye or the other but not both. The kitten therefore cannot detect retinal disparities and has no stereoscopic depth perception. Similarly, suppose a kitten has weak or damaged eye muscles, and its eyes cannot focus in the same direction at the same time. Both eyes are active simultaneously, but no cortical neuron consistently gets a message from both eyes at the same time. Again, the result is that each neuron in the visual cortex becomes fully responsive to one eye, ignoring the other (Blake & Hirsch, 1975; Hubel & Wiesel, 1965).
A similar phenomenon occurs in humans. Certain children are born with strabismus (or strabismic amblyopia), a condition in which the eyes do not point
in the same direction. Such children do not develop stereoscopic depth perception; they perceive depth no better with two eyes than they do with one. The apparent mechanism is that each cortical cell increases its responsiveness to groups of axons with synchronized activity (Singer, 1986). For example, if a portion of the left retina frequently focuses on the same object as some portion of the right eye, then axons from those two retinal areas frequently carry synchronous messages, and a cortical cell strengthens its synapses with both axons. However, if the two eyes carry unrelated inputs, the cortical cell strengthens its synapses with axons from only one eye (usually the contralateral one).
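A toy simulation can make this synchrony principle concrete. The sketch below is a simplified illustration, not the model from the studies cited above; the learning rate, trial count, and input statistics are all assumed values. Two inputs, one from each eye, drive a single cortical cell. Each synapse grows in proportion to the product of its own activity and the cell's response, and the changes are normalized so that the two synapses compete for a fixed total amount of weight. With synchronized input both synapses are retained; with unrelated input one eye's synapse takes over.

import random

# Toy model of Hebbian competition between inputs from the two eyes onto one
# cortical cell (a simplified sketch; all parameter values are illustrative
# assumptions, not measurements).

def run(correlated, trials=5000, rate=0.01, seed=1):
    random.seed(seed)
    w = [0.5, 0.5]                                   # synaptic weights: [left eye, right eye]
    for _ in range(trials):
        if correlated:                               # both eyes see the same scene at the same time
            x = [1.0, 1.0] if random.random() < 0.5 else [0.0, 0.0]
        else:                                        # only one eye is active on any trial
            x = [0.0, 0.0]
            x[random.randrange(2)] = 1.0
        y = w[0] * x[0] + w[1] * x[1]                # postsynaptic response
        raw = [rate * xi * y for xi in x]            # Hebbian growth term for each synapse
        mean = sum(raw) / 2.0
        w = [min(1.0, max(0.0, wi + ri - mean))      # competitive (normalized) update
             for wi, ri in zip(w, raw)]
    return [round(wi, 2) for wi in w]

print("correlated eyes:  ", run(correlated=True))    # both synapses retained (about 0.5 each)
print("uncorrelated eyes:", run(correlated=False))   # one synapse approaches 1, the other 0

In the uncorrelated condition, which eye wins depends on chance fluctuations early in the run, much as the text describes the cortical cell ending up responsive to one eye or the other.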
Restoration of Response After Early Deprivation of Vision
After the cortical neurons have become insensitive to the inactive eye, can experience restore their sensitivity? Yes, if the restorative experience comes soon enough. If a kitten is deprived of vision in one eye for a few days during the sensitive period (losing sensitivity to light in that eye) and then receives normal experience with both eyes, it rapidly regains sensitivity to the deprived eye. However, in the long run, it recovers better if it goes through a few days with the opposite eye deprived of vision (Mitchell, Gingras, & Kind, 2001). Evidently, the deprived eye regains more cortical functioning if it doesn’t have to overcome a competitor.
This animal research relates to the human condition called lazy eye, a result of strabismus or amblyopia, in which a child fails to attend to the vision in one eye. The animal results imply that the best way to facilitate normal vision in the ignored eye is to prevent the child from using the active eye. A physician puts a patch over the active eye, and the child gradually increases his or her attention to vision in the previously ignored eye. Eventually, the child is permitted to use both eyes together. The patch is most effective if it begins early, although humans do not appear to have a sharply defined critical period. That is, the division between “early enough” and “too late” is gradual, not sudden (T. L. Lewis & Maurer, 2005).
STOP & CHECK
1. How is vision affected by closing one eye early in life? What are the effects of closing both eyes?
Two examples of “lazy eye.” (Photos: © Sue Ford/Science Source/Photo Researchers; © Biophoto Associates/Science Source/Photo Researchers)
2. What is “lazy eye” and how can it be treated?
3. What early experience is necessary to maintain binocular input to the neurons of the visual cortex? Check your answers on page 191.
Early Exposure to a Limited Array of Patterns If a kitten spends its entire early sensitive period wearing goggles with horizontal lines painted on them (Figure 6.34), nearly all its visual cortex cells become responsive only to horizontal lines (Stryker & Sherk, 1975; Stryker, Sherk, Leventhal, & Hirsch, 1978). Even after months of later normal experience, the cat does not respond to vertical lines (D. E. Mitchell, 1980). What happens if human infants are exposed mainly to vertical or horizontal lines instead of both equally? You might wonder how such a bizarre thing could happen. No parents would let an experimenter subject their child to such a procedure, and it never happens accidentally in nature. Right? Wrong. In fact, it probably happened to you! About 70% of all infants have astigmatism, a blurring of vision for lines in one direction (e.g., horizontal, vertical, or one of the diagonals). Astigmatism is caused by an asymmetric curvature of the eyes (Howland & Sayles, 1984). The prevalence of astigmatism declines to about 10% in 4-year-old children as a result of normal growth.
Image not available due to copyright restrictions
You can informally test yourself for astigmatism with Figure 6.35. Do the lines in some directions look faint or fuzzy? If so, rotate the page. You will notice that the appearance of the lines depends on their position. If you wear corrective lenses, try this demonstration with and without them. If you see a difference in the lines only without your lenses, then the lenses have corrected your astigmatism.
If your eyes had strong astigmatism during the sensitive period for the development of your visual cortex, you saw lines more clearly in one direction than in another direction. If your astigmatism was not corrected early, then the cells of your visual cortex probably became more responsive to the kind of lines you saw more clearly, and you will continue throughout life to see lines in other directions as slightly faint or blurry, even if your eyes are now perfectly spherical (Freedman & Thibos, 1975). However, if you began wearing corrective lenses before age 3 to 4 years, you thereby improved your adult vision (Friedburg & Klöppel, 1996). The moral of the story: Children should be tested for astigmatism early and given corrective lenses as soon as possible. Adults’ cortical neurons can also change in response to altered visual experience, but the effects are smaller. If an adult mammal is trained to respond to a particular stimulus, some of its visual neurons become more responsive to that stimulus and less responsive to other stimuli (Dragoi, Rivadulla, & Sur, 2001; Schoups, Vogels, Qian, & Orban, 2001). In short, the cortex develops specializations for dealing with the patterns it encounters most often. What happens if kittens grow up without seeing anything move? You can imagine the difficulty of arranging such a world; the kitten’s head would move, even if nothing else did. Max Cynader and Garry Chernenko (1976) used an ingenious procedure: They raised kittens in an environment illuminated only by a strobe light that flashed eight times a second for 10 microseconds each. In effect, the kittens saw a series of still photographs. After 4 to 6 months, each neuron in the visual cortex responded normally to shapes but not to moving stimuli. The kittens had become motion blind.
People with Vision Restored After Early Deprivation
Figure 6.35 An informal test for astigmatism Do the lines in one direction look darker or sharper than the other lines do? If so, notice what happens when you rotate either the page or your head. The lines really are identical; certain lines appear darker or sharper because of the shape of your eye. If you wear corrective lenses, try this demonstration both with and without your lenses.
What about humans? If someone were born completely blind and got to see years later, would the new vision make any sense? No cases quite like that have occurred, but several cases approximate it. Some humans become visually impaired because of cataracts—cloudy areas on the lenses. Although cataracts can be removed surgically, the delay before removal varies, and therefore, some people have a longer or shorter history of limited vision. In one study, investigators examined 14 people who had been born with cataracts in both eyes but had
them repaired at ages 2–6 months. Although they all eventually developed nearly normal vision, they had lingering problems in subtle regards. For example, for the faces shown in Figure 6.36, they had no trouble detecting the difference between the two lower faces, which have different eyes and mouth, but had trouble detecting any difference between the two upper faces, which differ in the spacing between the eyes or between the eyes and the mouth (Le Grand, Mondloch, Maurer, & Brent, 2001). We might imagine that an early cataract on just one eye would not pose a problem, but it does if it is on the left eye. Remember from p. 178 that prosopagnosia is linked most strongly to damage to the fusiform gyrus in the right hemisphere. Apparently, the right hemisphere needs early experience to develop its particular expertise at face recognition.
In an adult, a cataract on just one eye would not affect one hemisphere much more than the other because each hemisphere receives input from both eyes (diagram: the left and right visual fields project through each retina and the optic chiasm to both the left and the right hemispheres of the brain).
However, during early infancy, the crossed pathways from the two eyes develop faster than the uncrossed pathways (diagram not reproduced; other images not available due to copyright restrictions).
Consequently, each hemisphere gets its input almost entirely from the contralateral eye. Furthermore, the corpus callosum is immature in infancy, so information reaching one hemisphere does not cross to the other. In short, an infant with a left eye cataract has limited visual input to the right hemisphere. Researchers examined people who had left eye cataracts during the first few months of infancy. When they were tested at ages 8–29 years, they continued to show mild impairments in face recognition. For example, when confronted with a pair of photos like those in Figure 6.36a, they had difficulty determining whether two photos were “the same” or “different” (Le Grand, Mondloch, Maurer, & Brent, 2003). In short, good face recognition depends on practice and experience by the right hemisphere early in life. The impairment is more extreme if the cataracts remain until later in life. Patient PD developed cataracts at approximately age 11⁄2 years. His physician treated him with eye drops to dilate the pupils wide enough to “see around” the cataracts, with limited success. After removal of his cataracts at age 43, his ability to perceive detail improved quickly but never reached normal levels. Evidently, all those years without detailed pattern vision had made his cortical cells less able to respond sharply to patterns (Fine, Smallman, Doyle, & MacLeod, 2002). He remarked that the edges between one object and another were exaggerated. For example, where a white object met a dark one, the border of the white object looked extremely bright and the edge of the dark one looked extremely dark—suggesting lateral inhibition well beyond what most people experience. He was amazed by the strong emotional expressions on people’s faces. He had been able to see faces before but not in so much detail. He was also struck by the brightness of colors. “In fact, it made me kind of angry that people were walking around in this colorful world that I had never had access to” (Fine et al., 2002, p. 208). A more extreme case is patient MM. When he was 31⁄2, hot corrosive chemicals splashed on his face, destroying one eye and obliterating the cornea of the other. For the next 40 years, he could see only light and dark through the surviving eye, with no patterns. He had no visual memories or visual imagery. At age 43, he received a corneal transplant. Immediately, he could identify simple shapes such as a square, detect whether a bar was tilted or upright, state the direction of a moving object, and identify which of two objects is “in front.” These aspects of vision were evidently well established by age 31⁄2 and capable of emerging again without practice (Fine et al., 2003). However, his perception of detail was poor and did not improve. Because his retina was normal, the failure to develop detail perception implied a limitation in his visual cortex. Over the next 2 years, he improved in his ability to understand what he was seeing but only to a limited
extent. Prior to the operation, he had competed as a blind skier. (Blind contestants memorize the hills.) Immediately after the operation, he found all the sights frightening as he skied, so he deliberately closed his eyes while skiing! After 2 years, he found vision somewhat helpful on the easy slopes, but he still closed his eyes on the difficult slopes, where vision was still more frightening than helpful. He summarized his progress, “The difference between today and over two years ago is that I can guess better what I am seeing. What is the same is that I am still guessing” (Fine et al., 2003, p. 916). What can we conclude? In humans as in cats and other species, the visual cortex is more plastic in infancy than in adulthood. A few aspects of vision, such as motion perception, are well established early and continue to function after decades without further experience. However, detail perception suffers and cannot recover much. The kinds of visual expertise that most of us take for granted depend on years of practice.
STOP & CHECK
4. Why does a cataract on one eye produce greater visual impairments in infants than in adults? Check your answer on page 191.
Module 6.3 In Closing: The Nature and Nurture of Vision The nature–nurture issue arises in one disguise or another in almost every area of psychology. In vision, consider what happens when you look out your window. How do you know that what you see are trees, people, and buildings? In fact, how do you know they are objects? How do you know which objects are close and which are distant? Were you born knowing how to interpret what you see or did you have to learn to understand it? The main message of this module is that vision requires a complex mixture of nature and nurture. We are indeed born with a certain amount of understanding, but we need experience to maintain, improve, and refine it. As usual, the influences of heredity and environment are not fully separable.
Summary 1. Even newborn infants gaze longer at faces than other stationary objects. However, they are as responsive to distorted as to realistic faces, provided
the eyes are on top. Patterned visual experience early in life, especially by the right hemisphere, is necessary to develop good face recognition. (p. 184) 2. For the first few months, infants have trouble shifting their gaze away from a captivating stimulus. For years, children have trouble shifting their attention away from a moving object toward an unmoving one. (p. 184) 3. The cells in the visual cortex of infant kittens have nearly normal properties. However, experience is necessary to maintain and fine-tune vision. For example, if a kitten has sight in one eye and not in the other during the early sensitive period, its cortical neurons become responsive only to the open eye. (p. 185) 4. Cortical neurons become unresponsive to axons from the inactive eye mainly because of competition from the active eye. If both eyes are closed, cortical cells remain somewhat responsive to axons from both eyes, although that response becomes sluggish and unselective as the weeks of deprivation continue. (p. 186) 5. To develop good stereoscopic depth perception, kittens must have experience seeing the same object with corresponding portions of the two eyes early in life. Otherwise, each neuron in the visual cortex becomes responsive to input from just one eye. (p. 186) 6. Abnormal visual experience has a stronger effect during an early sensitive period than later in life. The sensitive period begins when sufficient GABA levels become available in the visual cortex. It ends with the onset of chemicals that slow axonal growth. (p. 186) 7. If cortical cells have become unresponsive to an eye because it was inactive during the early sensitive period, normal visual experience later does not restore normal responsiveness. However, prolonged closure of the previously active eye can increase the response to the previously inactive eye. (p. 187) 8. If a kitten sees only horizontal or vertical lines during its sensitive period, most of the neurons in its visual cortex become responsive to such lines only. For the same reason, a young child who has a strong astigmatism may have permanently decreased responsiveness to one kind of line or another. (p. 187) 9. Those who do not see motion early in life lose their ability to see it. (p. 188)
10. Some people have cataracts or other impediments to vision during infancy or childhood and then, after surgery, regain vision in adulthood. Visual impairment for the first few months leaves subtle visual deficits that evidently last throughout life. Someone who had vision, lost it in childhood, and then regained it decades later shows retention of some aspects of vision (e.g., motion perception) but loss of detail and many other aspects of vision. (p. 188)
Answers to STOP & CHECK
Questions 1. If one eye is closed during early development, the cortex becomes unresponsive to it. If both eyes are closed, cortical cells remain somewhat responsive to both eyes for several weeks and then gradually become sluggish and unselective in their responses. (p. 187) 2. “Lazy eye” is inattentiveness to one eye, in most cases an eye that does not move in conjunction with the active eye. It can be treated by closing or patching over the active eye, forcing the child to use the ignored eye. (p. 187) 3. To maintain binocular responsiveness, cortical cells must receive simultaneous activity from both eyes fixating on the same object at the same time. (p. 187) 4. First, infants’ brains are more plastic; an adult’s brain is already fairly set and resists change in the event of distorted or deficient input. Furthermore, in the infant brain, each hemisphere gets nearly all its visual input from its contralateral eye. The crossed paths from the eyes to the hemispheres are more mature than the uncrossed paths, and the corpus callosum is immature. (p. 190)
Thought Questions 1. A rabbit’s eyes are on the sides of its head instead of in front. Would you expect rabbits to have many cells with binocular receptive fields—that is, cells that respond to both eyes? Why or why not? 2. Would you expect the cortical cells of a rabbit to be just as sensitive to the effects of experience as are the cells of cats and primates? Why or why not?
Chapter Ending
Key Terms and Activities

Terms
astigmatism (p. 187)
binocular input (p. 185)
bipolar cell (p. 153)
blind spot (p. 154)
blindsight (p. 171)
color constancy (p. 161)
color vision deficiency (p. 163)
complex cell (p. 173)
cone (p. 156)
dorsal stream (p. 172)
end-stopped cell (p. 174)
feature detector (p. 175)
fovea (p. 155)
ganglion cell (p. 153)
horizontal cell (p. 166)
hypercomplex cell (p. 174)
inferior temporal cortex (p. 176)
koniocellular neuron (p. 170)
lateral geniculate nucleus (p. 166)
lateral inhibition (p. 169)
law of specific nerve energies (p. 152)
lazy eye (p. 187)
magnocellular neuron (p. 170)
midget ganglion cell (p. 155)
motion blindness (p. 181)
MST (p. 179)
MT (or area V5) (p. 179)
negative color afterimage (p. 160)
opponent-process theory (p. 160)
optic nerve (p. 154)
parvocellular neuron (p. 170)
photopigment (p. 156)
primary visual cortex (or area V1) (p. 171)
prosopagnosia (p. 178)
psychophysical observations (p. 158)
pupil (p. 153)
receptive field (p. 169)
receptor potential (p. 152)
retina (p. 153)
retinal disparity (p. 186)
retinex theory (p. 161)
rod (p. 156)
saccade (p. 181)
secondary visual cortex (or area V2) (p. 171)
sensitive period (or critical period) (p. 186)
shape constancy (p. 177)
simple cell (p. 173)
stereoscopic depth perception (p. 179)
strabismus (or strabismic amblyopia) (p. 186)
trichromatic theory (or Young-Helmholtz theory) (p. 158)
ventral stream (p. 172)
visual agnosia (p. 177)
visual field (p. 159)

Suggestions for Further Reading
Chalupa, L. M., & Werner, J. S. (Eds.). (2004). The visual neurosciences. Cambridge, MA: MIT Press. A collection of 114 chapters detailing research on nearly all aspects of vision.
Purves, D., & Lotto, R. B. (2003). Why we see what we do: An empirical theory of vision. Sunderland, MA: Sinauer Associates. Discussion of how our perception of color, size, and other visual qualities depends on our previous experience with objects and not just on the light striking the retina.
Websites to Explore You can go to the Biological Psychology Study Center and click these links. While there, you can also check for suggested articles available on InfoTrac College Edition. The Biological Psychology Internet address is: http://psychology.wadsworth.com/book/kalatbiopsych9e/
Colorblind Home Page http://colorvisiontesting.com/
The Primary Visual Cortex, by Matthew Schmolesky http://webvision.med.utah.edu/VisualCortex.html#introduction
Exploring Biological Psychology CD The Retina (animation) Virtual Reality Eye (virtual reality) Blind Spot (Try It Yourself) Color Blindness in Visual Periphery (Try It Yourself) Brightness Contrast (Try It Yourself) Motion Aftereffect (Try It Yourself) Visuo-Motor Control (Try It Yourself) Critical Thinking (essay questions) Chapter Quiz (multiple-choice questions)
An animation of the eye shows its parts and illustrates how light excites receptors.
http://www.thomsonedu.com Go to this site for the link to ThomsonNOW, your one-stop study shop. Take a Pre-Test for this chapter, and ThomsonNOW will generate a Personalized Study Plan based on your test results. The Study Plan will identify the topics you need to review and direct you to online resources to help you master these topics. You can then take a Post-Test to help you determine the concepts you have mastered and what you still need to work on.
Children have trouble directing a response to the direction opposite of a stimulus. Is the same true to any extent in adulthood?
Chapter 7: The Other Sensory Systems

Chapter Outline
Module 7.1 Audition: Sound and the Ear, Pitch Perception, The Auditory Cortex, Hearing Loss, Sound Localization, In Closing: Functions of Hearing, Summary, Answers to Stop & Check Questions, Thought Questions
Module 7.2 The Mechanical Senses: Vestibular Sensation, Somatosensation, Pain, Itch, In Closing: The Mechanical Senses, Summary, Answers to Stop & Check Questions, Thought Questions
Module 7.3 The Chemical Senses: General Issues About Chemical Coding, Taste, Olfaction, Vomeronasal Sensation and Pheromones, Synesthesia, In Closing: Different Senses as Different Ways of Knowing the World, Summary, Answers to Stop & Check Questions, Thought Questions
Terms, Suggestions for Further Reading, Websites to Explore, Exploring Biological Psychology CD, ThomsonNOW

Main Ideas
1. Our senses have evolved not to give us complete information about the world but to give us information we can use.
2. Different sensory systems code information in different ways. As a rule, the activity in a single sensory axon is ambiguous by itself; its meaning depends on its relationship to a pattern across a population of axons.

Opposite: The sensory world of bats—which find insects by echolocation—must be very different from that of humans. Source: Frans Lanting/Corbis
A
ccording to a Native American saying, “A pine needle fell. The eagle saw it. The deer heard it. The bear smelled it” (Herrero, 1985). Different species are sensitive to different kinds of information. Bats locate insect prey by echoes from sonar waves that they emit at 20000 to 100000 hertz (Hz, cycles per second), well above the range of adult human hearing (Griffin, Webster, & Michael, 1960). Ganglion cells in a frog’s eye detect small, dark, moving objects such as insects (Lettvin, Maturana, McCulloch, & Pitts, 1959). The ears of the green tree frog, Hyla cinerea, are highly sensitive to sounds at two frequencies—900 and 3000 Hz— prominent in the adult male’s mating call (Moss & Simmons, 1986). Mosquitoes have a specialized receptor that detects the odor of human sweat—and therefore enables them to find us and bite us (Hallem, Fox, Zwiebel, & Carlson, 2004). Many biting insects also smell carbon dioxide (which is odorless to us); carbon dioxide also helps them find mammals (Grant & Kline, 2003). Humans’ visual and auditory abilities respond to a wider range of stimuli, perhaps because so many stimuli are biologically relevant to us. However, humans too have important sensory specializations. For example, our sense of taste alerts us to the bitterness of poisons (Richter, 1950; Schiffman & Erickson, 1971) but does not respond to substances such as cellulose that are neither helpful nor harmful to us. Our olfactory systems are unresponsive to gases that it would be useless for us to detect (e.g., carbon dioxide) and are highly responsive to such biologically useful stimuli as the smell of rotting meat. Thus, this chapter concerns not how our sensory systems enable us to perceive reality but how they process biologically useful information.
Module 7.1
Audition
If a tree falls in a forest where no one can hear it, does it make a sound? The answer depends on what we mean by “sound.” If we define it simply as a vibration, then of course, a falling tree makes a sound. However, we usually define sound as a psychological phenomenon, a vibration that some organism hears. By that definition, a vibration is not a sound unless someone hears it. The human auditory system enables us to hear not only falling trees but also the birds singing in the branches and the wind blowing through the leaves. Some people who are blind learn to click their heels as they walk and use the echoes to locate obstructions. Our auditory systems are amazingly well adapted for detecting and interpreting useful information.
Sound and the Ear Sound waves are periodic compressions of air, water, or other media. When a tree falls, both the tree and the ground vibrate, setting up sound waves in the air that strike the ears. If something hit the ground on the moon, where there is no air, people would not hear it—unless they put an ear to the ground.
Physical and Psychological Dimensions of Sound
Sound waves vary in amplitude and frequency. The amplitude of a sound wave is its intensity. A very intense compression of air, such as that produced by a bolt of lightning, produces sound waves of great amplitude. Loudness, the perception of intensity, is related to amplitude but not the same as it. When a sound doubles its amplitude, its loudness increases but does not double. Many factors influence loudness; for example, a rapidly talking person sounds louder than slow music of the same physical amplitude. So if you complain that television advertisements are louder than the program, one reason is that the people in the advertisements talk faster. The frequency of a sound is the number of compressions per second, measured in hertz (Hz, cycles per second). Pitch is a perception closely related to frequency. As a rule, the higher the frequency of a sound, the higher its pitch. Figure 7.1 illustrates the amplitude and frequency of sounds. The height of each wave corresponds to amplitude, and the number of waves per second corresponds to frequency. Most adult humans can hear air vibrations ranging from about 15 Hz to somewhat less than 20,000 Hz. Children can hear higher frequencies than adults because the ability to perceive high frequencies decreases with age and with exposure to loud noises (B. A. Schneider, Trehub, Morrongiello, & Thorpe, 1986).
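For readers who like to experiment, the following sketch (a minimal illustration; the particular frequencies and amplitudes are arbitrary example values, not data from the text) generates samples of two pure tones. The amplitude parameter sets the height of each wave, which we hear as loudness, and the frequency parameter sets the number of waves per second, which we hear as pitch.

import math

# Minimal sketch: sample two pure tones that differ in amplitude and frequency.
# The chosen values are arbitrary examples.

def tone(frequency_hz, amplitude, duration_s=0.01, sample_rate=44100):
    """Return amplitude * sin(2*pi*frequency*t), sampled at sample_rate."""
    n_samples = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * frequency_hz * i / sample_rate)
            for i in range(n_samples)]

quiet_low = tone(frequency_hz=100, amplitude=0.2)    # low pitch, soft
loud_high = tone(frequency_hz=1000, amplitude=0.8)   # higher pitch, louder
print(len(quiet_low), round(max(quiet_low), 2), round(max(loud_high), 2))

Played through a sound device, the first list would correspond to a soft, low hum and the second to a louder, higher tone.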
Structures of the Ear Rube Goldberg (1883–1970) drew cartoons about complicated, far-fetched inventions. For example, a person’s tread on the front doorstep would pull a string that raised a cat’s tail, awakening the cat, which would then chase a bird that had been resting on a balance, which would swing up to strike a doorbell. The func-
Figure 7.1 Four sound waves The time between the peaks determines the frequency of the sound, which we experience as pitch. Here the top line represents five sound waves in 0.1 second, or 50 Hz—a very low-frequency sound that we experience as a very low pitch. The other three lines represent 100 Hz. The vertical extent of each line represents its amplitude or intensity, which we experience as loudness.
tioning of the ear may remind you of a Rube Goldberg device because sound waves are transduced into action potentials through a many-step, roundabout process. Unlike Goldberg’s inventions, however, the ear actually works. Anatomists distinguish among the outer ear, the middle ear, and the inner ear (Figure 7.2). The outer ear includes the pinna, the familiar structure of flesh and cartilage attached to each side of the head. By altering the reflections of sound waves, the pinna helps us locate the source of a sound. We have to learn to use that information because each person’s pinna is shaped differently from anyone else’s (Van Wanrooij & Van Opstal, 2005). Rabbits’ large movable pinnas enable them to localize sound sources even more precisely.
Figure 7.2 Structures of the ear
When sound waves strike the tympanic membrane in (a), they cause it to vibrate three tiny bones—the hammer, anvil, and stirrup—that convert the sound waves into stronger vibrations in the fluid-filled cochlea (b). Those vibrations displace the hair cells along the basilar membrane in the cochlea. (c) A cross-section through the cochlea. (d) A closeup of the hair cells.
After sound waves pass through the auditory canal (see Figure 7.2), they strike the tympanic membrane, or eardrum, in the middle ear. The tympanic membrane vibrates at the same frequency as the sound waves that strike it. The tympanic membrane is attached to three tiny bones that transmit the vibrations to the oval window, a membrane of the inner ear. These bones are sometimes known by their English names (hammer, anvil, and stirrup) and sometimes by their Latin names (malleus, incus, and stapes). The tympanic membrane is about 20 times larger than the footplate of the stirrup, which is connected to the oval window. As in a hydraulic pump, the vibrations of the tympanic membrane are transformed into more forceful vibrations of the smaller stirrup. The net effect of the system is to
convert the sound waves into waves of greater pressure on the small oval window. This transformation is important because more force is required to move the viscous fluid behind the oval window than to move the eardrum, which has air on both sides of it. In the inner ear is a snail-shaped structure called the cochlea (KOCK-lee-uh, Latin for “snail”). A crosssection through the cochlea, as in Figure 7.2c, shows three long fluid-filled tunnels: the scala vestibuli, scala media, and scala tympani. The stirrup makes the oval window vibrate at the entrance to the scala vestibuli, thereby setting in motion all the fluid in the cochlea. The auditory receptors, known as hair cells, lie between the basilar membrane of the cochlea on one side and the tectorial membrane on the other (Figure 7.2d). Vibrations in the fluid of the cochlea displace the hair cells. A hair cell responds within microseconds to displacements as small as 10–10 meter (0.1 nanometer, about the diameter of one atom), thereby opening ion channels in its membrane (Fettiplace, 1990; Hudspeth, 1985). Figure 7.3 shows electron micrographs of the hair cells of three species. The hair cells synaptically excite the cells of the auditory nerve, which is part of the eighth cranial nerve.
Pitch Perception
Our ability to understand speech or enjoy music depends on our ability to differentiate among sounds of different frequencies. How do we do it?
Frequency Theory and Place Theory
According to the frequency theory, the basilar membrane vibrates in synchrony with a sound, causing auditory nerve axons to produce action potentials at the same frequency. For example, a sound at 50 Hz would cause 50 action potentials per second in the auditory nerve. The downfall of this theory in its simplest form is that the refractory period of a neuron is about 1/1,000 second, so the maximum firing rate of a neuron is about 1,000 Hz, far short of the highest frequencies we hear. According to the place theory, the basilar membrane resembles the strings of a piano in that each area along the membrane is tuned to a specific frequency and vibrates in its presence. (If you loudly sound one note with a tuning fork near a piano, you vibrate the piano string tuned to that note.) According to this theory, each frequency activates the hair cells at only one place along the basilar membrane, and the nervous system distinguishes among frequencies based on which neurons are activated. The downfall of this theory is that the various parts of the basilar membrane are bound together too tightly for any part to resonate like a piano string.
Figure 7.3 Hair cells from the auditory systems of three species
(a, b) Hair cells from a frog sacculus, an organ that detects ground-borne vibrations. (c) Hair cells from the cochlea of a cat. (d) Hair cells from the cochlea of a fence lizard. Kc = kinocilium, one of the components of a hair bundle. (Photos: Hudspeth, 1985)
The current theory combines modified versions of both frequency and place theories. For low-frequency sounds (up to about 100 Hz—more than an octave below middle C in music, which is 264 Hz), the basilar membrane does vibrate in synchrony with the sound waves, in accordance with the frequency theory, and auditory nerve axons generate one action potential per wave. Weak sounds activate few neurons, whereas stronger sounds activate more. Thus, at low frequencies, the frequency of impulses identifies the pitch, and the number of firing cells identifies the loudness. Because of the refractory period of the axon, as sounds exceed 100 Hz, it is harder and harder for a neuron to continue firing in synchrony with the sound waves. At higher frequencies, it fires on every second, third, fourth, or later wave. Its action potentials are phase-locked to the peaks of the sound waves (i.e., they occur at the same phase in the sound wave), as illustrated here:
(Illustration: a sound wave of about 1000 Hz, with the action potentials from one auditory neuron phase-locked to some of its peaks.)
Other auditory neurons also produce action potentials that are phase-locked with peaks of the sound wave, but they can be out of phase with one another:
(Illustration: the same sound wave with phase-locked action potentials from Neuron 1, Neuron 2, and Neuron 3; the sum of the neurons marks nearly every peak of the wave.)
If we consider the auditory nerve as a whole, we find that with a tone of a few hundred Hz, each wave excites at least a few auditory neurons. According to the volley principle of pitch discrimination, the auditory nerve as a whole can have volleys of impulses up to about 4000 per second, even though no individual axon approaches that frequency by itself (Rose, Brugge, Anderson, & Hind, 1967). Beyond about 4000 Hz, even staggered volleys of impulses can’t keep pace with the sound waves. Neuroscientists assume that these volleys contribute to pitch perception, although no one understands how the brain uses the information. Most human hearing takes place below 4000 Hz, the approximate limit of the volley principle. For comparison, the highest key on a piano is 4224 Hz. Frequencies much above that level are not important in music or human speech, although they are important in the lives of rats, mice, bats, and other small animals. When we hear these very high frequencies, we use a mechanism similar to the place theory. The basilar membrane varies from stiff at its base, where the stirrup meets the cochlea, to floppy at the other end of the cochlea, the apex (von Békésy, 1956) (Figure 7.4). The hair cells along the basilar membrane have different properties based on their location, and they act as tuned resonators that vibrate only for sound waves of a particular frequency. The highest frequency sounds vibrate hair cells near the base, and lower frequency sounds vibrate hair cells farther along the membrane (Warren, 1999). Actually, the mechanisms of hearing at frequencies well over 4000 Hz are not entirely understood, as these ultrahigh frequencies alter several of the properties of neurons and their membranes (Fridberger et al., 2004).
Some people are “tone deaf.” The fancy term for this condition is amusia. Anyone who was completely insensitive to frequency differences could not understand speech because every sound we make depends on fast and slight changes in pitch. However, a substantial number of people—by one estimate, about 4%—are seriously impaired at detecting small changes in frequency, such as between C and C#. As you can imagine, they don’t enjoy music (Hyde & Peretz, 2004). The explanation for this phenomenon is not known.
Figure 7.4 The basilar membrane of the human cochlea The membrane ranges from stiff at the base of the cochlea (by the oval window) to floppy at the apex. High-frequency sounds excite hair cells near the base. Low-frequency sounds excite cells near the apex.
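To make the volley principle more concrete, here is a minimal numerical sketch in Python (not from the text; the figure of four axons and the 1000-per-second ceiling per axon are assumptions chosen only for illustration). Each model axon is phase-locked but, because of its refractory period, fires on only every second, third, or later wave once the tone is fast enough; pooled across axons, the volley can follow every wave up to roughly 4000 Hz, while no single axon does.

# Illustrative sketch of phase-locking and the volley principle.
# Assumed numbers: each axon can fire at most max_rate_hz times per second,
# and four axons share the work; neither value comes from the text.

def volley_rate(tone_hz, n_axons=4, max_rate_hz=1000):
    # Phase-locking: an axon fires on every k-th wave, where k is the smallest
    # whole number that keeps its own firing rate at or below max_rate_hz.
    k = -(-tone_hz // max_rate_hz)        # ceiling division
    per_axon = tone_hz / k                # impulses per second for one axon
    # If the axons are staggered across wave positions, their pooled volley
    # can follow every wave, but only up to their combined firing capacity.
    pooled = min(tone_hz, n_axons * per_axon)
    return per_axon, pooled

for tone in (100, 1000, 3000, 4000, 8000):
    single, pooled = volley_rate(tone)
    print(f"{tone} Hz tone: one axon ~{single:.0f}/s, volley ~{pooled:.0f}/s")

With these assumed values the pooled volley matches the tone up to 4000 Hz and falls behind above it, which is the pattern the text describes; the real auditory nerve has far more axons and much messier firing statistics.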
STOP & CHECK
1. Through what mechanism do we perceive low-frequency sounds (up to about 100 Hz)? 2. How do we perceive middle-frequency sounds (100 to 4000 Hz)? 3. How do we perceive high-frequency sounds (above 4000 Hz)? Check your answers on page 204.
The Auditory Cortex
Information from the auditory system passes through several subcortical structures, with a crossover in the midbrain that enables each hemisphere of the forebrain to get most of its input from the opposite ear (Glendenning, Baker, Hutson, & Masterton, 1992). The information ultimately reaches the primary auditory cortex (area A1) in the superior temporal cortex, as shown
in Figure 7.5. From there, the information branches out widely. The organization of the auditory cortex strongly parallels that of the visual cortex (Poremba et al., 2003). For example, part of the parietal cortex (the dorsal “where” stream) responds strongly to the location of both visual and auditory stimuli. The superior temporal cortex includes area MT, which is important for detecting visual motion, and areas that detect the motion of sounds. Just as patients with damage in area MT become motion blind, patients with damage in parts of the superior temporal cortex become motion deaf. They can hear sounds, but the source of a sound never seems to be moving (Ducommun et al., 2004). Just as area V1 is active during visual imagery, area A1 is important for auditory imagery. In one study, people listened to several familiar and unfamiliar songs. At various points, parts of each song were replaced by 3 to 5 second gaps. When people were listening to familiar songs, they reported that they heard “in their heads” the notes or words that belonged in the gaps. That experience was accompanied by activity in area A1. During similar gaps in the unfamiliar songs, they did not hear anything “in their heads,” and area A1 showed no activation (Kraemer, Macrae, Green, & Kelley, 2005).
Figure 7.5 Route of auditory impulses from the receptors in the ear to the auditory cortex Signals from each ear pass through the cochlear nucleus, superior olive, inferior colliculus, and medial geniculate on the way to the auditory cortex. The cochlear nucleus receives input from the ipsilateral ear only (the one on the same side of the head). All later stages have input originating from both ears.
Also like the visual system, the auditory system requires experience for full development. Just as rearing an animal in the dark impairs visual development, rearing one in constant noise impairs auditory development (Chang & Merzenich, 2003). In people who are deaf from birth, the axons leading from the auditory cortex develop less than in other people (Emmorey, Allen, Bruss, Schenker, & Damasio, 2003). However, the visual and auditory systems differ in this respect: Whereas damage to area V1 leaves someone blind, damage to area A1 does not produce deafness. People with damage to the primary auditory cortex can hear simple sounds reasonably well, unless the damage extends into subcortical brain areas (Tanaka, Kamo, Yoshida, & Yamadori, 1991). Their main deficit is in the ability to recognize combinations or sequences of sounds, like music or speech. Evidently, the cortex is not necessary for all hearing, only for advanced processing of it. When researchers record from cells in the primary auditory cortex while playing pure tones, they find that each cell has a preferred tone, as shown in Figure 7.6. Note the gradient from one area of the cortex responsive to lower tones up to areas responsive to higher and higher tones. The auditory cortex provides a kind of map of the sounds—researchers call it a tonotopic map. In alert, waking animals, each cell in area A1 gives a prolonged response to its preferred sound and little or no response to other sounds (X. Wang, Lu, Snider, & Liang, 2005). However, although each cell has a preferred frequency, many cells respond better to a complex sound than to a pure tone. That is, a cell might respond when its preferred tone is the dominant one but several other tones are present also (Barbour & Wang, 2003; Griffiths, Uppenkamp, Johnsrude, Josephs, & Patterson, 2001; Wessinger et al., 2001). Most cortical cells also respond best to the kinds of sounds that capture our attention. For example, other things being equal, they respond more to an unusual sound than to one that has been repeated many times (Ulanovsky, Las, & Nelken, 2003). They also respond more strongly to a tone with its harmonics than to a single, pure frequency (Penagos, Melcher, & Oxenham, 2004). For example, for a tone of 400 Hz, the harmonics are 800 Hz, 1200 Hz, and so forth. We experience a tone with harmonics as “richer” than one without them. Surrounding the primary auditory cortex are several additional auditory areas, in which most cells respond more to changes in sounds than to any single prolonged sound (Seifritz et al., 2002). Just as the visual system starts with cells that respond to simple lines and progresses to cells that detect faces and other complex stimuli, the same is true for the auditory system.
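As a rough way to visualize what a preferred frequency and a tonotopic map mean, the sketch below (Python) builds a toy map of cells with log-spaced preferred tones and a bell-shaped tuning curve in log frequency. The spacing, the bandwidth, and the tuning-curve shape are all invented for illustration, not taken from the recordings cited above; the point is only that the identity of the most active cell indicates which tone was played.

import math

# Toy tonotopic map: preferred frequencies spaced one octave apart (assumed values).
preferred_hz = [125 * 2 ** i for i in range(8)]     # 125, 250, ..., 16000 Hz

def response(cell_pref_hz, tone_hz, bandwidth_octaves=0.5):
    # Assumed tuning curve: strongest at the preferred frequency, falling off
    # with distance measured in octaves.
    octaves_away = abs(math.log2(tone_hz / cell_pref_hz))
    return math.exp(-(octaves_away / bandwidth_octaves) ** 2)

tone = 1000  # Hz
activity = [response(p, tone) for p in preferred_hz]
best_cell = preferred_hz[activity.index(max(activity))]
print(f"A {tone} Hz tone most strongly excites the cell tuned to {best_cell} Hz.")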
Figure 7.6 The human primary auditory cortex Cells in each area respond mainly to tones of a particular frequency. Note that the neurons are arranged in a gradient, with cells responding to low-frequency tones at one end (corresponding to the apex of the basilar membrane) and cells responding to high-frequency tones at the other end (corresponding to the base). The figure also indicates the primary and secondary auditory cortex.
Cells outside area A1 respond best to what we might call auditory “objects”—sounds such as animal cries, machinery noises, music, and so forth (Zatorre, Bouffard, & Belin, 2004). Many of these cells respond so slowly that they probably are not part of the initial perception of the sound itself; rather, they interpret what the sound means (Gutschalk, Patterson, Scherg, Uppenkamp, & Rupp, 2004).
STOP & CHECK
4. What is one way in which the auditory cortex is like the visual cortex? 5. What is one way in which the auditory and visual cortices differ? 6. What kinds of sounds most strongly activate the primary auditory cortex? Check your answers on page 204.
Hearing Loss
Complete deafness is rare. About 99% of hearing-impaired people have at least some response to loud noises. We distinguish two categories of hearing impairment: conductive deafness and nerve deafness. Conductive deafness, or middle-ear deafness, occurs if the bones of the middle ear fail to transmit sound waves properly to the cochlea. Such deafness
can be caused by diseases, infections, or tumorous bone growth near the middle ear. Conductive deafness is sometimes temporary. If it persists, it can be corrected either by surgery or by hearing aids that amplify the stimulus. Because people with conductive deafness have a normal cochlea and auditory nerve, they hear their own voices, which can be conducted through the bones of the skull directly to the cochlea, bypassing the middle ear. Because they hear themselves clearly, they may blame others for talking too softly. Nerve deafness, or inner-ear deafness, results from damage to the cochlea, the hair cells, or the auditory nerve. It can occur in any degree and may be confined to one part of the cochlea, in which case someone hears certain frequencies and not others. Hearing aids cannot compensate for extensive nerve damage, but they help people who have lost receptors in part of the cochlea. Nerve deafness can be inherited (A. Wang et al., 1998), or it can develop from a variety of prenatal problems or early childhood disorders (Cremers & van Rijn, 1991; Robillard & Gersdorff, 1986), including: • Exposure of the mother to rubella (German measles), syphilis, or other diseases or toxins during pregnancy • Inadequate oxygen to the brain during birth • Deficient activity of the thyroid gland • Certain diseases, including multiple sclerosis and meningitis
• Childhood reactions to certain drugs, including aspirin • Repeated exposure to loud noises Many people with nerve deafness experience tinnitus (tin-EYE-tus)—frequent or constant ringing in the ears. Tinnitus has also been demonstrated after nerve deafness in hamsters, and you might wonder how anyone could know that hamsters experience ringing in their ears. (They can’t tell us.) Experimenters trained water-deprived hamsters to turn toward a sound to obtain water. Then they produced partial nerve deafness in one ear by exposing that ear to a very loud sound. Afterward, the hamster turned toward that ear even when no external sound was present, indicating that they heard something in that ear (Heffner & Koay, 2005). In some cases, tinnitus is due to a phenomenon like phantom limb, discussed in Chapter 5. Recall the example in which someone has an arm amputated, and then the axons reporting facial sensations invade the brain areas previously sensitive to the arm so that stimulation of the face produces a sensation of a phantom arm. Similarly, damage to part of the cochlea is like an amputation: If the brain no longer gets its normal input, axons representing other parts of the body may invade a brain area previously responsive to sounds, especially the high-frequency sounds. Several patients have reported ringing in their ears whenever they move their jaws (Lockwood et al., 1998). Presumably, axons representing the lower face invaded their auditory cortex. Some people report a decrease in tinnitus after they start wearing hearing aids. For practical information about coping with hearing loss, visit this website: http://www.marky.com/hearing/
STOP & CHECK
7. Which type of hearing loss would be more common among members of rock bands and why? For which type of hearing loss is a hearing aid generally more successful? Check your answers on page 204.
Sound Localization
You are walking alone when suddenly you hear a loud noise. You want to know what produced it (friend or foe), but equally, you want to know where it came from (so you can approach or escape). Determining the direction and distance of a sound requires comparing the responses of the two ears—which are in effect just two points in space. And yet this system is accurate enough for you to turn almost immediately toward a sound, and for owls in the air to locate mice on the ground in the middle of the night (Konishi, 1995). You can identify a sound’s direction even if it occurs just briefly and while you are turning your head (Vliegen, Van Grootel, & Van Opstal, 2004).
One cue for sound location is the difference in intensity between the ears. For high-frequency sounds, with a wavelength shorter than the width of the head, the head creates a sound shadow (Figure 7.7), making the sound louder for the closer ear. In adult humans, this mechanism produces accurate sound localization for frequencies above 2000 to 3000 Hz and less accurate localizations for progressively lower frequencies. Another method is the difference in time of arrival at the two ears. A sound coming from directly in front of you reaches both ears at once. A sound coming directly from the side reaches the closer ear about 600 microseconds (µs) before the other. Sounds coming from intermediate locations reach the two ears at delays between 0 and 600 µs. Time of arrival is most useful for localizing sounds with a sudden onset. Most birds’ alarm calls increase gradually in loudness, making them difficult for a predator to localize. A third cue is the phase difference between the ears. Every sound wave has phases with two consecutive peaks 360 degrees apart. Figure 7.8 shows sound waves that are in phase and 45 degrees, 90 degrees, or 180 degrees out of phase. If a sound originates to the side of the head, the sound wave strikes the two ears out of phase, as shown in Figure 7.9. How much out of phase depends on the frequency of the sound, the size of the head, and the direction of the sound. Phase differences provide information that is useful for localizing sounds with frequencies up to about 1500 Hz in humans. In short, humans localize low frequencies by phase differences and high frequencies by loudness differences. We can localize a sound of any frequency by its time of onset if it occurs suddenly enough.
Figure 7.7 Differential loudness and arrival times as cues for sound localization Sounds reaching the closer ear arrive sooner as well as louder because the head produces a “sound shadow.” (Source: After Lindsay & Norman, 1972)
Figure 7.8 Sound waves can be in phase or out of phase Sound waves that reach the two ears in phase are localized as coming from directly in front of (or behind) the hearer. The more out of phase the waves, the farther the sound source is from the body’s midline.
Figure 7.9 Phase differences between the ears as a cue for sound localization A sound coming from anywhere other than straight ahead or straight behind reaches the two ears at different phases of the sound wave. The difference in phase is a signal to the sound’s direction. With high-frequency sounds, the phases can become ambiguous.
What would happen if someone became deaf in one ear? At first, as you would expect, all sounds seem to come from directly to the side of the intact ear. (Obviously, that ear hears a sound louder and sooner than the other ear because the other ear doesn’t hear it at all.) Eventually, however, people learn to interpret loudness cues, but only with familiar sounds in a familiar location. They infer that louder sounds come from the side of the intact ear and softer sounds come from the opposite side. Their accuracy does not match that of people with two ears, but it becomes accurate enough to be useful under some conditions (Van Wanrooij & Van Opstal, 2004).
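To attach rough numbers to the time-of-arrival and phase cues described above, here is a back-of-the-envelope sketch in Python. The speed of sound (about 343 m/s) is a standard physics value, and the extra 0.2 m path to the far ear is an assumed round figure for a human head; together they give a maximum interaural delay close to the 600 µs mentioned in the text and show why phase becomes ambiguous once a tone’s full cycle is shorter than that delay.

# Rough interaural timing estimate (assumed head geometry; standard speed of sound).
SPEED_OF_SOUND = 343.0   # m/s in air
EXTRA_PATH = 0.20        # m, assumed extra distance to the far ear for a sound from the side

max_delay = EXTRA_PATH / SPEED_OF_SOUND          # seconds
print(f"Maximum interaural delay: about {max_delay * 1e6:.0f} microseconds")

# Express that delay as a phase difference for tones of different frequencies.
for freq_hz in (200, 500, 1500, 3000):
    cycles = max_delay * freq_hz                 # fraction of a cycle separating the ears
    note = "  (more than a full cycle: ambiguous)" if cycles > 1 else ""
    print(f"{freq_hz} Hz: about {cycles * 360:.0f} degrees out of phase{note}")

With these assumptions the phase cue stays unambiguous for low frequencies but wraps past a full cycle somewhere above roughly 1700 Hz, consistent with the text's statement that phase differences are useful up to about 1500 Hz.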
STOP & CHECK
8. Which method of sound localization is more effective for an animal with a small head? Which is more effective for an animal with a large head? Why? Check your answers on page 204.
Module 7.1 In Closing: Functions of Hearing
We spend much of our day listening to language, and we sometimes forget that the original, primary function of hearing has to do with simpler but extremely important issues: What do I hear? Where is it? Is it coming closer? Is it a potential mate, a potential enemy, potential food, or something irrelevant? The organization of the auditory system is well suited to resolving these questions.
Summary
1. We detect the pitch of low-frequency sounds by the frequency of action potentials in the auditory system. At intermediate frequencies, we detect volleys of responses across many receptors. We detect the pitch of the highest frequency sounds by the area of greatest response along the basilar membrane. (p. 198) 2. The auditory cortex resembles the visual cortex in many ways. Both have a “where” system in the parietal cortex and a “what” system in the temporal cortex. Both have specialized areas for detecting motion, and therefore, it is possible for a person with brain damage to be motion blind or motion deaf. The primary visual cortex is essential for visual imagery, and the primary auditory cortex is essential for auditory imagery. (p. 200) 3. Each cell in the primary auditory cortex responds best to a particular frequency of tones, although many respond better to complex tones than to a single frequency. (p. 200) 4. Cells in the auditory cortex respond most strongly to the same kinds of sounds to which we pay the most attention, such as unusual sounds and those rich in harmonics. (p. 200) 5. Areas bordering the primary auditory cortex analyze the meaning of sounds. (p. 201) 6. Deafness may result from damage to the nerve cells or to the bones that conduct sounds to the nerve cells. (p. 201) 7. We localize high-frequency sounds according to differences in loudness between the ears. We localize low-frequency sounds on the basis of differences in phase. (p. 202)
Answers to STOP & CHECK Questions
1. At low frequencies, the basilar membrane vibrates in synchrony with the sound waves, and each responding axon in the auditory nerve sends one action potential per sound wave. (p. 199) 2. At intermediate frequencies, no single axon fires an action potential for each sound wave, but different axons fire for different waves, and so a volley (group) of axons fires for each wave. (p. 199)
3. At high frequencies, the sound causes maximum vibration for the hair cells at one location along the basilar membrane. (p. 199) 4. Any of the following: (a) The parietal cortex analyzes the “where” of both visual and auditory stimuli. (b) Areas in the superior temporal cortex analyze movement of both visual and auditory stimuli. Damage there can cause motion blindness or motion deafness. (c) The primary visual cortex is essential for visual imagery, and the primary auditory cortex is essential for auditory imagery. (d) Both the visual and auditory cortex need normal experience early in life to develop normal sensitivities. (p. 201) 5. Damage to the primary visual cortex leaves someone blind, but damage to the primary auditory cortex merely impairs perception of complex sounds without making the person deaf. (p. 201) 6. Each cell in the primary auditory cortex has a preferred frequency. In addition, cells of the auditory cortex respond most strongly to the kinds of sounds most likely to capture our attention, such as unusual sounds and sounds rich in harmonics. (p. 201) 7. Nerve deafness is common among rock band members because their frequent exposure to loud noises causes damage to the cells of the ear. Hearing aids are generally successful for conductive deafness; they are not always helpful in cases of nerve deafness. (p. 202) 8. An animal with a small head localizes sounds mainly by differences in loudness because the ears are not far enough apart for differences in onset time to be very large. An animal with a large head localizes sounds mainly by differences in onset time because its ears are far apart and well suited to noting differences in phase or onset time. (p. 203)
Thought Questions 1. Why do you suppose that the human auditory system evolved sensitivity to sounds in the range of 20 to 20000 Hz instead of some other range of frequencies? 2. The text explains how we might distinguish loudness for low-frequency sounds. How might we distinguish loudness for a high-frequency tone?
Module 7.2
The Mechanical Senses
The next time you turn on your radio or stereo set, place your hand on its surface. You can feel the same vibrations that you hear. If you practiced enough, could you learn to “hear” the vibrations with your fingers? No, they would remain just vibrations. If an earless species had enough time, might its vibration detectors evolve into sound detectors? Yes! In fact, our ears evolved in just that way. The mechanical senses respond to pressure, bending, or other distortions of a receptor. They include touch, pain, and other body sensations, as well as vestibular sensation, a system that detects the position and movement of the head. Audition is a mechanical sense also because the hair cells are modified touch receptors; we considered it separately because of its complexity and great importance to humans.
Vestibular Sensation
Try to read this page while you jiggle your head up and down, back and forth. You will find that you can read it fairly easily. Now hold your head steady and jiggle the book up and down, back and forth. Suddenly, you can hardly read it at all. Why? When you move your head, the vestibular organ adjacent to the cochlea monitors each movement and directs compensatory movements of your eyes. When your head moves left, your eyes move right; when your head moves right, your eyes move left. Effortlessly, you keep your eyes focused on what you want to see (Brandt, 1991). When you move the page, however, the vestibular organ cannot keep your eyes on target. Sensations from the vestibular organ detect the direction of tilt and the amount of acceleration of the head. We are seldom aware of our vestibular sensations except under unusual conditions such as riding a roller coaster; they are nevertheless critical for guiding eye movements and maintaining balance. Astronauts, of course, become acutely aware of the lack of vestibular sensation while they are in orbit. The vestibular organ, shown in Figure 7.10, consists of the saccule and utricle and three semicircular canals. Like the hearing receptors, the vestibular recep-
tors are modified touch receptors. Calcium carbonate particles called otoliths lie next to the hair cells. When the head tilts in different directions, the otoliths push against different sets of hair cells and excite them (Hess, 2001). The three semicircular canals, oriented in three different planes, are filled with a jellylike substance and lined with hair cells. An acceleration of the head at any angle causes the jellylike substance in one of these canals to push against the hair cells. Action potentials initiated by cells of the vestibular system travel through part of the eighth cranial nerve to the brainstem and cerebellum. (The eighth cranial nerve contains both an auditory component and a vestibular component.) For the vestibular organ, so far as we can tell, the ideal size is nearly constant, regardless of the size of the animal. Whales are 10 million times as massive as mice, but their vestibular organ is only 5 times as large (Squires, 2004). To be useful, of course, vestibular sensation has to be integrated with other kinds of sensation. People with damage to the angular gyrus—an area at the border between the parietal cortex and the temporal cortex—have trouble integrating or binding one kind of sensation with another, including vestibular sensations. One patient was given several bouts of electrical stimulation to the angular gyrus as part of an exploratory procedure prior to brain surgery. The electrical stimulation presumably disrupted the normal activity. The patient reported an “out of the body experience” each time, as if viewing the body from somewhere else (Blanke, Ortigue, Landis, & Seeck, 2002). The researchers suggested that out of the body experiences represent a failure to bind vestibular sensation with vision and touch.
STOP & CHECK
1. People with damage to the vestibular system have trouble reading street signs while walking. Why? Check your answer on page 214.
Somatosensation
The somatosensory system, the sensation of the body and its movements, is not one sense but many, including discriminative touch (which identifies the shape of an object), deep pressure, cold, warmth, pain, itch, tickle, and the position and movement of joints.
Somatosensory Receptors
The skin has many kinds of somatosensory receptors, including those listed in Figure 7.11. Table 7.1 lists the probable functions of these and other receptors (Iggo & Andres, 1982; Paré, Smith, & Rice, 2002). However, each receptor probably contributes to several kinds of somatosensory experience. Many respond to more than one kind of stimulus, such as touch and temperature. Others (not in the table) respond to deep stimulation, joint movement, or muscle movement. A touch receptor may be a simple bare neuron ending (e.g., many pain receptors), an elaborated neuron ending (Ruffini endings and Meissner’s corpuscles), or a bare ending surrounded by nonneural cells that modify its function (Pacinian corpuscles). Stimulation of a touch receptor opens sodium channels in the axon, thereby starting an action potential (Price et al., 2000). One example of a receptor is the Pacinian corpuscle, which detects sudden displacements or highfrequency vibrations on the skin (Figure 7.12). Inside its onionlike outer structure is the neuron membrane. When mechanical pressure bends the membrane, its resistance to sodium flow decreases, and sodium ions enter, depolarizing the membrane (Loewenstein, 1960). Only a sudden or vibrating stimulus can bend the membrane; the onionlike outer structure provides mechanical support that resists gradual or constant pressure on the skin. Certain chemicals can also stimulate the receptors for heat and cold. The heat receptor responds to capsaicin, the chemical that makes jalapeños and similar peppers taste hot. The coolness receptor responds to menthol and less strongly to mint (McKemy, Neuhausser, & Julius, 2002). So advertisements mentioning “the cool taste of menthol” are literally correct.
EXTENSIONS AND APPLICATIONS
Tickle
Figure 7.10 Structures for vestibular sensation (a) Location of the vestibular organs. (b) Structures of the vestibular organs. (c) Cross-section through a utricle. Calcium carbonate particles, called otoliths, press against different hair cells depending on the direction of tilt and rate of acceleration of the head.
The sensation of tickle is interesting but poorly understood. Why does it exist at all? Why do you laugh if someone rapidly fingers your armpit, neck, or the soles of your feet? Chimpanzees respond to similar sensations with bursts of panting that resemble laughter. And yet tickling is unlike humor. Most people do not enjoy
Table 7.1 Somatosensory Receptors and Their Possible Functions

Receptor | Location | Responds to
Free nerve ending (unmyelinated or thinly myelinated axons) | Near base of hairs and elsewhere in skin | Pain, warmth, cold
Hair-follicle receptors | Hair-covered skin | Movement of hairs
Meissner’s corpuscles | Hairless areas | Sudden displacement of skin; low-frequency vibration (flutter)
Pacinian corpuscles | Both hairy and hairless skin | Sudden displacement of skin; high-frequency vibration
Merkel’s disks | Both hairy and hairless skin | Tangential forces across skin
Ruffini endings | Both hairy and hairless skin | Stretch of skin
Krause end bulbs | Mostly or entirely in hairless areas, perhaps including genitals | Uncertain
Figure 7.11 Some sensory receptors found in the skin, the human body’s largest organ Different receptor types (such as Meissner’s corpuscles, pain receptors, and Ruffini endings) respond to different stimuli, as described in Table 7.1.
being tickled for long—if at all—and certainly not by a stranger. If one joke makes you laugh, you are more likely than usual to laugh at the next joke. But being tickled doesn’t change your likelihood of laughing at a joke (C. R. Harris, 1999). Why can’t you tickle yourself? It is for the same reason that you can’t surprise yourself. When you touch yourself, your brain compares the resulting stimulation to the “expected” stimulation and generates a weak somatosensory response. When someone else touches you, the response is stronger (Blakemore, Wolpert, & Frith, 1998). Evidently, while you are trying to tickle yourself, the brain gets messages about the movements, which subtract from the touch sensations. (Actually, some people can tickle themselves—a little—if they tickle the right side of the body with the left hand or the left side with the right hand. Try it.)
Figure 7.12 A Pacinian corpuscle Pacinian corpuscles are a type of receptor that responds best to sudden displacement of the skin or to high-frequency vibrations. They respond only briefly to steady pressure on the skin. The onionlike outer structure provides a mechanical support to the neuron inside it so that a sudden stimulus can bend it but a sustained stimulus cannot.
Input to the Spinal Cord and the Brain Information from touch receptors in the head enters the central nervous system (CNS) through the cranial nerves. Information from receptors below the head enters the spinal cord and passes toward the brain through the 31 spinal nerves (Figure 7.13), including 8 cervical nerves, 12 thoracic nerves, 5 lumbar nerves, 5 sacral nerves, and 1 coccygeal nerve. Each spinal nerve has a sensory component and a motor component. Each spinal nerve innervates, or connects to, a limited area of the body. The skin area connected to a
single sensory spinal nerve is called a dermatome (Figure 7.14). For example, the third thoracic nerve (T3) innervates a strip of skin just above the nipples as well as the underarm area. But the borders between dermatomes are not so distinct as Figure 7.14 implies; there is actually an overlap of one-third to one-half between adjacent pairs. The sensory information that enters the spinal cord travels in well-defined pathways toward the brain. For example, the touch pathway in the spinal cord is separate from the pain pathway, and the pain pathway itself has different populations of axons conveying sharp pain, slow burning pain, and painfully cold sensations (Craig, Krout, & Andrew, 2001). One patient had an illness that destroyed all the myelinated somatosensory axons from below his nose but spared all his unmyelinated axons. He still felt temperature, pain, and itch, which depend on the unmyelinated axons. However, he had no sense of
Figure 7.13 The human central nervous system (CNS) Spinal nerves from each segment of the spinal cord exit through the correspondingly numbered opening between vertebrae. (Source: From C. Starr and R. Taggart, Biology: The Unity and Diversity of Life, 5th edition, 1989, 338. Copyright © 1989 Wadsworth Publishing Co. Reprinted by permission.)
Figure 7.14 Dermatomes innervated by the 31 sensory spinal nerves Areas I, II, and III of the face are not innervated by the spinal nerves but instead by three branches of the fifth cranial nerve. Although this figure shows distinct borders, the dermatomes actually overlap one another by about one-third to one-half of their width.
touch below the nose. Curiously, if someone lightly stroked his skin, all he experienced was a vague sense of pleasure. Recordings from his brain indicated no arousal of his primary somatosensory cortex but increased activity in the insular cortex, an area responsive to taste and to several kinds of emotional experience (Olausson et al., 2002). That is, evidently, the pleasurable, sensual aspects of touch depend on unmyelinated axons and do not require feeling the touch. The various areas of the somatosensory thalamus send their impulses to different areas of the primary somatosensory cortex, located in the parietal lobe. Two parallel strips in the somatosensory cortex respond mostly to touch on the skin; two other parallel strips respond mostly to deep pressure and movement of the joints and muscles (Kaas, 1983). In short, various aspects of body sensation remain at least partly separate all the way to the cortex. Along each strip of somatosensory cortex, different subareas respond to different areas of the body; that is, the somatosensory cortex acts as a map of body location, as shown in Figure 4.24, p. 99. Just as conscious vision and hearing depend on the primary visual and auditory cortex, the primary somatosensory cortex is essential for conscious touch experiences. When weak, brief stimuli are applied to the fingers, people are consciously aware of only those that produce a certain minimum level of arousal in the primary somatosensory cortex (Palva, LinkenkaerHansen, Näätäen, & Palva, 2005). If someone touches you quickly on two nearby points on the hand, you will probably have an illusory experience of a single touch midway between those two points. When that happens, the activity in the primary somatosensory cortex corresponds to that midway point (Chen, Friedman, & Roe, 2003). In other words, the activity corresponds to what you experience, not what has actually stimulated your receptors. After damage to the somatosensory cortex, people generally experience an impairment of body perceptions. One patient with Alzheimer’s disease, who had damage in the somatosensory cortex as well as elsewhere, had much trouble putting her clothes on correctly, and she could not point correctly in response to such directions as “show me your elbow” or “point to my knee,” although she pointed correctly to various objects in the room. When told to touch her elbow, her most frequent response was to feel her wrist and arm and suggest that the elbow was probably around there, somewhere. She acted as if she had only a blurry map of her own body parts (Sirigu, Grafman, Bressler, & Sunderland, 1991). The primary somatosensory cortex sends output to additional cortical areas that are less clearly related to any map of the body. Activity in the primary somatosensory cortex increases during attention to touch stimuli (Krupa, Weist, Shuler, Laubach, & Nicolelis, 2004),
but activity in the additional somatosensory areas depends even more strongly on attention. For example, they are only mildly activated when someone merely touches something and more strongly activated when someone touches something and tries to determine what it is (Young et al., 2004).
STOP & CHECK
2. In what way is somatosensation several senses instead of one? 3. What evidence suggests that the somatosensory cortex is essential for conscious perception of touch? Check your answers on page 214.
Pain
Pain, the experience evoked by a harmful stimulus, directs our attention toward a danger and holds our attention. The prefrontal cortex, which responds briefly to almost any stimulus, responds to a painful stimulus as long as it lasts (Downar, Mikulis, & Davis, 2003). Have you ever wondered why morphine decreases pain after surgery but not during the surgery itself? Or why some people seem to tolerate pain so much better than others? Or why even the slightest touch on sunburned skin is so painful? Research on pain addresses these and other questions.
Pain Stimuli and the Pain Pathways
Pain sensation begins with the least specialized of all receptors, a bare nerve ending (see Figure 7.11). Some pain receptors also respond to acids and heat above 43°C (110°F). Capsaicin, a chemical found in hot peppers such as jalapeños, also stimulates those receptors. Capsaicin can produce burning or stinging sensations on many parts of your body, as you may have experienced if you ever touched the insides of hot peppers and then rubbed your eyes. Animals react to capsaicin by sweating or salivating, as if they were indeed hot (Caterina et al., 2000). The axons carrying pain information have little or no myelin and therefore conduct impulses relatively slowly, in the range of 2 to 20 meters per second (m/s). The thicker and faster axons convey sharp pain; the thinnest ones convey duller pain, such as postsurgical pain. The axons enter the spinal cord, where they release two neurotransmitters. Mild pain releases the neurotransmitter glutamate, whereas stronger pain releases both glutamate and substance P (Cao et al., 1998). Mice that lack receptors for substance P react nor-
mally to mild pain, but they react to a severe injury as if it were a mild injury (DeFelipe et al., 1998). That is, without substance P, they do not detect the increased intensity. The pain-sensitive cells in the spinal cord relay the information to several sites in the brain. One pathway extends to the ventral posterior nucleus of the thalamus and from there to the somatosensory cortex in the parietal lobe (which also responds to touch and other body sensations). The somatosensory cortex detects the nature of the pain and its location on the body. It responds both to painful stimuli and to signals that warn of impending pain (Babiloni et al., 2005). Painful stimuli also activate a pathway through the reticular formation of the medulla and then to several of the central nuclei of the thalamus, the amygdala, hippocampus, prefrontal cortex, and cingulate cortex (Figure 7.15). These areas react not to the sensation itself but to the unpleasant emotional quality associated with it (Hunt & Mantyh, 2001). If you watch someone—especially someone you care about—expe-
riencing pain, you experience a “sympathetic pain” that shows up as activity in your cingulate cortex but not in your somatosensory cortex (Singer et al., 2004).
STOP & CHECK
4. How do jalapeños produce a hot sensation? 5. What would happen to a pain sensation if glutamate receptors in the spinal cord were blocked? What if substance P receptors were blocked? Check your answers on page 214.
Ways of Relieving Pain
Figure 7.15 Representation of pain in the human brain A pathway to the thalamus, and from there to the somatosensory cortex, conveys the sensory aspects of pain. A separate pathway to the hypothalamus, amygdala, and other structures produces the emotional aspects. The figure labels the somatosensory cortex, cingulate cortex, thalamus, hypothalamus, amygdala, hippocampus, the skin, and a cross-section through the spinal cord. (Source: Hunt & Mantyh, 2001)
Pain alerts us to danger, but once we are aware of it, further pain messages accomplish little. Our brains put the brakes on prolonged pain by opioid mechanisms—systems that respond to opiate drugs and similar chemicals. Candace Pert and Solomon Snyder (1973) discovered that opiates exert their effects by binding to certain receptors found mostly in the spinal cord and the periaqueductal gray area of the midbrain. Later researchers found that opiate receptors act by blocking the release of substance P (Kondo et al., 2005; Reichling, Kwiat, & Basbaum, 1988) (Figures 7.16 and 7.17). The discovery of opiate receptors was exciting because it was the first evidence that opiates act on the nervous system rather than the injured tissue. Furthermore, it implied that the nervous system must have its own opiate-type chemicals. Two of them are met-enkephalin and leu-enkephalin. The term enkephalin (en-KEFF-ah-lin) reflects the fact that these chemicals were first found in the brain, or encephalon. The enkephalins are chains of amino acids; met-enkephalin ends with the amino acid methionine and leu-enkephalin ends with leucine. Although the enkephalins are chemically unlike morphine, they and several other transmitters, including β-endorphin, bind to the same receptors as morphine. Collectively, the transmitters that attach to the same receptors as morphine are known as endorphins—a contraction of endogenous morphines. Both pleasant and unpleasant stimuli can release endorphins and thereby inhibit pain. Inescapable pain
Figure 7.16 Synapses responsible for pain and its inhibition The pain afferent neuron releases substance P as its neurotransmitter. Another neuron releases enkephalin at presynaptic synapses; the enkephalin inhibits the release of substance P and therefore alleviates pain.
Figure 7.17 The periaqueductal gray area, where electrical stimulation relieves pain Periaqueductal means “around the aqueduct,” a passageway of cerebrospinal fluid between the third and fourth ventricles. In the pathway diagrammed here, certain kinds of painful and other stimuli release endorphins, which inhibit an inhibitory cell and therefore excite the periaqueductal gray area; it excites an area in the rostral part of the medulla, which in turn inhibits the release of substance P in the areas of the spinal cord that receive pain messages from axons carrying pain messages.
is especially potent at stimulating endorphins and inhibiting further pain (Sutton et al., 1997). Presumably, the evolutionary function is that continued intense sensation of pain accomplishes nothing when escape is impossible. Endorphins are also released during sex and when you listen to thrilling music that sends a chill down your spine (A. Goldstein, 1980). Those experiences tend to decrease pain. You decrease your endorphin release if you brood about sad memories (Zubieta et al., 2003). The discovery of endorphins provides physiological details for the gate theory, proposed decades earlier by Ronald Melzack and P. D. Wall (1965). The gate theory was an attempt to explain why some people withstand pain better than others and why the same injury hurts worse at some times than others. People differ partly because of genetics (Wei et al., 2001), but experiences account for much of the variation too. According to the gate theory, spinal cord neurons that receive messages from pain receptors also receive input from touch receptors and from axons descending from the brain. These other inputs can close the “gates” for the pain messages—at least
partly by releasing endorphins. Although some details of Melzack and Wall’s gate theory turned out wrong, the general principle is valid: Nonpain stimuli can modify the intensity of pain. You have no doubt noticed this principle yourself. When you have an injury, you can decrease the pain by gently rubbing the skin around it or by concentrating on something else. Morphine does not block the sharp pain of the surgeon’s knife; for that, you need a general anesthetic. Instead, morphine blocks the slower, duller pain that lingers after surgery. Larger diameter axons, unaffected by morphine, carry sharp pain. Thinner axons convey dull postsurgical pain, and morphine does inhibit them (Taddese, Nah, & McCleskey, 1995). Marijuana and related chemicals also block certain kinds of pain by stimulating anandamide and 2-AG receptors in the midbrain (Hohmann et al., 2005). Another approach to relieving pain uses capsaicin. As mentioned, capsaicin produces a burning or painful sensation. It does so by releasing substance P. However, it releases substance P faster than neurons can resynthesize it, leaving the cells less able to send pain messages. Also, high doses of capsaicin damage pain receptors. Capsaicin rubbed onto a sore shoulder, an arthritic joint, or other painful area produces a temporary burning sensation followed by a longer period of decreased pain. However, do not try eating hot peppers to reduce pain in, say, your legs. The capsaicin you eat passes through the digestive system without entering the blood. Therefore, eating it will not relieve your pain—unless it is your tongue that hurts (Karrer & Bartoshuk, 1991). People also sometimes experience pain relief from placebos. A placebo is a drug or other procedure with no pharmacological effects. In many experiments, the experimental group receives the potentially active treatment, and the control group receives a placebo. Theoretically, placebos should not have much effect, and in most kinds of medical research, they don’t, but they sometimes do relieve pain (Hróbjartsson & Gøtzsche, 2001). Evidently, the pain decreases just because people expect it to decrease. People are not just saying the pain decreased; brain scans indicate that placebos decrease the brain’s response to painful stimuli. However, a placebo’s effects are mainly on emotion, not sensation. That is, a placebo decreases the response in the cingulate cortex but not the somatosensory cortex (Petrovic, Kalso, Petersson, & Ingvar, 2002; Wager et al., 2004). Similarly, when a hypnotized person is told to feel no pain, activity decreases in the cingulate cortex but not in the somatosensory cortex (Rainville, Duncan, Price, Carrier, & Bushnell, 1997). The hypnotized person still feels the painful stimulus but does not react emotionally. Do placebos and hypnotic suggestion decrease pain just by increasing relaxation? No. In one study, people were given injections of capsaicin (which produces a
burning sensation) into both hands and both feet. They were also given a placebo cream on one hand or foot and told that it was a powerful painkiller. People reported decreased pain in the area that got the placebo but normal pain on the other three extremities (Benedetti, Arduino, & Amanzio, 1999). If placebos were simply producing relaxation, the relaxation should have affected all four extremities, not just the area where the placebo was applied. The mechanism of placebos’ effects on pain is not yet understood.
Sensitization of Pain In addition to mechanisms for decreasing pain, the body has mechanisms that increase pain. For example, even a light touch on sunburned skin is painful. Damaged or inflamed tissue, such as sunburned skin, releases histamine, nerve growth factor, and other chemicals that help repair the damage but also magnify the responses in nearby pain receptors (Devor, 1996; Tominaga et al., 1998), including those responsive to heat (Chuang et al., 2001). Nonsteroidal anti-inflammatory drugs, such as ibuprofen, relieve pain by reducing the release of chemicals from damaged tissues (Hunt & Mantyh, 2001). In animal experiments, the neurotrophin GDNF has shown even greater potential to block the heightened pain sensitivity after tissue damage (Boucher et al., 2000). Sometimes people suffer chronic pain long after an injury has healed. Why this pain develops in some people and not others remains unknown, but the mechanism is partly understood. As we shall see in Chapter 13, a barrage of intense stimulation of a neuron can “potentiate” its synaptic receptors so that it responds more vigorously to the same input in the future. That mechanism is central to learning and memory, but unfortunately, pain activates the mechanism as well. An intense barrage of painful stimuli potentiates the cells responsive to pain so that they respond more vigorously to minor stimulation in the future (Ikeda, Heinke, Ruscheweyh, & Sandkühler, 2003). In effect, the brain learns how to feel pain, and it gets better at it. Therefore, to prevent chronic pain, it is important to limit pain from the start. Suppose you are about to undergo major surgery. Which approach is best? A. Start taking morphine before the surgery. B. Begin morphine soon after awakening from surgery. C. Postpone the morphine as long as possible and take as little as possible. Perhaps surprisingly, the research supports answer A: Start the morphine before the surgery (Keefe & France, 1999). Allowing pain messages to bombard the brain during and after the surgery increases the sensitivity of the pain nerves and their receptors (Malmberg, Chen, Tonagawa, & Basbaum, 1997). People who
begin taking morphine before surgery need less of it afterward. For more information about pain, including links to research reports, check either of these websites: http://www.painnet.com/ http://www.ampainsoc.org/
STOP & CHECK
6. How do opiates relieve pain? Why do they relieve dull pain but not sharp pain? 7. How do the pain-relieving effects of placebos differ from those of morphine? 8. How do ibuprofen and other nonsteroidal anti-inflammatory drugs decrease pain? 9. Why is it preferable to start taking morphine before an operation instead of waiting until later? Check your answers on page 214.
Itch
Have you ever wondered, “What is itch, anyway? Is it a kind of pain? A kind of touch? Or something else altogether?” For many years, no one knew, and we still have not identified the receptors responsible for itch. However, we do know that when your skin rubs against certain plants, when an insect crawls along your skin, or when you have mild tissue damage, your skin releases histamines, and histamines produce an itching sensation. Researchers have identified a spinal cord pathway of itch sensation (Andrew & Craig, 2001). Histamines in the skin excite axons of this pathway, and other kinds of skin stimuli do not. Even when a stimulus releases histamines, however, this pathway is slow to respond, and when it does respond, the axons transmit impulses at the unusually slow velocity of only about half a meter per second. At that rate, an action potential from your foot needs 3 or 4 seconds to reach your head. For a giraffe or elephant, the delay is even longer. You might try rubbing some rough leaves against your ankle. Note how soon you feel the touch sensation and how much more slowly you notice the itchiness. Itch is useful because it directs you to scratch the itchy area and presumably remove whatever is irritating your skin. Vigorous scratching produces mild pain, and pain inhibits itch. Opiates, which decrease pain, increase itch (Andrew & Craig, 2001). This inhibitory relationship between pain and itch is the strongest evidence that itch is not a type of pain.
This research helps explain an experience that you may have had. When a dentist gives you Novocain before drilling a tooth, part of your face becomes numb. An hour or more later, as the Novocain’s effects start to wear off, you may feel an intense itchy sensation in the numb portion of your face. But when you try to scratch the itch, you feel nothing because the touch and pain sensations are still numb. (Evidently, the effects of Novocain wear off faster for itch than for touch and pain axons.) The fact that you can feel itch at this time is evidence that it is not just a form of touch or pain. It is interesting that scratching the partly numb skin does not relieve the itch. Evidently, your scratching has to produce some pain to decrease the itch.
STOP & CHECK
10. Would antihistamine drugs increase or decrease itch sensations? What about opiates? Check your answers on page 214.
Module 7.2 In Closing: The Mechanical Senses We humans generally pay so much attention to vision and hearing that we take our mechanical senses for granted. However, a mere moment’s reflection should reveal how critical they are for survival. At every moment, your vestibular sense tells you whether you are standing or falling; your sense of pain can tell you that you have injured yourself. If you moved to a televisionlike universe with only vision and hearing, you might get by if you had already learned what all the sights and sounds mean. But it is hard to imagine how you could have learned their meaning without much previous experience of touch and pain.
Summary
1. The vestibular system detects the position and acceleration of the head and adjusts body posture and eye movements accordingly. (p. 205) 2. The somatosensory system depends on a variety of receptors that are sensitive to different kinds of stimulation of the skin and internal tissues. The brain maintains several parallel somatosensory representations of the body. (p. 206) 3. Activity in the primary somatosensory cortex corresponds to what someone is experiencing, even
if it is illusory and not the same as the actual stimulation. (p. 209) 4. Injurious stimuli excite pain receptors, which are bare nerve endings. Some pain receptors also respond to acids, heat, and capsaicin—the chemical that makes hot peppers taste hot. (p. 209) 5. Axons conveying pain stimulation to the spinal cord and brainstem release glutamate in response to moderate pain and a combination of glutamate and substance P for stronger pain. (p. 209) 6. Painful information takes two routes to the brain. A route leading to the somatosensory cortex conveys the sensory information, including location in the body. A route to the cingulate cortex and several other structures conveys the unpleasant emotional aspect. (p. 210) 7. Opiate drugs attach to the brain’s endorphin receptors. Endorphins decrease pain by blocking release of substance P and other transmitters from pain neurons. Both pleasant and unpleasant experiences can release endorphins. (p. 210) 8. A harmful stimulus may give rise to a greater or lesser degree of pain depending on other current and recent stimuli. According to the gate theory of pain, other stimuli can close certain gates and block the transmission of pain. (p. 211) 9. Chronic pain bombards pain synapses with repetitive input, and increases their responsiveness to later stimuli, through a process like learning. (p. 212) 10. Morphine is most effective as a painkiller if it is used promptly. Allowing the nervous system to be bombarded with prolonged pain messages increases the overall sensitivity to pain. (p. 212) 11. Skin irritation releases histamine, which excites a spinal pathway responsible for itch. The axons of that pathway transmit impulses very slowly. They can be inhibited by pain messages. (p. 213)
Answers to STOP & CHECK Questions
1. The vestibular system enables the brain to shift eye movements to compensate for changes in head position. Without feedback about head position, a person would not be able to correct the eye movements, and the experience would be like watching a jiggling book page. (p. 205) 2. We have several types of receptors, sensitive to touch, heat, and so forth, and different parts of the somatosensory cortex respond to different kinds of skin stimulation. (p. 209) 3. People are consciously aware of only those touch stimuli that produce sufficient arousal in the primary somatosensory cortex. (p. 209) 4. Jalapeños and other hot peppers contain capsaicin, which stimulates neurons that are sensitive to pain, acids, and heat. (p. 210) 5. Blocking glutamate receptors would eliminate weak to moderate pain. (However, doing so would not be a good strategy for killing pain. Glutamate is the most abundant transmitter, and blocking it would disrupt practically everything the brain does.) Blocking substance P receptors makes intense pain feel mild. (p. 210) 6. Opiates relieve pain by stimulating the same receptors as endorphins, thereby blocking the release of substance P. Endorphins block messages from the thinnest pain fibers, conveying dull pain, and not from thicker fibers, carrying sharp pain. (p. 213) 7. Placebos reduce the emotional reaction to pain but have less effect than morphine on the sensation itself. (p. 213) 8. Anti-inflammatory drugs block the release of chemicals from damaged tissues, which would otherwise magnify the effects of pain receptors. (p. 213) 9. The morphine will not decrease the sharp pain of the surgery itself. However, it will decrease the subsequent barrage of pain stimuli that can sensitize pain neurons. (p. 213) 10. Antihistamines decrease itch; opiates increase it. (p. 213)
Thought Questions
1. Why is the vestibular sense generally useless under conditions of weightlessness?
2. How could you determine whether hypnosis decreases pain by increasing the release of endorphins?
Module 7.3
The Chemical Senses
Suppose you had the godlike power to create a new species of animal, but you could equip it with only one sensory system. Which sense would you give it? Your first impulse might be to choose vision or hearing because of their importance to humans. But an animal with only one sensory system is not going to be much like humans, is it? To have any chance of survival, it will probably have to be small and slow, maybe even one-celled. What sense will be most useful to such an animal? Most theorists believe that the first sensory system of the earliest animals was a chemical sensitivity (G. H. Parker, 1922). A chemical sense enables a small animal to find food, avoid certain kinds of danger, and even locate mates. Now imagine that you have to choose one of your senses to lose. Which one will it be? Most of us would not choose to lose vision, hearing, or touch. Losing pain sensitivity can be dangerous. You might choose to sacrifice your smell or taste. Curious, isn’t it? If an animal is going to survive with only one sense, it almost has to be a chemical sense, and yet to humans, with many other well-developed senses, the chemical senses seem dispensable. Perhaps we underestimate their importance.
General Issues About Chemical Coding Suppose you run a bakery and need to send frequent messages to your supplier down the street. Suppose further that you can communicate only by ringing three large bells on the roof of your bakery. You would have to work out a code. One possibility would be to label the three bells: The high-pitched bell means “I need flour.” The medium-pitched bell means “I need sugar.” And the low-pitched bell means “I need eggs.” The more you need something, the faster you ring the bell. We shall call this system the labeled-line code because each bell has a single unchanging label. Of course, it is limited to flour, sugar, and eggs.
Another possibility would be to set up a code that depends on a relationship among the three bells. Ringing the high and medium bells equally means that you need flour. The medium and low bells together call for sugar; the high and low bells together call for eggs. Ringing all three together means you need vanilla extract. Ringing mostly the high bell while ringing the other two bells slightly means you need hazelnuts. And so forth. We call this the across-fiber pattern code because the meaning depends on the pattern across bells. This code is versatile and can be highly useful, unless we make it too complicated.

A sensory system could theoretically use either type of coding. In a system relying on the labeled-line principle, each receptor would respond to a limited range of stimuli and send a direct line to the brain. In a system relying on the across-fiber pattern principle, each receptor responds to a wider range of stimuli and contributes to the perception of each of them. In other words, a given response by a given sensory axon means little unless the brain knows what the other axons are doing at the same time (Erickson, 1982). Vertebrate sensory systems probably do not have any pure labeled-line codes.

In color perception, we encountered a good example of an across-fiber pattern code. For example, a medium-wavelength cone might produce the same level of response to a moderately bright green light, a brighter blue light, or a white light, so the response of that cone is ambiguous unless the brain compares it to other cones. In auditory pitch perception, the responses of the hair cell receptors are narrowly tuned, but even in this case, the meaning of a particular receptor’s response depends on the context: A given receptor may respond best to a certain high-frequency tone, but it also responds in phase with a number of low-frequency tones (as do all the other receptors). Each receptor also responds to white noise (static) and to various mixtures of tones. Auditory perception depends on a comparison of responses across all the receptors. Similarly, each taste and smell stimulus excites several kinds of neurons, and the meaning of a particular response by a particular neuron depends on the context of responses by other neurons.
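If you think of the bells as data channels, the difference between the two codes is easy to state in a few lines of code. The short Python sketch below is purely illustrative; the channel names, response values, and stored patterns are invented for the bakery analogy rather than drawn from any experiment. It shows why a labeled-line reader needs to know only which channel is most active, whereas an across-fiber reader must compare the whole pattern.

```python
def decode_labeled_line(responses):
    """Labeled-line reading: each channel has one fixed meaning, so the reader
    only needs to know which single channel is most active."""
    meanings = {"high": "flour", "medium": "sugar", "low": "eggs"}
    return meanings[max(responses, key=responses.get)]


def decode_across_fiber(responses):
    """Across-fiber pattern reading: meaning depends on the relative activity
    of all channels, so the reader compares the whole normalized pattern."""
    patterns = {
        (1.0, 1.0, 0.0): "flour",    # high and medium bells rung equally
        (0.0, 1.0, 1.0): "sugar",    # medium and low
        (1.0, 0.0, 1.0): "eggs",     # high and low
        (1.0, 1.0, 1.0): "vanilla",  # all three together
    }
    vector = [responses["high"], responses["medium"], responses["low"]]
    peak = max(vector) or 1.0                       # avoid dividing by zero
    normalized = tuple(round(v / peak, 2) for v in vector)
    best = min(patterns,
               key=lambda p: sum((a - b) ** 2 for a, b in zip(p, normalized)))
    return patterns[best]


signal = {"high": 9.0, "medium": 9.0, "low": 9.0}    # all three bells ringing
print(decode_labeled_line(signal))    # reports only one of its fixed labels ("flour")
print(decode_across_fiber(signal))    # recognizes the whole pattern as "vanilla"
```

The across-fiber decoder can represent messages (such as "vanilla") that no single channel could signal by itself, which is the versatility the bakery analogy is meant to convey.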
STOP & CHECK
1. Of the following, which use a labeled-line code and which use an across-fiber pattern code?
A. A fire alarm
B. A light switch
C. The shift key plus another key on a keyboard
Check your answers on page 227.
Taste
Taste refers to the stimulation of the taste buds, the receptors on the tongue. When we talk about the taste of food, we generally mean flavor, which is a combination of taste and smell. Whereas other senses remain separate throughout the cortex, taste and smell axons converge onto some of the same cells in an area called the endopiriform cortex (Fu, Sugai, Yoshimura, & Onoda, 2004). That convergence presumably enables taste and smell to combine their influences on food selection.

Taste Receptors

The receptors for taste are not true neurons but modified skin cells. Like neurons, taste receptors have excitable membranes and release neurotransmitters to excite neighboring neurons, which in turn transmit information to the brain. Like skin cells, however, taste receptors are gradually sloughed off and replaced, each one lasting about 10 to 14 days (Kinnamon, 1987). Mammalian taste receptors are in taste buds, located in papillae, structures on the surface of the tongue (Figure 7.18). A given papilla may contain from zero to ten or more taste buds (Arvidson & Friberg, 1980), and each taste bud contains about 50 receptor cells.

In adult humans, taste buds are located mainly along the outside edge of the tongue, with few or none in the center. You can demonstrate this principle as follows: Soak a small cotton swab in sugar water, saltwater, or vinegar. Then touch it lightly on the center of your tongue, not too far toward the back. If you get the position right, you will experience little or no taste. Then try it again on the edge of your tongue and notice how much stronger the taste is.

Now change the procedure a bit. Wash your mouth out with water and prepare a cotton swab as before. Touch the soaked portion to one edge of your tongue and then slowly stroke it to the center of your tongue. Now it will seem as if you are moving the taste to the center of your tongue. In fact, you are getting only a touch sensation from the center of your tongue; you attribute the taste you had on the side of your tongue to every other spot you stroke (Bartoshuk, 1991).

Figure 7.18 The organs of taste (a) The tip, back, and sides of the tongue are covered with taste buds. Taste buds are located in papillae. (b) Photo showing cross-section of a taste bud. Each taste bud contains about 50 receptor cells. (Figure labels: taste bud close-up; vallate (or circumvallate), foliate, and fungiform papillae. Photo: © SIU/Peter Arnold, Inc.)

How Many Kinds of Taste Receptors?

Traditionally, people in Western society have described sweet, sour, salty, and bitter as the “primary” tastes. However, some tastes defy categorization in terms of these four labels (Schiffman & Erickson, 1980; Schiffman, McElroy, & Erickson, 1980). How could we determine how many kinds of taste we have?

EXTENSIONS AND APPLICATIONS

Chemicals That Alter the Taste Buds

One way to identify taste receptor types is to find procedures that alter one receptor but not others. For example, chewing a miracle berry (native to West Africa) gives little taste itself but temporarily changes sweet receptors. Miracle berries contain a protein, miraculin, that modifies sweet receptors in such a way that they can be stimulated by acids (Bartoshuk, Gentile, Moskowitz, & Meiselman, 1974). If you ever get a chance to chew a miracle berry (and I do recommend it), for the next half hour anything acidic will taste sweet in addition to its usual sour taste. A colleague and I once spent an evening experimenting with miracle berries. We drank straight lemon juice, sauerkraut juice, even vinegar. All tasted extremely sweet, but we awoke the next day with mouths full of ulcers. Miraculin was, for a time, commercially available in the United States as a diet aid. The idea was that dieters could coat their tongue with a miraculin pill and then enjoy unsweetened lemonade and so forth, which would taste sweet but provide almost no calories.

Have you ever drunk orange juice just after brushing your teeth? Did you wonder why something so wonderful suddenly tasted so bad? Most toothpastes contain sodium lauryl sulfate, a chemical that intensifies bitter tastes and weakens sweet ones, apparently by coating the sweet receptors and preventing anything from reaching them (DeSimone, Heck, & Bartoshuk, 1980; Schiffman, 1983).

Another taste-modifying substance is an extract from the plant Gymnema sylvestre (R. A. Frank, Mize, Kennedy, de los Santos, & Green, 1992). Some health-food and herbal-remedy stores sell dried leaves of Gymnema sylvestre, from which you can brew a tea. (Gymnema sylvestre pills won’t work for this demonstration.) Soak your tongue in the tea for about 30 seconds and then try tasting various substances. Salty, sour, and bitter substances taste the same as usual, but sugar becomes utterly tasteless, as if you had sand on
your tongue. Candies lose their sweetness and now taste sour, bitter, or salty. (Those tastes were already present, but you barely noticed them because the sweetness dominated.) Curiously, the artificial sweetener aspartame (NutraSweet®) loses only some, not all, of its sweetness, implying that it stimulates an additional receptor besides the sugar receptor (Schroeder & Flannery-Schroeder, 2005). Note: Anyone with diabetes should refrain from this demonstration because Gymnema sylvestre also alters sugar absorption in the intestines.
Further behavioral evidence for separate types of taste receptors comes from studies of the following type: Soak your tongue for 15 seconds in a sour solution, such as unsweetened lemon juice. Then try tasting some other sour solution, such as dilute vinegar. You will find that the second solution tastes less sour than usual. Depending on the concentrations of the lemon juice and vinegar, the second solution may not taste sour at all. This phenomenon, called adaptation, reflects the fatigue of receptors sensitive to sour tastes. Now try tasting something salty, sweet, or bitter. These substances taste about the same as usual. In short, you experience little cross-adaptation—reduced response to one taste after exposure to another (McBurney & Bartoshuk, 1973). Evidently, the sour receptors are different from the other taste receptors. Similarly, you can show that salt receptors are different from the others and so forth.

Although we have long known that people have at least four kinds of taste receptors, several kinds of evidence suggested a fifth as well, that of glutamate, as found in monosodium glutamate (MSG). Researchers in fact located a glutamate taste receptor, which closely resembles the brain’s receptors for glutamate as a neurotransmitter (Chaudhari, Landin, & Roper, 2000). The taste of glutamate resembles that of unsalted chicken broth. The English language did not have a word for this taste, but Japanese did, so English-speaking researchers have adopted the Japanese word, umami. Researchers have also reported a fat receptor in the taste buds of mice and rats, so perhaps we shall add the taste of fat as a sixth kind of taste (Laugerette et al., 2005).

In addition to the fact that different chemicals excite different receptors, they also produce different rhythms of action potentials. For example, the following two records have the same total number of action potentials in the same amount of time but different temporal patterns:
(Figure: two records of action potentials plotted against time.)
Researchers noticed that sweet, salty, and bitter chemicals produced different patterns of activity in the taste-sensitive area of the medulla. They recorded the pattern while rats were drinking quinine (a bitter substance) and later used an electrode to generate the same patterns while rats were drinking water. The rats then avoided the water, as if it tasted bad (Di Lorenzo, Hallock, & Kennedy, 2003). Evidently, the “code” to represent a taste includes the rhythm of activity and not just which cells are most active.
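To make the idea of a temporal code concrete, here is a minimal sketch. The spike times are invented for illustration, not recorded data; the two trains contain the same number of action potentials over the same interval and differ only in their rhythm, the intervals between spikes.

```python
# Two hypothetical spike trains (times in ms); the values are made up purely
# to illustrate a temporal code, not taken from any recording.
evenly_spaced = [10, 20, 30, 40, 50, 60, 70, 80]
bursty        = [10, 12, 14, 16, 60, 62, 64, 66]


def intervals(spike_times):
    """Inter-spike intervals: the 'rhythm' a downstream neuron could read."""
    return [later - earlier for earlier, later in zip(spike_times, spike_times[1:])]


assert len(evenly_spaced) == len(bursty)   # same number of spikes, same window...
print(intervals(evenly_spaced))            # [10, 10, 10, 10, 10, 10, 10]
print(intervals(bursty))                   # [2, 2, 2, 44, 2, 2, 2]  ...different rhythm
```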
Mechanisms of Taste Receptors

The saltiness receptor is particularly simple. Recall that a neuron produces an action potential when sodium ions cross its membrane. A saltiness receptor, which detects the presence of sodium, simply permits sodium ions on the tongue to cross its membrane. The higher the concentration of sodium on the tongue, the greater the receptor’s response. Chemicals such as amiloride, which prevents sodium from crossing the membrane, reduce the intensity of salty tastes (DeSimone, Heck, Mierson, & DeSimone, 1984; Schiffman, Lockhead, & Maes, 1983). Sourness receptors operate on a different principle. When an acid binds to the receptor, it closes potassium channels, preventing potassium from leaving the cell. The result is an increased accumulation of positive charges within the neuron and therefore a depolarization of the membrane (Shirley & Persaud, 1990). Sweetness, bitterness, and umami receptors have much in common chemically (He et al., 2004). After a molecule binds to one of these receptors, it activates a G-protein that releases a second messenger within the cell, as in the metabotropic synapses discussed in Chapter 3 (Lindemann, 1996).

Bitter sensations have long been a puzzle. It is easy to describe the chemicals that produce a sour taste (acids), a salty taste (Na+ ions), or umami (glutamate). Sweets are more difficult, as they include naturally occurring sugars plus artificial chemicals such as saccharin and aspartame. But bitter substances include a long list of chemicals that apparently have nothing in common, except that they are to varying degrees toxic. What receptor could identify such a large and diverse set of chemicals? The answer is that we have not one bitter receptor but a whole family of 40 or more (Adler et al., 2000; Matsunami, Montmayeur, & Buck, 2000). Most taste cells sensitive to bitterness contain just a small number of the possible bitter receptors, not all 40 (Caicedo & Roper, 2001). One consequence of having so many bitter receptors is that we can detect a great variety of dangerous chemicals. The other is that because each type of bitter receptor is present in small numbers, we can’t detect very low concentrations of bitter substances, the way we can with sour or salty substances, for example.
Researchers genetically engineered some mice with salty or sweet receptors on the cells that ordinarily have bitter receptors. Those mice avoided anything that tasted salty or sweet (Mueller et al., 2005). That is, the cells still send the message “bitter,” even though a different kind of chemical is stimulating them.
STOP & CHECK
2. Suppose you find a new, unusual-tasting food. How could you determine whether we have a special receptor for that food or whether we taste it with a combination of the other known taste receptors?
3. Although the tongue has receptors for bitter tastes, researchers have not found neurons in the brain itself that respond more strongly to bitter than to other tastes. Explain, then, how it is possible for the brain to detect bitter tastes.
4. If someone injected into your tongue some chemical that blocks the release of second messengers, how would it affect your taste experiences?
Check your answers on page 227.
Taste Coding in the Brain

Although you may assume that the five kinds of receptors imply five labeled lines to the brain, research suggests a more complicated system (Hettinger & Frank, 1992). The receptors converge their input onto the next cells in the taste system, each of which responds best to a particular taste, but somewhat to other tastes also. The brain can determine what the tongue is tasting only by comparing the responses of several kinds of taste neurons. In other words, taste depends on a pattern of responses across fibers (R. P. Erickson, DiLorenzo, & Woodbury, 1994).

Information from the receptors in the anterior two-thirds of the tongue is carried to the brain along the chorda tympani, a branch of the seventh cranial nerve (the facial nerve). Taste information from the posterior tongue and the throat is carried along branches of the ninth and tenth cranial nerves. What do you suppose would happen if someone anesthetized your chorda tympani? You would no longer taste anything in the anterior part of your tongue, but you probably would not notice because you would still taste with the posterior part. However, the probability is about 40% that you would experience taste “phantoms,” analogous to the phantom limb experience discussed in Chapter 5 (Yanagisawa, Bartoshuk, Catalanotto, Karrer, & Kveton, 1998). That is, you might experience tastes even when nothing was on your tongue. Evidently, the inputs from the anterior and posterior parts
of your tongue interact in complex ways. Silencing the anterior part releases the posterior part from inhibition and causes it to report tastes more actively than before. The taste nerves project to the nucleus of the tractus solitarius (NTS), a structure in the medulla (Travers, Pfaffmann, & Norgren, 1986). From the NTS, information branches out, reaching the pons, the lateral hypothalamus, the amygdala, the ventral-posterior thalamus, and two areas of the cerebral cortex (Pritchard, Hamilton, Morse, & Norgren, 1986; Yamamoto, 1984). One of these, the somatosensory cortex, responds to the touch aspects of tongue stimulation. The other area, known as the insula, is the primary taste cortex. Curiously, each hemisphere of the cortex receives input mostly from the ipsilateral side of the tongue (Aglioti, Tassinari, Corballis, & Berlucchi, 2000; Pritchard, Macaluso, & Eslinger, 1999). In contrast, each hemisphere receives mostly contralateral input for vision, hearing, and touch. A few of the major connections are illustrated in Figure 7.19.
Individual Differences in Taste

You may have had a biology instructor who asked you to taste phenylthiocarbamide (PTC) and then take samples home for your relatives to try. Some people experience it as bitter, and others hardly taste it at all; most
of the variance among people is controlled by a dominant gene and therefore provides an interesting example for a genetics lab (Kim et al., 2003). (Did your instructor happen to mention that PTC is mildly toxic?) For decades, this difference was widely studied as an interesting, easily measurable genetic difference among people, and we have extensive data about the percentage of nontasters in different populations (Guo & Reed, 2001). Figure 7.20 shows some representative results. Although we might expect the prevalence of nontasters to influence cuisine, the figure shows no obvious relationship. For example, nontasters are common in India, where the food is spicy, and in Britain, where it is relatively bland. In the 1990s, researchers discovered that people who are insensitive to PTC are also less sensitive than most people to other tastes as well, including other bitter substances, sours, salts, and so forth. Furthermore, people at the opposite extreme, known as supertasters, have the highest sensitivity to all tastes, as well as to mouth sensations in general (Drewnowski, Henderson, Shore, & Barratt-Fornell, 1998). Supertasters tend to avoid strong-tasting or spicy foods. However, culture and familiarity exert larger effects on people’s food preferences. Consequently, even after you think about how much you do or do not like strongly flavored foods, you cannot confidently identify yourself as a supertaster, taster, or nontaster.
Figure 7.19 Major routes of impulses related to the sense of taste in the human brain The thalamus and cerebral cortex receive impulses from both the left and the right sides of the tongue. (Labels: somatosensory cortex; ventral posteromedial thalamus; insula (primary taste cortex); corpus callosum; orbital prefrontal cortex; hypothalamus; amygdala; nucleus of tractus solitarius; from taste buds on tongue.) (Source: Based on Rolls, 1995)
Figure 7.20 Percentage of nontasters in several human populations Most of the percentages are based on large samples, including more than 31,000 in Japan and 35,000 in India. (Source: Based on Guo & Reed, 2001)
The variations in taste sensitivity relate to the number of fungiform papillae near the tip of the tongue. Supertasters have the most; nontasters have the fewest. That anatomical difference depends mostly on genetics but also on hormones and other influences. Women’s taste sensitivity rises and falls with their monthly hormone cycles and reaches its maximum during early pregnancy, when estradiol levels are very high (Prutkin et al., 2000). That tendency is probably adaptive: During pregnancy, a woman needs to be more careful than usual to avoid harmful foods.
Table 7.2 Are You a Supertaster, Taster, or Nontaster?
Equipment: 1⁄4-inch hole punch, small piece of wax paper, cotton swab, blue food coloring, flashlight, and magnifying glass.
Punch a 1⁄4-inch hole with a standard hole punch in a piece of wax paper. Dip the cotton swab in blue food coloring. Place the wax paper on the tip of your tongue, just right of the center. Rub the cotton swab over the hole in the wax paper to dye a small part of your tongue. With the flashlight and magnifying glass, have someone count the number of pink, unstained circles in the blue area. They are your fungiform papillae. Compare your results to the following averages:
Supertasters: 25 papillae
Tasters: 17 papillae
Nontasters: 10 papillae
If you would like to classify yourself as a taster, nontaster, or supertaster, follow the instructions in Table 7.2.
STOP & CHECK
5. What are several reasons why some people like spicier foods than others do? Check your answer on page 227.
Olfaction

Olfaction, the sense of smell, is the detection and recognition of chemicals that contact the membranes inside the nose. During an ordinary day, most of us pay little attention to scents, and the deodorant industry is dedicated to decreasing our olfactory experience. Nevertheless, we enjoy many smells, especially in foods. We rely on olfaction for discriminating good from bad wine or edible from rotting meat. Many odors, such as that of a campfire or fresh popcorn, tend to evoke highly emotional memories (Herz, 2004).

Decades ago, researchers described olfaction as relatively slow to respond, but later studies found that mice can respond to an odor within 200 ms of its pre-
sentation, comparable to reaction times for other senses (Abraham et al., 2004). However, olfaction is subject to more rapid adaptation than sight or hearing (Kurahashi, Lowe, & Gold, 1994). To demonstrate adaptation, take a bottle of an odorous chemical, such as lemon extract, and determine how far away you can hold the bottle and still smell it. Then hold it up to your nose and inhale deeply and repeatedly for a minute. Now test again: From how far away can you smell it?
Behavioral Methods of Identifying Olfactory Receptors

The neurons responsible for smell are the olfactory cells, which line the olfactory epithelium in the rear of the nasal
air passages (Figure 7.21). In mammals, each olfactory cell has cilia (threadlike dendrites) that extend from the cell body into the mucous surface of the nasal passage. Olfactory receptors are located on the cilia.

Figure 7.21 Olfactory receptors (a) Location of receptors in nasal cavity. (b) Closeup of olfactory cells. Note also the vomeronasal organ, to be discussed later. (Labels: olfactory bulb, olfactory nerve, olfactory nerve axons, olfactory receptor cell, olfactory epithelium, supporting cell, olfactory cilia (dendrites).)

How many kinds of olfactory receptors do we have? Researchers answered the analogous question for color vision more than a century ago, using only behavioral observations. They found that, by mixing various amounts of three colors of light—say, red, green, and blue—they could match any other color that people can see. Researchers therefore concluded that we have three, and probably only three, kinds of color receptors (which we now call cones).

We could imagine doing the same experiment for olfaction. Take a few odors—say, almond, lilac, and skunk spray—and test whether people can mix various
proportions of those odors to match all other odors. If three odors are not enough, add more until eventually we can mix them to match every other possible odor. Because we do not know what the “primary odors” are (if indeed there are such things), we might have to do a great deal of trial-and-error testing to find the best set of odors to use. So far as we know, however, either no one ever tried this study, or everyone who did gave up in discouragement. A second approach is to study people who have trouble smelling one type of chemical. A general lack of olfaction is known as anosmia; the inability to smell a single chemical is a specific anosmia. For example, about 2% to 3% of all people are insensitive to the smell of isobutyric acid, the smelly component of sweat (Amoore, 1977). (Few complain about this disability.) Because people can lose the ability to smell just this one chemical, we may assume that there is a receptor specific to isobutyric acid. We might then search for additional specific anosmias on the theory that each specific anosmia represents the loss of a different type of receptor.
One investigator identified at least five other specific anosmias—musky, fishy, urinous, spermous, and malty—and less convincing evidence suggested 26 other possible specific anosmias (Amoore, 1977). But the more specific anosmias one finds, the less sure one can be of having found them all.
Biochemical Identification of Receptor Types

Ultimately, the best way to determine the number of olfactory receptor types is to isolate the receptor molecules themselves. Linda Buck and Richard Axel (1991) identified a family of proteins in olfactory receptors, as shown in Figure 7.22.

Figure 7.22 One of the olfactory receptor proteins This protein resembles those of neurotransmitter receptors. Each protein traverses the membrane seven times; each responds to a chemical outside the cell and triggers activity of a G-protein inside the cell. The protein shown is one of a family; different olfactory receptors contain different proteins, each with a slightly different structure. Each of the little circles in this diagram represents one amino acid of the protein. The white circles represent amino acids that are the same in most of the olfactory receptor proteins; the purple circles represent amino acids that vary from one protein to another. (Source: Based on Buck & Axel, 1991)

Like metabotropic neurotransmitter receptors, each of these proteins traverses the cell membrane seven times and responds to a chemical outside the cell (here an odorant molecule instead of a neurotransmitter) by triggering changes in a G-protein inside the cell; the G-protein then provokes chemical activities that lead to an action potential. The best estimate is that humans have several hundred olfactory
receptor proteins, whereas rats and mice have about a thousand types (Zhang & Firestein, 2002). Correspondingly, rats can distinguish among odors that seem the same to humans (Rubin & Kaatz, 2001). Although each chemical excites several types of receptors, the most strongly excited receptor inhibits the activity of other ones in a process analogous to lateral inhibition (Oka, Omura, Kataoka, & Touhara, 2004). The net result is that a given chemical produces a major response in one or two kinds of receptors and weaker responses in a few others.
STOP & CHECK
6. If someone sees no need to shower after becoming sweaty, what is one (admittedly unlikely) explanation?
7. How do olfactory receptors resemble metabotropic neurotransmitter receptors?
Check your answers on page 227.
Implications for Coding

We have only three kinds of cones and probably five kinds of taste receptors, so it was a surprise to find hundreds of kinds of olfactory receptors. That diversity makes possible narrow specialization of functions. To illustrate: Because we have only three kinds of cones with which to see a great variety of colors, each cone must contribute to almost every color perception. In olfaction, we can afford to have receptors that respond to few stimuli. The response of one olfactory receptor might mean, “I smell a fatty acid with a straight chain of about three to five carbon atoms.” The response of another receptor might mean, “I smell either a fatty acid or an aldehyde with a straight chain of about five to seven carbon atoms” (Araneda, Kini, & Firestein, 2000; Imamura, Mataga, & Mori, 1992; Mori, Mataga, & Imamura, 1992). The combined activity of those two receptors would identify the chemical precisely.

The question may have occurred to you, “Why did evolution go to the bother of designing so many olfactory receptor types? After all, color vision gets by with just three types of cones.” The main reason is that light energy can be arranged along a single dimension, wavelength. Olfaction processes an enormous variety of airborne chemicals that are not arranged along a single continuum. To detect them all, we need a great variety of receptors. A secondary reason has to do with localization. In olfaction, space is no problem; we arrange our olfactory receptors over the entire surface of the nasal passages. In vision, however, the brain needs to determine precisely where on the retina a stimulus
originates. Hundreds of different kinds of wavelength receptors could not be compacted into each spot on the retina.
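The logic of this combinatorial code can be sketched in a few lines. In the example below, the receptor tuning ranges paraphrase the hypothetical descriptions above, and the chemical labels are placeholders rather than a real receptor database; the point is simply that intersecting the candidate sets of two broadly tuned receptors can single out one chemical.

```python
# Illustrative sketch of combinatorial coding in olfaction. The tuning sets
# paraphrase the hypothetical receptors described in the text; the chemical
# names are placeholders, not real data.
receptor_a = {"fatty acid, 3 carbons", "fatty acid, 4 carbons",
              "fatty acid, 5 carbons"}
receptor_b = {"fatty acid, 5 carbons", "fatty acid, 6 carbons",
              "fatty acid, 7 carbons", "aldehyde, 5 carbons",
              "aldehyde, 6 carbons", "aldehyde, 7 carbons"}


def identify(active_receptors):
    """Each active receptor narrows the candidate set; intersecting the sets
    from a few broadly tuned receptors can pinpoint a single chemical."""
    candidates = None
    for tuning in active_receptors:
        candidates = set(tuning) if candidates is None else candidates & tuning
    return candidates


print(identify([receptor_a]))               # several candidates remain
print(identify([receptor_a, receptor_b]))   # {'fatty acid, 5 carbons'}
```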
Messages to the Brain

When an olfactory receptor is stimulated, its axon carries an impulse to the olfactory bulb (see Figure 4.13, p. 91). Each odorous chemical excites only a limited part of the olfactory bulb. Evidently, olfaction is coded in terms of which area of the olfactory bulb is excited. Chemicals that smell similar excite neighboring areas, and chemicals that smell different excite more separated areas (Uchida, Takahashi, Tanifuji, & Mori, 2000). The olfactory bulb sends axons to the olfactory area of the cerebral cortex, where, again, neurons responding to a particular kind of smell cluster together. The mapping is consistent across individuals; that is, the cortical area responding to a given odor is the same from one individual to another (Zou, Horowitz, Montmayeur, Snapper, & Buck, 2001). From the cortex, information connects to other areas that control feeding and reproduction, two kinds of behavior that are very sensitive to odors.

Olfactory receptors are vulnerable to damage because they are exposed to everything in the air. Unlike your receptors for vision and hearing, which remain with you for a lifetime, an olfactory receptor has an average survival time of just over a month. At that point, a stem cell matures into a new olfactory cell in the same location as the first and expressing the same receptor protein (Nef, 1998). Its axon then has to find its way to the correct target in the olfactory bulb. Each olfactory neuron axon contains copies of its olfactory receptor protein, which it uses like an identification card to find its correct partner (Barnea et al., 2004; Strotmann, Levai, Fleischer, Schwarzenbacher, & Breer, 2004). However, if the entire olfactory surface is damaged at once by a blast of toxic fumes, so that the system has to replace all the receptors at the same time, many of them fail to make the correct connections, and olfactory experience does not recover fully to normal (Iwema, Fang, Kurtz, Youngentob, & Schwob, 2004).
Individual Differences

In olfaction, as with almost anything else, people differ. One strong correlate of that difference is gender. On the average, women detect odors more readily than men, and the brain responses to odors are stronger in women than in men. Those differences occur at all ages and in all cultures that have been tested (Doty, Applebaum, Zusho, & Settle, 1985; Yousem et al., 1999). Women also seem to pay more attention to smells. Surveys have found that women are much more likely
than men to care about the smell of a potential romantic partner (Herz & Inzlicht, 2002). In addition, if people repeatedly attend to some faint odor, young adult women gradually become more and more sensitive to it, until they can detect it in concentrations one ten-thousandth of what they could at the start (Dalton, Doolittle, & Breslin, 2002). Men, girls before puberty, and women after menopause do not show that effect, so it apparently depends on female hormones. We can only speculate on why we evolved a connection between female hormones and odor sensitization.

We don’t yet know much about genetic differences in olfactory sensitivity, but consider this one surprising study: Through the wonders of bioengineering, researchers can examine the effects of deleting any particular gene. One gene controls a channel through which most potassium passes in the membranes of certain neurons of the olfactory bulb. Potassium, you will recall from Chapter 2, leaves a neuron after an action potential, thereby restoring the resting potential. With no particular hypothesis in mind, researchers tested what would happen if they deleted that potassium channel in mice. Ordinarily, deleting any gene leads to deficits, and deleting an important gene can leave an animal incapable of survival. So imagine the researchers’ amazement when they found that the mice lacking this potassium channel had a much greater than normal sense of smell. In fact, you could say they had developed mice with a superpower: These mice could detect faint smells, less than one-thousandth the minimum that other mice could detect. Their olfactory bulb had an unusual anatomy, with more numerous but smaller clusters of neurons (Fadool et al., 2004). Exactly how the deletion of a gene led to this superpower remains uncertain, and presumably, the mice are deficient in some other way (or evolution would have deleted this gene long ago). Still, it is a noteworthy example of how a single gene can make a huge difference.

For more information about olfaction, check this website: http://www.leffingwell.com/olfaction.htm
STOP & CHECK
8. If two olfactory receptors are located near each other in the nose, in what way are they likely to be similar?
9. What is the mean life span of an olfactory receptor?
10. What good does it do for an olfactory axon to have copies of the cell’s olfactory receptor protein?
Check your answers on page 227.
Vomeronasal Sensation and Pheromones

An additional sense is important for most mammals, although less so for humans. The vomeronasal organ (VNO) is a set of receptors located near, but separate from, the olfactory receptors. The VNO has relatively few types of receptors—12 families of them in mice (Rodriguez, Del Punta, Rothman, Ishii, & Mombaerts, 2002). Unlike the olfactory system, which can identify an enormous number of chemicals, the VNO’s receptors are specialized to respond only to pheromones, which are chemicals released by an animal that affect the behavior of other members of the same species, especially sexually. For example, if you have ever had a female dog that wasn’t neutered, whenever she was in her fertile (estrus) period, even though you kept her indoors, your yard attracted every male dog in the neighborhood that was free to roam.

Each VNO receptor responds to just one pheromone, such as the smell of a male or a female mouse. It responds to the preferred chemical in concentrations as low as one part in a hundred billion, but it hardly responds at all to other chemicals (Leinders-Zufall et al., 2000). Furthermore, the receptor shows no adaptation to a repeated stimulus. Olfaction, by contrast, shows massive adaptation. Have you ever been in a room that seems smelly at first but not a few minutes later? Your olfactory receptors respond to a new odor but not to a continuing one. VNO receptors, however, continue responding just as strongly even after prolonged stimulation (Holy, Dulac, & Meister, 2000).

What do you suppose would happen if a male mouse’s vomeronasal receptors were inactive because of a genetic abnormality or surgical removal of the organ? Results of various studies have differed. In one study, the males lost practically all sexual behaviors (Del Punta et al., 2002). In another, they attempted indiscriminately to mate with both males and females (Stowers, Holy, Meister, Dulac, & Koentges, 2002). In still another, they mated normally, although they could not find a female by her odor (Pankevich, Baum, & Cherry, 2004).

Although the VNO is reasonably prominent in most mammals and easy to find in a human fetus, in adult humans it is tiny (Monti-Bloch, Jennings-White, Dolberg, & Berliner, 1994) and has no receptors (Keverne, 1999). It seems to be vestigial—that is, a leftover from our evolutionary past. Humans nevertheless do respond to pheromones. Researchers have found at least one pheromone receptor in humans. Its structure resembles that of other mammals’ pheromone receptors, but for us, it is lo-
cated in the olfactory mucosa along with normal olfactory receptors, not in the VNO (Rodriguez, Greer, Mok, & Mombaerts, 2000). The behavioral effects of pheromones apparently occur unconsciously. That is, people respond behaviorally to certain chemicals in human skin even though they describe them as “odorless.” Exposure to these chemicals—especially chemicals from the opposite sex—alters our skin temperature, sweating, and other autonomic responses (Monti-Bloch, Jennings-White, & Berliner, 1998) and increases activity in the hypothalamus, an area important for sexual behavior (Savic, Berglund, Gulyas, & Roland, 2001). Several studies indicate a role for pheromones in human sexual behaviors. The best-documented effect relates to the timing of women’s menstrual cycles. Women who spend much time together find that their menstrual cycles become more synchronized (McClintock, 1971; Weller, Weller, Koresh-Kamin, & BenShoshan, 1999; Weller, Weller, & Roizman, 1999), unless one of them is taking birth-control pills. To test whether pheromones are responsible for the synchronization, researchers in two studies exposed young volunteer women to the underarm secretions of a donor woman. In both studies, most of the women exposed to the secretions became synchronized to the donor’s menstrual cycle (Preti, Cutler, Garcia, Huggins, & Lawley, 1986; Russell, Switz, & Thompson, 1980). Another study dealt with the phenomenon that a woman who is in an intimate relationship with a man tends to have more regular menstrual periods than other women do. According to one hypothesis, the man’s pheromones promote this regularity. In the study, young women who were not sexually active were exposed daily to a man’s underarm secretions. (Getting women to volunteer for this study wasn’t easy.) Gradually, over 14 weeks, most of these women’s menstrual periods became more regular than before, with a mean of 29–30 days each (Cutler et al., 1986). In short, human body secretions apparently do act as pheromones, although the effects are more subtle than in nonhuman mammals.
STOP & CHECK
11. What is one major difference between olfactory receptors and those of the vomeronasal organ? Check your answer on page 227.
Synesthesia

Finally, let’s briefly consider something that is not one sense or another but rather a combination of them: Synesthesia is the experience of one sense in response to stimulation of a different sense. In the words of one person with synesthesia, “To me, the taste of beef is dark blue. The smell of almonds is pale orange. And when tenor saxophones play, the music looks like a floating, suspended coiling snake-ball of lit-up purple neon tubes” (Day, 2005, p. 11). No two people have quite the same synesthetic experience. It is estimated that about 1 person in 500 is synesthetic (Day, 2005), but that estimate probably overlooks people with a milder form of the condition, as well as many who hide their condition because other people consider it a sign of mental illness. (It is, in fact, neither particularly helpful nor harmful.)

Various studies attest to the reality of synesthesia. For example, try to find the 2 among the 5s in each of the following displays:

555555555555
555555555555
555555525555
555555555555

555555555555
555555555555
555555555555
555555555525

555555555555
552555555555
555555555555
555555555555
One person with synesthesia was able to find the 2 consistently faster than other people, explaining that he just scanned each display looking for a patch of orange! However, he was slower than other people to find an 8 among 6s because both 8 and 6 look bluish to him (Blake, Palmeri, Marois, & Kim, 2005). Another person had trouble finding an A among 4s because both look red, but could easily find an A among 0s because 0 looks black (Laeng, Svartdal, & Oelmann, 2004). Oddly, however, someone who sees the letter P as yellow had no trouble finding it when it was printed (in black ink) on a yellow page. In some way, he sees the letter both in its real color (black) and its synesthetic color (Blake et al., 2005).

In another study, people were asked to identify as quickly as possible the shape formed by the less common character in a display like this:

TTTTTTTT
TTTTTTTT
TTCCCTTT
TTCCCTTT
TTTTTTTT
TTTTTTTT
Here, the correct answer is “rectangle,” the shape formed by the Cs. People who perceive C and T as different colors find the rectangle faster than the average for people without synesthesia. However, they do not
find it as fast as some other person would find the rectangle of Cs in this display, where the Cs really are in color:

TTTTTTTT
TTTTTTTT
TTCCCTTT
TTCCCTTT
TTTTTTTT
TTTTTTTT
In short, people with synesthesia see letters as if in color but not as bright as real colors (Hubbard, Arman, Ramachandran, & Boynton, 2005). Researchers used fMRI to map brain responses in a group of 12 women who all reported seeing colors when they heard people speak. In each case, listening to speech elicited activity in both the auditory cortex and the area of the visual cortex most responsive to color (Nunn et al., 2002). Results like these suggest that for people with synesthesia, some of the axons from one cortical area have branches into another cortical area. Although that seems a plausible hypothesis, surely it can’t be the whole explanation. Obviously, no one is born with a connection between P and yellow or between 4 and red; we have to learn to recognize numbers and letters. Exactly how synesthesia develops remains for further research.
STOP & CHECK
12. If someone reports seeing a particular letter in color, in what way is it different from a real color? Check your answer on page 227.
Module 7.3 In Closing: Different Senses as Different Ways of Knowing the World

Ask the average person to describe the current environment, and you will probably get a description of what he or she sees and maybe hears. If nonhumans could talk, most species would start by describing what they smell. A human, a dog, and a snail may be in the same place, but the environments they perceive are very different.

We sometimes underestimate the importance of taste and smell. The rare people who lose their sense of taste say they no longer enjoy eating and in fact find it difficult to swallow (Cowart, 2005). A loss of smell can be a problem too. Taste and smell can’t compete
with vision and hearing for telling us about what is happening in the distance, but they are essential for telling us about what is right next to us or about to enter our bodies.
Summary
1. Sensory information can be coded in terms of either a labeled-line system or an across-fiber pattern system. (p. 215)
2. Taste receptors are modified skin cells inside taste buds in papillae on the tongue. (p. 216)
3. According to current evidence, we have five kinds of taste receptors, sensitive to sweet, sour, salty, bitter, and umami (glutamate) tastes. (p. 217)
4. Taste is coded by the relative activity of different kinds of cells but also by the rhythm of responses within a given cell. (p. 217)
5. Salty receptors respond simply to sodium ions crossing the membrane. Sour receptors respond to a stimulus by blocking potassium channels. Sweet, bitter, and umami receptors act by a second messenger within the cell, similar to the way a metabotropic neurotransmitter receptor operates. (p. 218)
6. Mammals have about 40 kinds of bitter receptors, enabling them to detect a great variety of harmful substances that are chemically unrelated to one another. However, a consequence of so many bitter receptors is that we are not highly sensitive to low concentrations of any one bitter chemical. (p. 218)
7. Taste information from the anterior two-thirds of the tongue is carried by a different nerve from the posterior tongue and throat. The two nerves interact in complex ways, such that suppressing activity in the anterior tongue increases responses from the posterior tongue. (p. 218)
8. Some people, known as supertasters, have more fungiform papillae than other people do and are more sensitive to a great variety of tastes. They tend to avoid strong-tasting foods. (p. 219)
9. Olfactory receptors are proteins, each of them highly responsive to a few related chemicals and unresponsive to others. Vertebrates have hundreds of olfactory receptors, each contributing to the detection of a few related odors. (p. 222)
10. Olfactory neurons responsive to a particular odor all send their axons to the same part of the olfactory bulb and from there to the same clusters of cells in the olfactory cortex. (p. 223)
11. Olfactory neurons survive only a month or so. When the brain generates new cells to replace them, the new ones become sensitive to the same chemicals as the ones they replace, and they send their axons to the same targets. (p. 223)
12. In most mammals, each vomeronasal organ (VNO) receptor is sensitive to only one chemical, a pheromone. A pheromone is a social signal, usually for mating purposes. Unlike olfactory receptors, VNO receptors show little or no adaptation to a prolonged stimulus. (p. 224)
13. Humans also respond somewhat to pheromones, although our receptors are in the olfactory mucosa, not the VNO. (p. 224)
14. A small percentage of people experience synesthesia, a sensation in one modality after stimulation in another one. For example, someone might see something while listening to music. The explanation is not known. (p. 225)
Answers to STOP & CHECK Questions
1. The shift key plus another is an example of an across-fiber pattern code. (The meaning of one key depends on what else is pressed.) A fire alarm and a light switch are labeled lines; they convey only one message. (p. 216)
2. You could test for cross-adaptation. If the new taste cross-adapts with others, then it uses the same receptors. If it does not cross-adapt, it may have a receptor of its own. Another possibility would be to find some procedure that blocks this taste without blocking other tastes. (p. 218)
3. Two possibilities: First, bitter tastes produce a distinctive temporal pattern of responses in cells sensitive to taste. Second, even if no one cell responds strongly to bitter tastes, the pattern of responses across many cells may be distinctive. Analogously, in vision, no cone responds primarily to purple, but we nevertheless recognize purple by its pattern of activity across a population of cones. (p. 218)
4. The chemical would block your experiences of sweet, bitter, and umami but should not prevent you from tasting salty and sour. (p. 218)
5. Both genes and hormones influence the strength of tastes, and people who taste foods most strongly tend to avoid spicy foods. In addition, and more important, people prefer familiar foods and foods accepted by their culture. (p. 220)
6. The person may have a specific anosmia and therefore may regard sweat as odorless. (p. 223)
7. Like metabotropic neurotransmitter receptors, an olfactory receptor acts through a G-protein that triggers further events within the cell. (p. 223)
8. Olfactory receptors located near each other are probably sensitive to structurally similar chemicals. (p. 224)
9. Most olfactory receptors survive a little more than a month before dying and being replaced. (p. 224)
10. The receptor molecule acts as a kind of identification to help the axon find its correct target cell in the brain. (p. 224)
11. Olfactory receptors adapt quickly to a continuous odor, whereas receptors of the vomeronasal organ continue to respond. Also, vomeronasal sensations are apparently capable of influencing behavior even without being consciously perceived. (p. 225)
12. Someone who perceives a letter as yellow (when it is actually in black ink) can nevertheless see it on a yellow page. (p. 226)

Thought Questions
1. In the English language, the letter t has no meaning out of context. Its meaning depends on its relationship to other letters. Indeed, even a word, such as to, has little meaning except in its connection to other words. So is language a labeled-line system or an across-fiber pattern system?
2. Suppose a chemist synthesizes a new chemical that turns out to have an odor. Presumably, we do not have a specialized receptor for that chemical. Explain how our receptors detect it.
Chapter Ending
Key Terms and Activities

Terms
across-fiber pattern principle (p. 215)
adaptation (p. 217)
amplitude (p. 196)
anosmia (p. 222)
capsaicin (p. 209)
cochlea (p. 198)
conductive deafness (middle-ear deafness) (p. 201)
cross-adaptation (p. 217)
dermatome (p. 208)
endorphin (p. 210)
frequency (p. 196)
frequency theory (p. 198)
gate theory (p. 211)
hair cell (p. 198)
labeled-line principle (p. 215)
loudness (p. 196)
nerve deafness (inner-ear deafness) (p. 201)
nucleus of the tractus solitarius (NTS) (p. 219)
olfaction (p. 220)
olfactory cell (p. 221)
opioid mechanisms (p. 210)
oval window (p. 197)
Pacinian corpuscle (p. 206)
papilla (pl.: papillae) (p. 216)
periaqueductal gray area (p. 210)
pheromone (p. 224)
pinna (p. 197)
pitch (p. 196)
place theory (p. 198)
placebo (p. 212)
primary auditory cortex (area A1) (p. 199)
semicircular canal (p. 205)
somatosensory system (p. 206)
specific anosmia (p. 222)
substance P (p. 209)
supertasters (p. 219)
synesthesia (p. 225)
taste bud (p. 216)
tinnitus (p. 202)
tympanic membrane (p. 197)
volley principle (p. 199)
vomeronasal organ (VNO) (p. 224)

Suggestions for Further Reading
Beauchamp, G. K., & Bartoshuk, L. (1997). Tasting and smelling. San Diego, CA: Academic Press. Excellent book covering receptors, psychophysics, and disorders of taste and smell.
Pert, C. B. (1997). Molecules of emotion. New York: Simon & Schuster. Autobiographical statement by the woman who, as a graduate student, first demonstrated the opiate receptors.
Robertson, L. C., & Sagiv, N. (2005). Synesthesia: Perspectives from cognitive neuroscience. Oxford, England: Oxford University Press. A review of research on this fascinating phenomenon.

Websites to Explore
You can go to the Biological Psychology Study Center and click these links. While there, you can also check for suggested articles available on InfoTrac College Edition. The Biological Psychology Internet address is: psychology.wadsworth.com/book/kalatbiopsych9e/
Mark Rejhon’s Frequently Asked Questions About Hearing Impairment
http://www.marky.com/hearing/
Pain Net, with many links
http://www.painnet.com/
American Pain Society
http://www.ampainsoc.org/
Nontaster, Taster, or Supertaster?
http://www.neosoft.com/~bmiller/taste.htm
Sense of Smell Institute
http://www.senseofsmell.org

Exploring Biological Psychology CD
Hearing (learning module)
Hearing Puzzle (puzzle)
Somesthetic Experiment (drag & drop)
Chapter Quiz (multiple-choice questions)
Critical Thinking (essay questions)
Test what happens when your hand movements produce the opposite of their visual effects.
This item reviews the structure of the ear.

http://www.thomsonedu.com
Go to this site for the link to ThomsonNOW, your one-stop study shop. Take a Pre-Test for this chapter, and ThomsonNOW will generate a Personalized Study Plan based on your test results. The Study Plan will identify the topics you need to review and direct you to online resources to help you master these topics. You can then take a Post-Test to help you determine the concepts you have mastered and what you still need work on.
Image not available due to copyright restrictions
8
Movement
Chapter Outline

Module 8.1 The Control of Movement
Muscles and Their Movements
Units of Movement
In Closing: Categories of Movement
Summary
Answers to Stop & Check Questions
Thought Question

Module 8.2 Brain Mechanisms of Movement
The Cerebral Cortex
The Cerebellum
The Basal Ganglia
Brain Areas and Motor Learning
In Closing: Movement Control and Cognition
Summary
Answers to Stop & Check Questions
Thought Question

Module 8.3 Disorders of Movement
Parkinson’s Disease
Huntington’s Disease
In Closing: Heredity and Environment in Movement Disorders
Summary
Answers to Stop & Check Questions
Thought Questions

Terms
Suggestions for Further Reading
Websites to Explore
Exploring Biological Psychology CD
ThomsonNOW

Main Ideas
1. Movement depends on overall plans, not just connections between a stimulus and a muscle contraction.
2. Movements vary in sensitivity to feedback, skill, and variability in the face of obstacles.
3. Damage to different brain locations produces different kinds of movement impairment.
4. Brain damage that impairs movement also impairs cognitive processes. That is, control of movement is inseparably linked with cognition.
Before we get started, please try this: Get out a pencil and a sheet of paper, and put the pencil in your nonpreferred hand. For example, if you are right-handed, put it in your left hand. Now, with that hand, draw a face in profile—that is, facing one direction or the other, but not straight ahead. Please do this now before reading further.

If you tried the demonstration, you probably notice that your drawing is much more childlike than usual. It is as if some part of your brain stored the way you used to draw as a young child. Now, if you are right-handed and therefore drew the face with your left hand, why did you draw it facing to the right? At least I assume you did because more than two-thirds of right-handers drawing with their left hand draw the profile facing right. Young children, age 5 or so, when drawing with the right hand, almost always draw people and animals facing left, but when using the left hand, they almost always draw them facing right. But why? The short answer is we don’t know. We have much to learn about the control of movement and how it relates to perception, motivation, and other functions.
Opposite: Ultimately, what brain activity accomplishes is the control of movement—a far more complex process than it might seem. Source: Jim Rider/Zeis Images
Module 8.1
The Control of Movement
Why do we have brains at all? Plants survive just fine without them. So do sponges, which are animals, even if they don’t act like them. But plants don’t move, and neither do sponges. A sea squirt (a marine invertebrate) swims and has a brain during its infant stage, but when it transforms into an adult, it attaches to a surface, becomes a stationary filter-feeder, and digests its own brain, as if to say, “Now that I’ve stopped traveling, I won’t need this brain thing anymore.” Ultimately, the purpose of a brain is to control behaviors, and all behaviors are movements.

“But wait,” you might reply. “We need brains for other things too, don’t we? Like seeing, hearing, finding food, talking, understanding speech . . .” Well, what would be the value of seeing and hearing if you couldn’t do anything? Finding food requires movement, and so does talking. Understanding isn’t movement, but again, it wouldn’t do you much good unless you could do something with it. A great brain without muscles would be like a computer without a monitor, printer, or other output. No matter how powerful the internal processing, it would be useless.

Nevertheless, most psychology texts ignore movement, and journals have few articles about it (Rosenbaum, 2005). For example, one of the high points of
artificial intelligence so far was the computer that won a chess tournament against the world’s greatest player. During that tournament, the computer didn’t actually do anything; a human took the computer’s instructions and moved the chess pieces. It is as if we consider “intelligence” to mean great thoughts, and actions to be something uninteresting tacked on at the end.

Nevertheless, the rapid movements of a skilled typist, musician, or athlete require complex coordination and planning. Understanding movement is a significant challenge for psychologists as well as biologists.
Adult sea squirts attach to the surface, never move again, and digest their own brains. (Photo: Gary Bell/Getty Images)

Muscles and Their Movements
All animal movement depends on muscle contractions. Vertebrate muscles fall into three categories (Figure 8.1): smooth muscles, which control the digestive system and other organs; skeletal, or striated, muscles, which control movement of the body in relation to the environment; and cardiac muscles (the heart muscles), which have properties intermediate between those of smooth and skeletal muscles. Each muscle is composed of many individual fibers, as Figure 8.2 illustrates. A given axon may innervate more than one muscle fiber. For example, the eye muscles have a ratio of about one axon per three muscle fibers, and the biceps muscles of the arm have a ratio of one axon to more than a hundred fibers (Evarts, 1979). This difference allows the eye to move more precisely than the biceps. A neuromuscular junction is a synapse where a motor neuron axon meets a muscle fiber. In skeletal muscles, every axon releases acetylcholine at the neuromuscular junction, and the acetylcholine always excites the muscle to contract. Each muscle can make just one movement—contraction—in just one direction. In the absence of excitation, it relaxes, but it never moves actively in the opposite direction. Moving a leg or arm in two directions requires opposing sets of muscles, called antagonistic muscles. An arm, for example, has a flexor muscle that flexes or raises it and an extensor muscle that extends or straightens it (Figure 8.3).
Figure 8.1 The three main types of vertebrate muscles (a) Smooth muscle, found in the intestines and other organs, consists of long, thin cells. (b) Skeletal, or striated, muscle consists of long cylindrical fibers with stripes. (c) Cardiac muscle, found in the heart, consists of fibers that fuse together at various points. Because of these fusions, cardiac muscles contract together, not independently. (Source: Illustrations after Starr & Taggart, 1989)
Figure 8.2 An axon branching to innervate separate muscle fibers within a muscle Movements can be much more precise where each axon innervates only a few fibers, as with eye muscles, than where it innervates many fibers, as with biceps muscles.
Figure 8.3 A pair of antagonistic muscles The biceps of the arm is a flexor; the triceps is an extensor. (Source: Starr & Taggart, 1989)
Walking, clapping hands, and other coordinated sequences require a regular alternation between contraction of one set of muscles and contraction of another.

Any deficit of acetylcholine or its receptors in the muscles can greatly impair movement. Myasthenia gravis (MY-us-THEE-nee-uh GRAHV-iss) is an autoimmune disease, in that the immune system forms antibodies that attack the individual’s own body. In myasthenia gravis, the immune system attacks the acetylcholine receptors at neuromuscular junctions (Shah & Lisak, 1993), causing progressive weakness and rapid fatigue of the skeletal muscles. When a motor neuron fires several times in quick succession, its later action potentials release less acetylcholine than the earlier ones. For a healthy person, this slight decline in acetylcholine poses no problem. However, because people with myasthenia gravis have lost many of their receptors, even a slight decline in acetylcholine input produces clear deficits (Drachman, 1978).
Fast and Slow Muscles

Imagine that you are a small fish. Your only defense against bigger fish, diving birds, and other predators is your ability to swim away (Figure 8.4). A fish has the same temperature as the water around it, and muscle contractions, being chemical processes, slow down in the cold. So when the water gets cold, presumably you will move slowly, right? Strangely, you will not. You will have to use more muscles than before, but you will swim at about the same speed (Rome, Loughna, & Goldspink, 1984). A fish has three kinds of muscles: red, pink, and white. Red muscles produce the slowest movements, but they are not vulnerable to fatigue. White muscles produce the fastest movements, but they fatigue rapidly. Pink muscles are intermediate in both speed and susceptibility to fatigue. At high temperatures, a fish relies mostly on its red and pink muscles. At colder temperatures, the fish relies more and more on its white muscles. By recruiting enough white muscles, the fish can swim rapidly even in cold water, at the expense of fatiguing faster.

All right, you can stop imagining that you are a fish. Human and other mammalian muscles have various kinds of muscle fibers mixed together, not in separate bundles as in fish. Our muscle types range from fast-twitch fibers that produce fast contractions but fatigue rapidly to slow-twitch fibers that produce less vigorous contractions without fatiguing (Hennig & Lømo, 1985). We rely on our slow-twitch and intermediate fibers for nonstrenuous activities. For example, you could talk for hours without fatiguing your lip muscles. You could probably walk for hours too. But if you run up a steep hill at full speed, you will have to switch to fast-twitch fibers, which will fatigue rapidly.
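The claim that a cold fish holds its speed by recruiting more (and more fatigable) muscle can be restated as simple arithmetic. The sketch below is a toy model, not anything from the cited study: the temperatures, the per-fiber rates, and the assumption that speed scales with recruited fiber units times per-fiber rate are all invented for illustration.

```python
# Toy model: per-fiber contraction rate falls in cold water, so holding the
# same swimming speed requires recruiting more fiber units (eventually the
# fast-fatiguing white fibers). All numbers are invented.

def fiber_units_needed(target_speed, per_fiber_rate):
    """Fiber units required if speed = recruited units x per-fiber rate."""
    return target_speed / per_fiber_rate

TARGET_SPEED = 10.0  # arbitrary units

for temp_c, rate in [(25, 1.0), (15, 0.7), (5, 0.4)]:
    units = fiber_units_needed(TARGET_SPEED, rate)
    print(f"{temp_c} degrees C: about {units:.0f} fiber units to hold the same speed")
```

Same speed, more recruited muscle, faster fatigue: that is the trade-off the paragraph above describes.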
Figure 8.4 Temperature regulation and movement Fish are “cold blooded,” but many of their predators (e.g., this pelican) are not. At cold temperatures, a fish must maintain its normal swimming speed, even though every muscle in its body contracts more slowly than usual. To do so, a fish calls upon white muscles that it otherwise uses only for brief bursts of speed.
Slow-twitch fibers do not fatigue because they are aerobic—they use air (specifically oxygen) during their movements. You can think of them as “pay as you go.” Vigorous use of fast-twitch fibers results in fatigue because the process is anaerobic—using reactions that do not require oxygen at the time, although oxygen is eventually necessary for recovery. Using them builds up an “oxygen debt.” Prolonged exercise can start with aerobic activity and shift to anaerobic. For example, imagine yourself bicycling at a moderately challenging pace for hours. Your aerobic muscle activity uses glucose, and as the glucose supplies begin to dwindle, they activate a gene that inhibits further glucose use. The probable function is to save the dwindling glucose supplies for the brain to use (Booth & Neufer, 2005). The muscles then shift to anaerobic use of fatty acids as fuel. You will continue bicycling for a while, but the muscles will gradually fatigue. People have varying percentages of fast-twitch and slow-twitch fibers. For example, the Swedish ultramarathon runner Bertil Järlaker built up so many slow-
twitch fibers in his legs that he once ran 3,520 km (2,188 mi) in 50 days (an average of 1.7 marathons per day) with only minimal signs of pain or fatigue (Sjöström, Friden, & Ekblom, 1987). Conversely, competitive sprinters have a higher percentage of fast-twitch fibers. Individual differences depend on both genetics and training. In one study, investigators studied male sprinters before and after a 3-month period of intensive training. After the training, they found more fast-twitch leg muscle fibers and fewer slow-twitch fibers (Andersen, Klitgaard, & Saltin, 1994). Part of the change relates to alterations of the myosin molecules within muscles (Canepari et al., 2005).
STOP & CHECK

1. Why can the eye muscles move with greater precision than the biceps muscles?
2. In what way are fish movements impaired in cold water?
3. Duck breast muscles are red (“dark meat”), whereas chicken breast muscles are white. Which species probably can fly for a longer time before fatiguing?
4. Why is an ultramarathoner like Bertil Järlaker probably mediocre or poor at short-distance races?
Check your answers on page 239.
Muscle Control by Proprioceptors

You are walking along on a bumpy road. What happens if the messages from your spinal cord to your leg muscles are not exactly correct? You might set your foot down a little too hard or not quite hard enough. Nevertheless, you adjust your posture almost immediately and maintain your balance without even thinking about it. How do you do that? A baby is lying on its back. You playfully tug its foot and then let go. At once, the leg bounces back to its original position. How and why? In both cases, the mechanism is under the control of proprioceptors (Figure 8.5). A proprioceptor is a receptor that detects the position or movement of a part of the body—in these cases, a muscle. Muscle proprioceptors detect the stretch and tension of a muscle and send messages that enable the spinal cord to adjust its signals. When a muscle is stretched, the spinal cord sends a reflexive signal to contract it. This stretch reflex is caused by a stretch; it does not produce one. One kind of proprioceptor is the muscle spindle, a receptor parallel to the muscle that responds to a stretch (Merton, 1972; Miles & Evarts, 1979). Whenever the muscle spindle is stretched, its sensory nerve sends a message to a motor neuron in the spinal cord, which in turn sends a message back to the muscles surrounding the spindle, causing a contraction. Note that this reflex provides for negative feedback: When a muscle and its spindle are stretched, the spindle sends a message that results in a muscle contraction that opposes the stretch.

Figure 8.5 Two kinds of proprioceptors regulate the contraction of a muscle When a muscle is stretched, the nerves from the muscle spindles transmit an increased frequency of impulses, resulting in a contraction of the surrounding muscle. Contraction of the muscle stimulates the Golgi tendon organ, which acts as a brake or shock absorber to prevent a contraction that is too quick or extreme.

When you set your foot down on a bump on the road, your knee bends a bit, stretching the extensor muscles of that leg. The sensory nerves of the spindles send action potentials to the motor neuron in the spinal cord, and the motor neuron sends action potentials to the extensor muscle. Contracting the extensor muscle straightens the leg, adjusting for the bump on the road. A physician who asks you to cross your legs and then taps just below the knee (Figure 8.6) is testing your stretch reflexes. The tap stretches the extensor muscles and their spindles, resulting in a message that jerks the lower leg upward. The same reflex contributes to walking; raising the upper leg reflexively moves the lower leg forward in readiness for the next step.

Figure 8.6 The knee-jerk reflex Here is one example of a stretch reflex.
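The negative-feedback character of the stretch reflex described above can be written as a tiny discrete-time loop. This is a cartoon, not a physiological model: the gain, the step count, and the rule that contraction subtracts directly from stretch are illustrative assumptions.

```python
# Cartoon of the stretch reflex as negative feedback: a sudden stretch is
# sensed by the muscle spindle, and the reflexive contraction opposes it.
# Gain and number of steps are arbitrary illustrative values.

RESTING_LENGTH = 1.0
REFLEX_GAIN = 0.5        # how strongly the spinal cord answers the spindle signal

length = 1.3             # a sudden stretch, e.g., the knee bends on a bump

for step in range(6):
    stretch = length - RESTING_LENGTH     # what the spindle signals
    contraction = REFLEX_GAIN * stretch   # reflexive command back to the muscle
    length -= contraction                 # the contraction opposes the stretch
    print(f"step {step}: remaining stretch = {length - RESTING_LENGTH:.3f}")
```

The disturbance shrinks on every pass through the loop; that is all "negative feedback" means here.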
The Golgi tendon organ, another proprioceptor, responds to increases in muscle tension. Located in the tendons at opposite ends of a muscle, it acts as a brake against an excessively vigorous contraction. Some muscles are so strong that they could damage themselves if too many fibers contracted at once. Golgi tendon organs detect the tension that results during a muscle contraction. Their impulses travel to the spinal cord, where they inhibit the motor neurons through messages from interneurons. In short, a vigorous muscle contraction inhibits further contraction by activating the Golgi tendon organs.

The proprioceptors not only control important reflexes but also provide the brain with information. Here is an illusion that you can demonstrate yourself: Find a small, dense object and a larger, less dense object that weighs the same as the small one. For example, you might try a lemon and a hollowed-out orange, with the peel pasted back together so it appears to be intact. Drop one of the objects onto someone’s hand while he or she is watching. (The watching is essential.) Then remove it and drop the other object onto the same hand. Most people report that the small one felt heavier. The reason is that with the larger object, people set themselves up with the expectation of a heavier weight. The actual weight displaces their proprioceptors less than expected and therefore yields the perception of a lighter object.

STOP & CHECK

5. If you hold your arm straight out and someone pulls it down slightly, it quickly bounces back. What proprioceptor is responsible?
6. What is the function of Golgi tendon organs?
Check your answers on page 239.

Units of Movement
The stretch reflex is a simple example of movement. More complex kinds include speaking, walking, threading a needle, and throwing a basketball through a hoop while off balance and evading two defenders. In many ways, these movements are different from one another, and they depend on different kinds of control by the nervous system.
Voluntary and Involuntary Movements

Reflexes are consistent automatic responses to stimuli. We generally think of reflexes as involuntary because they are insensitive to reinforcements, punishments, and motivations. The stretch reflex is one example; another is the constriction of the pupil in response to bright light.
EXTENSIONS AND APPLICATIONS
Infant Reflexes

Humans have few reflexes, although infants have several not seen in adults. For example, if you place an object firmly in an infant’s hand, the infant will reflexively grasp it tightly (the grasp reflex). If you stroke the sole of the foot, the infant will reflexively extend the big toe and fan the others (the Babinski reflex). If you touch an infant’s cheek, the head will turn toward the stimulated cheek, and the infant will begin to suck (the rooting reflex). The rooting reflex is not a pure reflex, as its intensity depends on the infant’s arousal and hunger levels. Although such reflexes fade away with time, the connections remain intact, not lost but suppressed by axons from the maturing brain. If the cerebral cortex is damaged, the infant reflexes are released from inhibition. In fact, neurologists and other physicians frequently test adults for infant reflexes. A physician who strokes the sole of your foot during a physical exam is probably looking for evidence of brain damage. This is hardly the most dependable test, but it is easy. If a
stroke on the sole of your foot makes you fan your toes like a baby, your cerebral cortex may be impaired. Infant reflexes sometimes return temporarily if alcohol, carbon dioxide, or other chemicals decrease the activity in the cerebral cortex. (You might try testing for infant reflexes in a friend who has consumed too much alcohol.)

Three reflexes in infants but ordinarily not in adults: (a) grasp reflex, (b) Babinski reflex, and (c) rooting reflex.
The grasp reflex enables an infant to cling to the mother while she travels.
Infants and children also tend more strongly than adults to have certain allied reflexes. If dust blows in your face, you will reflexively close your eyes and mouth and probably sneeze. These reflexes are allied in the sense that each of them tends to elicit the others. If you suddenly see a bright light—as when you emerge from a dark theater on a sunny afternoon—you will reflexively close your eyes, and you may also close your mouth and perhaps sneeze. Some adults react this way; a higher percentage of children do (Whitman & Packer, 1993).
Few behaviors can be classified as purely voluntary or involuntary, reflexive or nonreflexive. Take swallowing, for example. You can voluntarily swallow or inhibit swallowing, but only within certain limits. Try to swallow ten times in a row voluntarily (without drinking). The first swallow or two are easy, but soon you will find additional swallows difficult and unpleasant. Now try to inhibit swallowing for as long as you can, without spitting. Chances are you will not last long.

You might think of a behavior like walking as being purely voluntary, but even that example includes involuntary components. When you walk, you automatically compensate for the bumps and irregularities in the road. You probably also swing your arms automatically as an involuntary consequence of walking. Or try this: While sitting, raise your right foot and make clockwise circles. Keep your foot moving while you draw the number 6 in the air with your right hand. Or just move your right hand in counterclockwise circles. You will probably reverse the direction of your foot movement. It is difficult to make “voluntary” clockwise and counterclockwise movements on the same side of the body at the same time. (Curiously, it is less difficult
to move your left hand one direction while moving the right foot in the opposite direction.) In short, the distinction between voluntary and involuntary is blurry.
Movements with Different Sensitivity to Feedback

The military distinguishes between ballistic missiles and guided missiles. A ballistic missile is simply launched, like a thrown ball, with no way to make a correction if the aim is off. A guided missile, however, detects the target location and adjusts its trajectory one way or the other to correct for error in the original aim. Similarly, some movements are ballistic, and others are corrected by feedback. A ballistic movement is executed as a whole: Once initiated, it cannot be altered or corrected while it is in progress. A reflex, such as the stretch reflex or the contraction of the pupils in response to light, is a ballistic movement. Completely ballistic movements are rare; most behaviors are subject to feedback correction. For example, when you thread a needle, you make a slight movement, check your aim, and then make a readjustment. Similarly, a singer who holds a single note hears any wavering of the pitch and corrects it.

Sequences of Behaviors

Many of our behaviors consist of rapid sequences, as in speaking, writing, dancing, or playing a musical instrument. In certain cases, we can attribute these sequences to central pattern generators, neural mechanisms in the spinal cord or elsewhere that generate rhythmic patterns of motor output. Examples include the spinal cord mechanisms that generate wing flapping in birds, fin movements in fish, and the “wet dog shake.” Although a stimulus may activate a central pattern generator, it does not control the frequency of the alternating movements. For example, cats scratch themselves at a rate of three to four strokes per second. This rhythm is generated by cells in the lumbar segments of the spinal cord (Deliagina, Orlovsky, & Pavlova, 1983). The spinal cord neurons generate this same rhythm even if they are isolated from the brain and even if the muscles are paralyzed.

We refer to a fixed sequence of movements as a motor program. A motor program can be either learned or built into the nervous system. For an example of a built-in program, a mouse periodically grooms itself by sitting up, licking its paws, wiping them over its face, closing its eyes as the paws pass over them, licking the paws again, and so forth (Fentress, 1973). Once begun,
the sequence is fixed from beginning to end. Many people develop learned but predictable motor sequences. An expert gymnast will produce a familiar pattern of movements as a smooth, coordinated whole; the same can be said for skilled typists, piano players, and so forth. The pattern is automatic in the sense that thinking or talking about it interferes with the action.

By comparing species, we begin to understand how a motor program can be gained or lost through evolution. For example, if you hold a chicken above the ground and drop it, its wings will extend and flap. Even chickens with featherless wings make the same movements, though they fail to break their fall (Provine, 1979, 1981). Chickens, of course, still have the genetic programming to fly. On the other hand, ostriches, emus, and rheas, which have not used their wings for flight for millions of generations, have lost the genes for flight movements and do not flap their wings when dropped (Provine, 1984). (You might pause to think about the researcher who found a way to drop these huge birds to test the hypothesis.)

Nearly all birds reflexively spread their wings when dropped. However, emus—which lost the ability to fly through evolutionary time—do not spread their wings.

Do humans have any built-in motor programs? Yawning is one example (Provine, 1986). A yawn consists of a prolonged open-mouth inhalation, often accompanied by stretching, and a shorter exhalation. Yawns are very consistent in duration, with a mean of just under 6 seconds. Certain facial expressions are also programmed, such as smiles, frowns, and the raised-eyebrow greeting.
Module 8.1

In Closing: Categories of Movement

Charles Sherrington described a motor neuron in the spinal cord as “the final common path.” He meant that regardless of what sensory and motivational processes occupy the brain, the final result was always either a muscle contraction or the delay of a muscle contraction. However, a motor neuron and its associated muscle participate in a great many different kinds of movements, and we need many brain areas to control them.

Summary

1. Vertebrates have smooth, skeletal, and cardiac muscles. (p. 232)
2. Skeletal muscles range from slow muscles that do not fatigue to fast muscles that fatigue quickly. We rely on the slow muscles most of the time, but we recruit the fast muscles for brief periods of strenuous activity. (p. 234)
3. Proprioceptors are receptors sensitive to the position and movement of a part of the body. Two kinds of proprioceptors, muscle spindles and Golgi tendon organs, help regulate muscle movements. (p. 235)
4. Some movements, especially reflexes, proceed as a unit, with little if any guidance from sensory feedback. Other movements, such as threading a needle, are constantly guided and redirected by sensory feedback. (p. 237)

Answers to STOP & CHECK Questions

1. Each axon to the biceps muscles innervates about a hundred fibers; therefore, it is not possible to change the movement by just a few fibers more or less. In contrast, an axon to the eye muscles innervates only about three fibers. (p. 235)
2. Although a fish can move rapidly in cold water, it fatigues easily. (p. 235)
3. Ducks can fly enormous distances without evident fatigue, as they often do during migration. The white muscle of a chicken breast has the great power that is necessary to get a heavy body off the ground, but it fatigues rapidly. Chickens seldom fly far. (p. 235)
4. An ultramarathoner builds up large numbers of slow-twitch fibers at the expense of fast-twitch fibers. Therefore, endurance is great but maximum speed is not. (p. 235)
5. The muscle spindle. (p. 236)
6. The Golgi tendon organ responds to muscle tension and thereby prevents excessively strong muscle contractions. (p. 236)

Thought Question

Would you expect jaguars, cheetahs, and other great cats to have mostly slow-twitch, nonfatiguing muscles in their legs or mostly fast-twitch, quickly fatiguing muscles? What kinds of animals might have mostly the opposite kind of muscles?
Module 8.2
Brain Mechanisms of Movement
Why do we care how the brain controls movement? One practical goal is to help people with spinal cord damage or limb amputations. Their brains plan movements, but the messages cannot reach the muscles. Suppose we could listen in on their brain messages and decode their meaning. Then biomedical engineers might route those messages to muscle stimulators or robotic limbs. Researchers mapped the relationships between rats’ brain activity and their movements and then connected wires from the brains to robotic limbs. The rats learned to control the robotic movements with brain activity (Chapin, Moxon, Markowitz, & Nicolelis, 1999). In later studies, monkeys learned to move a joystick to control a cursor on a computer monitor, receiving reinforcement when the cursor reached its target. Researchers recorded the monkeys’ brain activity during these actions, and then attached electrodes from the brain to a computer, so that whenever the monkey intended a cursor movement, it happened (Lebedev et al., 2005; Serruya, Hatsopoulos, Paninski, Fellows, & Donoghue, 2002). We can imagine that the monkeys were impressed with their apparent psychic powers! People who suffer severe spinal cord damage continue to produce normal activity in the motor cortex when they want to move (Shoham, Halgren, Maynard, & Normann, 2001), so researchers are encouraged about the possibility of connecting their brains to robotic limbs as well (Nicolelis, 2001). To achieve precise movements, presumably physicians would need to insert electrodes directly into the motor areas of the brain. However, people can achieve a moderate degree of control using evoked potential recordings from the surface of the scalp (Millán, Renkens, Mouriño, & Gerstner, 2004; Wolpaw & McFarland, 2004). Improved understanding of the brain mechanisms of movement may improve these technologies.

Controlling movement is a more complicated matter than we might have guessed. Figure 8.7 outlines the major motor areas of the mammalian central nervous system. Don’t get too bogged down in details at this point; we shall attend to each area in due course.

Figure 8.7 The major motor areas of the mammalian central nervous system The cerebral cortex, especially the primary motor cortex, sends axons directly to the medulla and spinal cord. So do the red nucleus, reticular formation, and other brainstem areas. The medulla and spinal cord control muscle movements. The basal ganglia and cerebellum influence movement indirectly through their communication back and forth with the cerebral cortex and brainstem.
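The "listen in and decode" idea in the paragraphs above usually comes down to fitting a mapping from recorded firing rates to an intended movement variable such as cursor velocity. The sketch below shows one generic way to do that with an ordinary least-squares fit on synthetic data; it is not the method of any particular study cited here, and every number and variable name in it is invented.

```python
import numpy as np

# Generic sketch of decoding cursor velocity from neuronal firing rates with
# a linear model fit by least squares. Synthetic data only.

rng = np.random.default_rng(0)
n_samples, n_neurons = 500, 20

true_weights = rng.normal(size=(n_neurons, 2))                 # each neuron's tuning to (vx, vy)
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=2.0, size=(n_samples, 2))

# Fit the decoder from recorded rates to observed velocities...
weights, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# ...then use it to drive a cursor from new activity alone.
new_rates = rng.poisson(lam=5.0, size=n_neurons).astype(float)
predicted_velocity = new_rates @ weights
print(predicted_velocity)
```

In the animal studies described above, a decoder along these general lines is typically fit while the animal moves normally and is then switched to drive the cursor or robotic limb directly; the sketch captures that logic in miniature under the stated simplifications.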
The Cerebral Cortex

Since the pioneering work of Gustav Fritsch and Eduard Hitzig (1870), neuroscientists have known that direct electrical stimulation of the primary motor cortex—the precentral gyrus of the frontal cortex, just anterior to the central sulcus (Figure 8.8)—elicits movements. The motor cortex has no direct connections to the muscles; its axons extend to the brainstem and spinal cord, which generate the activity patterns that control the muscles (Shik & Orlovsky, 1976). The cerebral cortex is particularly important for complex actions such as talking, writing, or hand gestures. It is less important for coughing, sneezing, gagging, laughing, or crying (Rinn, 1984). Perhaps the lack of cerebral control explains why it is hard to perform such actions voluntarily. Laughing or coughing voluntarily is not the same as laughing or coughing spontaneously, and most people cannot cry or sneeze voluntarily.

Figure 8.8 Principal areas of the motor cortex in the human brain Cells in the premotor cortex and supplementary motor cortex are active during the planning of movements, even if the movements are never actually executed.

Figure 8.9 (which repeats part of Figure 4.24, p. 99) indicates which area of the motor cortex controls which area of the body. For example, the brain area shown next to the hand is active during hand movements. In each case, the brain area controls a structure on the opposite side of the body. However, don’t read this figure as implying that each spot in the motor cortex controls a single muscle. For example, movement of any finger or the wrist is associated with activity in a scattered population of cells, and the regions activated by any finger overlap the regions activated by other fingers, as shown in Figure 8.10 (Sanes, Donoghue, Thangaraj, Edelman, & Warach, 1995).

Figure 8.9 Coronal section through the primary motor cortex Stimulation at any point in the primary motor cortex is most likely to evoke movements in the body area shown. However, actual results are usually messier than this figure implies: For example, individual cells controlling one finger may be intermingled with cells controlling another finger. (Source: Adapted from Penfield & Rasmussen, 1950)

Figure 8.10 (image not available due to copyright restrictions)

For many years, researchers studied the motor cortex in laboratory animals by stimulating various neurons with brief electrical pulses, usually less than 50 milliseconds (ms) in duration. The results were brief, isolated muscle twitches. Later researchers found different results when they lengthened the pulses to half a second. Instead of brief twitches, they elicited complex movement patterns. For example, stimulation of one spot caused a monkey to make a grasping movement with its hand, move its hand to just in front of the mouth, and open its mouth (Graziano, Taylor, & Moore, 2002). Repeated stimulation of this same spot elicited the same result each time; the monkey always grasped and moved its hand to its mouth, regardless of what it had been doing at the time and regardless of where or in what position its
hand had been. That is, the stimulation produced a certain outcome, not a fixed set of muscle movements. Depending on the position of the arm, the stimulation might activate biceps muscles, triceps, or whatever. Although the motor cortex can direct contractions of a specific muscle, more often it orders an outcome and leaves it to the spinal cord and other areas to find the combination of contractions to produce that outcome (S. H. Scott, 2004). Just as the visual cortex becomes active when we imagine seeing something, the motor cortex becomes active when we imagine movements. For example, expert pianists say that when they listen to familiar, wellpracticed music, they imagine the finger movements and often start tapping the appropriate fingers as if they were playing the music. Brain recordings have con-
firmed that the finger area of the motor cortex is active when pianists listen to familiar music, even if they keep their fingers motionless (Haueisen & Knösche, 2001). Neurons in part of the inferior parietal cortex of monkeys are active during a movement and while a monkey watches another monkey do the same movement (Cisek & Kalaska, 2004; Fogassi et al., 2005). Brain-scan studies have demonstrated similar neurons in humans. These neurons, called mirror neurons, presumably enable the observer to understand and identify with the movements another individual is making. They respond when a monkey watches another monkey or a human watches another human. They do not respond when a monkey watches a human do something or when a human watches a robot (Tai, Scherfler, Brooks, Sawamoto, & Castiello, 2004). Evidently, there-
fore, their response reflects identification of the viewer with the actor: “The actor is like me, doing something I might do.” Many psychologists believe that the existence of such neurons is important for the complex social behaviors typical of humans and other primates.

Table 8.1 Some Disorders of the Spinal Column

Paralysis
  Description: Lack of voluntary movement in part of the body.
  Cause: Damage to spinal cord, motor neurons, or their axons.

Paraplegia
  Description: Loss of sensation and voluntary muscle control in both legs. Reflexes remain. Although no messages pass between the brain and the genitals, the genitals still respond reflexively to touch. Paraplegics have no genital sensations, but they can still experience orgasm (Money, 1967).
  Cause: Cut through the spinal cord above the segments attached to the legs.

Quadriplegia
  Description: Loss of sensation and muscle control in all four extremities.
  Cause: Cut through the spinal cord above the segments controlling the arms.

Hemiplegia
  Description: Loss of sensation and muscle control in the arm and leg on one side.
  Cause: Cut halfway through the spinal cord or (more commonly) damage to one hemisphere of the cerebral cortex.

Tabes dorsalis
  Description: Impaired sensation in the legs and pelvic region, impaired leg reflexes and walking, loss of bladder and bowel control.
  Cause: Late stage of syphilis. Dorsal roots of the spinal cord deteriorate.

Poliomyelitis
  Description: Paralysis.
  Cause: Virus that damages cell bodies of motor neurons.

Amyotrophic lateral sclerosis
  Description: Gradual weakness and paralysis, starting with the arms and later spreading to the legs. Both motor neurons and axons from the brain to the motor neurons are destroyed.
  Cause: Unknown.
Connections from the Brain to the Spinal Cord

All the messages from the brain must eventually reach the medulla and spinal cord, which control the muscles. Diseases of the spinal cord can impair the control of movement in various ways (Table 8.1). The various axons from the brain organize into two paths, the dorsolateral tract and the ventromedial tract. Nearly all movements rely on a combination of both the dorsolateral and ventromedial tracts, but many movements rely on one tract more than the other. The dorsolateral tract of the spinal cord is a set of axons from the primary motor cortex, surrounding areas, and the red nucleus, a midbrain area with output mainly to the arm muscles (Figure 8.11). Axons of the dorsolateral tract extend directly from the motor cortex to their target neurons in the spinal cord. In bulges of the medulla called pyramids, the dorsolateral tract crosses from one side of the brain to the contralateral (opposite) side of the spinal cord. (For that reason, the dorsolateral tract is also called the pyramidal tract.) It controls movements in peripheral areas, such as the hands, fingers, and toes. Why does each hemisphere control the contralateral side instead of its own side? We do not know,
but all vertebrates have this pattern. In newborn humans, the immature primary motor cortex has partial control of both ipsilateral and contralateral muscles. As the contralateral control improves over the first year and a half of life, it displaces the ipsilateral control, which gradually becomes weaker. In some children with cerebral palsy, the contralateral path fails to mature, and the ipsilateral path remains relatively strong. In fact, sometimes part of the clumsiness of children with cerebral palsy comes from competition between the ipsilateral and contralateral paths (Eyre, Taylor, Villagra, Smith, & Miller, 2001).

In contrast to the dorsolateral tract, the ventromedial tract includes axons from the primary motor cortex, surrounding areas, and also from many other parts of the cortex. In addition, it includes axons that originate from the midbrain tectum, the reticular formation, and the vestibular nucleus, a brain area that receives input from the vestibular system (see Figure 8.11). Axons of the ventromedial tract go to both sides of the spinal cord, not just the contralateral side. The ventromedial tract controls mainly the muscles of the neck, shoulders, and trunk and therefore such movements as walking, turning, bending, standing up, and sitting down (Kuypers, 1989). Note that these movements are necessarily bilateral; you can move your fingers on just one side, but any movement of your neck or trunk must include both sides.

Suppose someone has suffered a stroke that damaged the primary motor cortex of the left hemisphere. The result is a loss of the dorsolateral tract from that hemisphere and a loss of movement control on the right
side of the body. Eventually, depending on the exact location and amount of damage, the person may regain some muscle control from spared axons in the dorsolateral tract. If not, the possibility remains of using the ventromedial tract to approximate the intended movement. For example, someone who has no direct control of the hand muscles might move the shoulders, trunk, and hips in a way that repositions the hand in a crude way. Also, because of connections between the left and right halves of the spinal cord, normal movements of one arm or leg can induce associated movements on the other side, at least to a limited degree (Edgley, Jankowska, & Hammar, 2004).

Figure 8.11 The dorsolateral and ventromedial tracts The dorsolateral tract (a) crosses from one side of the brain to the opposite side of the spinal cord and controls precise and discrete movements of the extremities, such as hands, fingers, and feet. The ventromedial tract (b) produces bilateral control of trunk muscles for postural adjustments and bilateral movements such as standing, bending, turning, and walking.
STOP & CHECK

1. What evidence indicates that cortical activity represents the “idea” of the movement and not just the muscle contractions?
2. What kinds of movements does the dorsolateral tract control? The ventromedial tract?
Check your answers on page 252.
Areas Near the Primary Motor Cortex

A number of areas near the primary motor cortex also contribute to movement in diverse ways (see Figure 8.8). In the posterior parietal cortex, some neurons respond primarily to visual or somatosensory stimuli, others respond mostly to current or future movements, and still others respond to a complicated mixture of the stimulus and the upcoming response (Shadlen & Newsome, 1996). You might think of the posterior parietal cortex as keeping track of the position of the body relative to the world (Snyder, Grieve, Brotchie, & Andersen, 1998). Contrast the effects of posterior parietal damage with those of occipital or temporal damage. People with posterior parietal damage can accurately describe what they see, but they have trouble converting their perception into action. Although they can walk toward something they hear, they cannot walk toward something they see, nor can they reach out to grasp something—even after describing its size, shape, and angle. They seem to know what it is but not where it is. In contrast, people with damage to parts of the occipital or temporal cortex have trouble describing what they see, but they can reach out and pick up objects, and when walking, they step over or go around the objects in their way (Goodale, 1996; Goodale, Milner,
Jakobson, & Carey, 1991). In short, seeing what is different from seeing where, and seeing where is critical for movement. The primary somatosensory cortex is the main receiving area for touch and other body information, as mentioned in Chapter 7. It sends a substantial number of axons directly to the spinal cord and also provides the primary motor cortex with sensory information. Neurons in this area are especially active when the hand grasps something, responding both to the shape of the object and the type of movement, such as grasping, lifting, or lowering (Gardner, Ro, Debowy, & Ghosh, 1999). Cells in the prefrontal cortex, premotor cortex, and supplementary motor cortex (see Figure 8.8) actively prepare for a movement, sending messages to the primary motor cortex that actually instigates the movement. These three areas contribute in distinct ways. The prefrontal cortex responds to lights, noises, and other sensory signals that lead to a movement. It also calculates probable outcomes of various actions and plans movements according to those outcomes (Tucker, Luu, & Pribram, 1995). If you had damage to this area, you might shower with your clothes on, salt the tea instead of the food, and pour water on the tube of toothpaste instead of the toothbrush (M. F. Schwartz, 1995). Interestingly, this area is inactive during dreams, and the actions we dream about doing are usually as illogical and poorly planned as those of people with prefrontal cortex damage (Braun et al., 1998; Maquet et al., 1996). The premotor cortex is active during preparations for a movement and somewhat active during movement itself. It receives information about the target in space, to which the body is directing its movement, as well as information about the current position and posture of the body itself (Hoshi & Tanji, 2000). Both kinds of information are, of course, necessary to move a body part toward some target. The premotor cortex sends output to both the primary motor cortex and the spinal cord, organizing the direction of the movements in space. For example, you would have to use different muscles to move your arm upward depending on whether your hand was palm-up or palm-down at the time, but the same premotor cortex cells would be active in either case (Kakei, Hoffman, & Strick, 2001). The supplementary motor cortex is important for planning and organizing a rapid sequence of movements, such as pushing, pulling, and then turning a stick, in a particular order (Tanji & Shima, 1994). In one study, researchers compared fMRI results when people simply made movements and when they concentrated on their feelings of intention prior to the movements. Concentrating on intentions increased activity of the supplementary motor cortex (Lau, Rogers, Haggard, & Passingham, 2004). In another study, research-
ers electrically stimulated the supplementary motor cortex while people had their brains exposed in preparation for surgery. Light stimulation elicited reports of an “urge” to move some body part or an expectation that such a movement was about to start. Longer or stronger stimulation produced the movements themselves (Fried et al., 1991).
STOP & CHECK

3. How does the posterior parietal cortex contribute to movement? The prefrontal cortex? The premotor cortex? The supplementary motor cortex?
Check your answers on page 252.
Conscious Decisions and Movements

Where does conscious decision come into all of this? Each of us has the feeling, “I consciously decide to do something, and then I do it.” That sequence seems so obvious that we might not even question it, but research on the issue has found results that most people consider surprising. Imagine yourself in the following study (Libet, Gleason, Wright, & Pearl, 1983). You are instructed to flex your wrist whenever you choose. That is, you don’t choose which movement to make, but you can choose the time freely. You should not decide in advance when to move, but let the urge occur as spontaneously as possible. The researchers are interested in three measurements. First, they attach electrodes to your scalp to record evoked electrical activity over your motor cortex (p. 107). Second, they attach a sensor to your hand to record when it starts to move. The third measurement is your self-report: You watch a clocklike device, as shown in Figure 8.12, in which a spot of light moves around the circle every 2.56 seconds. You are to watch that clock. Do not decide in advance that you will move when the spot on the clock gets to a certain point; however, when you do decide to move, note exactly where the spot of light is at that moment, and remember it so you can report it later.

The procedure starts. You think, “Not yet . . . not yet . . . not yet . . . NOW!” You note where the spot was at that critical instant and report, “I made my decision when the light was at the 25 position.” The researchers compare your report to their records of your brain activity and your wrist movement. On the average, people report that their decision to move occurred about 200 ms before the actual movement. (Note: It’s the decision that occurred then. People make the report a few seconds later.) For example, if you reported that
Figure 8.12 (image not available due to copyright restrictions)
your decision to move occurred at position 25, your decision preceded the movement by 200 ms, so the movement itself began at position 30. (Remember, the light moves around the circle in 2.56 seconds.) However, your motor cortex produces a particular kind of activity called a readiness potential before any voluntary movement, and on the average, the readiness potential begins at least 500 ms before the movement. In
this example, it would start when the light was at position 18, as illustrated in Figure 8.13. The results varied among individuals, but most were similar. The key point is that the brain activity responsible for the movement apparently began before the person’s conscious decision! The results seem to indicate that your conscious decision does not cause your action; rather, you become conscious of the decision after the process leading to action has already been underway for about 300 milliseconds. As you can imagine, this experiment has been highly controversial. The result itself has been replicated in several laboratories, so the facts are solid (e.g., Lau et al., 2004; Trevena & Miller, 2002). One challenge to the interpretation was that perhaps people cannot accurately report the time they become conscious of something. However, when people are asked to report the time of a sensory stimulus, or the time that they made a movement (instead of the decision to move), their estimates are usually within 30–50 ms of the correct time (Lau et al., 2004; Libet et al., 1983). That is, they cannot exactly report the time when something happened, but they are close enough. Both scientists and philosophers have raised other objections, and we should not expect the question to be settled soon. Nevertheless, the study at least raises the serious possibility that what we identify as a “conscious” decision is more the perception of an ongoing process than the cause of it. If so, we return to the issues raised in Chapter 1: What is the role of consciousness? Does it serve a useful function, or is it just an accident of the way we are built? These results do not deny that you make a voluntary decision. The implication, however, is that your
voluntary decision is, at first, unconscious. Just as a sensory stimulus has to reach a certain strength before it becomes conscious, your decision to do something has to reach a certain strength before it becomes conscious. Of course, if you think of “voluntary” as synonymous with “conscious,” then you have a contradiction.

Figure 8.13 Results from study of conscious decision and movement On the average, the brain’s readiness potential began almost 300 ms before the reported decision, which occurred 200 ms before the movement.

Studies of patients with brain damage shed further light on the issue. Researchers used the same spot-going-around-the-clock procedure with patients who had damage to the parietal cortex. These patients were just as accurate as other people in reporting when a tone occurred. However, when they tried to report when they formed an intention to make a hand movement, the time they reported was virtually the same as the time of the movement itself. That is, they seemed unaware of any intention before they began the movement. Evidently, the parietal cortex monitors the preparation for a movement, including whatever it is that people ordinarily experience as their feeling of “intention” (Sirigu et al., 2004). Without the parietal cortex, they experienced no such feeling.

Another kind of patient shows a different peculiarity of consciousness. Patients with damage to the primary motor cortex of the right hemisphere are unable to make voluntary movements with their left arm or leg, but some of them deny that they have a problem. That is, they insist that they can and do make voluntary movements on the left side. Neurologists call this condition anosognosia (uh-NO-sog-NO-see-uh), meaning ignorance of the presence of disease. Most people with an intact motor cortex but damaged white matter leading to the spinal cord are aware of their paralysis. Anosognosia occurs in people with damage to the right hemisphere motor cortex itself and parts of the surrounding areas. Evidently, the motor cortex monitors feedback from the muscles to determine whether the movements that it ordered have occurred. In the absence of the motor cortex, the premotor cortex generates an “intention” or representation of the intended movement, but it fails to receive any feedback about whether the intended movement was executed (Berti et al., 2005).
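The timing numbers in the clock procedure are easy to reconstruct. The short sketch below assumes the dial is divided into 60 positions (consistent with the 2.56-second revolution and the positions quoted above); the function and variable names are ours, not Libet's.

```python
# Convert Libet-style clock positions to times, assuming a 60-position dial
# that completes one revolution every 2.56 s, as stated in the text.

REVOLUTION_MS = 2560
POSITIONS = 60
MS_PER_POSITION = REVOLUTION_MS / POSITIONS   # about 42.7 ms per position

def interval_ms(earlier_pos, later_pos):
    """Elapsed time between two positions within a single revolution."""
    return (later_pos - earlier_pos) * MS_PER_POSITION

readiness_potential_pos = 18   # readiness potential begins (example in the text)
reported_decision_pos = 25     # participant's reported moment of decision
movement_pos = 30              # wrist movement begins

print(round(interval_ms(reported_decision_pos, movement_pos)))            # about 213 ms (the "200 ms")
print(round(interval_ms(readiness_potential_pos, movement_pos)))          # about 512 ms (the "500 ms")
print(round(interval_ms(readiness_potential_pos, reported_decision_pos))) # about 299 ms
```

So, as Figure 8.13 summarizes, the readiness potential leads the reported decision by roughly 300 ms in this example.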
STOP & CHECK

4. Explain the evidence that someone’s conscious decision to move does not cause the movement.
5. After damage to the parietal cortex, what happens to people’s reports of their intentions to move?
6. What is anosognosia, and what brain abnormality is associated with it?
Check your answers on page 252.
The Cerebellum

The term cerebellum is Latin for “little brain.” Decades ago, the function of the cerebellum was described as simply “balance and coordination.” Well, yes, people with cerebellar damage do lose balance and coordination, but that description greatly understates the importance of this structure. The cerebellum contains more neurons than the rest of the brain combined (R. W. Williams & Herrup, 1988) and an enormous number of synapses. So the cerebellum has far more capacity for processing information than its small size might suggest.

The most obvious effect of cerebellar damage is trouble with rapid movements that require accurate aim and timing. For example, people with cerebellar damage have trouble tapping a rhythm, clapping hands, pointing at a moving object, speaking, writing, typing, or playing a musical instrument. They are impaired at almost all athletic activities except a few like weightlifting that do not require aim or timing. Even long after the damage, when they seem to have recovered, they remain slow on sequences of movements and even on imagining sequences of movements (González, Rodríguez, Ramirez, & Sabate, 2005). They are normal, however, at a continuous motor activity, even if it has a kind of rhythm (Spencer, Zelaznik, Diedrichsen, & Ivry, 2003). For example, they can draw continuous circles: although the drawing has a rhythm, it does not require rhythmically starting or stopping an action.
Here is one quick way to test someone’s cerebellum: Ask the person to focus on one spot and then to move the eyes quickly to another spot. Saccades (sa-KAHDS), ballistic eye movements from one fixation point to another, depend on impulses from the cerebellum and the frontal cortex to the cranial nerves. A healthy person’s eyes move from one fixation point to another by a single movement or by one large movement and a small correction at the end. Someone with cerebellar damage, however, has difficulty programming the angle and distance of eye movements (Dichgans, 1984). The eyes make many short movements until, by trial and error, they eventually find the intended spot.

Another test of cerebellar damage is the finger-to-nose test. The person is instructed to hold one arm straight out and then, at command, to touch his or her
nose as quickly as possible. A normal person does so in three steps. First, the finger moves ballistically to a point just in front of the nose. This move function depends on the cerebellar cortex (the surface of the cerebellum), which sends messages to the deep nuclei (clusters of cell bodies) in the interior of the cerebellum (Figure 8.14). Second, the finger remains steady at that spot for a fraction of a second. This hold function depends on the nuclei alone (Kornhuber, 1974). Finally, the finger moves to the nose by a slower movement that does not depend on the cerebellum.

After damage to the cerebellar cortex, a person has trouble with the initial rapid movement. Either the finger stops too soon or it goes too far, striking the face. If certain cerebellar nuclei have been damaged, the person may have difficulty with the hold segment: The finger reaches a point just in front of the nose and then wavers.

The symptoms of cerebellar damage resemble those of alcohol intoxication: clumsiness, slurred speech, and inaccurate eye movements. A police officer testing someone for drunkenness may use the finger-to-nose test or similar tests because the cerebellum is one of the first brain areas that alcohol affects.
What, then, is the role of the cerebellum? Masao Ito (1984) proposed that one key role is to establish new motor programs that enable one to execute a sequence of actions as a whole. Inspired by this idea, many researchers reported evidence that cerebellar damage impairs motor learning. Richard Ivry and his colleagues have emphasized the importance of the cerebellum for behaviors that depend on precise timing of short intervals (from about a millisecond to 1.5 seconds). Any sequence of rapid movements obviously requires timing. Many perceptual and cognitive tasks also require timing—for example, judging which of two visual stimuli is moving faster or listening to two pairs of tones and judging whether the delay was longer between the first pair or the second pair.
Figure 8.14 Location of the cerebellar nuclei relative to the cerebellar cortex In the inset at the upper left, the line indicates the plane shown in detail at the lower right.

Evidence of a Broad Role

The cerebellum is not only a motor structure. In one study, functional MRI measured cerebellar activity while people performed several tasks with a single set of objects (Gao et al., 1996). When they simply lifted objects, the cerebellum showed little activity. When they felt things with both hands to decide whether they were the same or different, the cerebellum was much more active. The cerebellum showed activity even when someone held a hand steady and the experimenter rubbed an object across it. That is, the cerebellum responded to sensory stimuli even in the absence of movement.

People who are accurate at one kind of timed movement, such as tapping a rhythm with a finger, tend also to be good at other timed movements, such as tapping a rhythm with a foot, and at judging which visual stimulus moved faster and which intertone delay was longer. People with cerebellar damage are impaired at all of these tasks but unimpaired at controlling the force of a movement or at judging which tone is louder (Ivry & Diener, 1991; Keele & Ivry, 1990). Evidently, the cerebellum is important mainly for tasks that require timing.

The cerebellum also appears critical for certain aspects of attention. For example, in one experiment, people were told to keep their eyes fixated on a central
point. At various times, they would see the letter E on either the left or right half of the screen, and they were to indicate which of four orientations the E was in, without moving their eyes. Sometimes they saw a signal telling where the letter would be on the screen. For most people, that signal improved their performance even if it appeared just 100 ms before the letter. For people with cerebellar damage, the signal had to appear nearly a second before the letter to be helpful. Evidently, people with cerebellar damage need longer to shift their attention (Townsend et al., 1999).

So the cerebellum appears to be linked to timing, certain aspects of attention, and probably other abilities as well. Are these separate functions that just happen to be located in the same place? Or can we reduce them all to a single theme? (For example, maybe shifting attention requires timing or aim.) These unanswered questions require a careful analysis of behavior, not just a study of the nervous system.
Cellular Organization

The cerebellum receives input from the spinal cord, from each of the sensory systems by way of the cranial nerve nuclei, and from the cerebral cortex. That information eventually reaches the cerebellar cortex, the surface of the cerebellum (see Figure 8.14). Figure 8.15 shows the types and arrangements of neurons in the cerebellar cortex. The figure is complex, but concentrate on these main points:

• The neurons are arranged in a precise geometrical pattern, with multiple repetitions of the same units.
• The Purkinje cells are flat cells in sequential planes.
• The parallel fibers are axons parallel to one another and perpendicular to the planes of the Purkinje cells.
• Action potentials in varying numbers of parallel fibers excite one Purkinje cell after another. Each Purkinje cell then transmits an inhibitory message to cells in the nuclei of the cerebellum (clusters of cell bodies in the interior of the cerebellum) and the vestibular nuclei in the brainstem, which in turn send information to the midbrain and the thalamus.
• Depending on which and how many parallel fibers are active, they might stimulate only the first few Purkinje cells or a long series of them. Because the parallel fibers’ messages reach different Purkinje cells one after another, the greater the number of excited Purkinje cells, the greater their collective duration of response. That is, if the parallel fibers stimulate only the first few Purkinje cells, the result is a brief message to the target cells; if they stimulate more Purkinje cells, the message lasts longer. The output of Purkinje cells controls the timing of a movement, including both its onset and offset (Thier, Dicke, Haas, & Barash, 2000).

Figure 8.15 Cellular organization of the cerebellum Parallel fibers (yellow) activate one Purkinje cell after another. Purkinje cells (red) inhibit a target cell in one of the nuclei of the cerebellum (not shown, but toward the bottom of the illustration). The more Purkinje cells that respond, the longer the target cell is inhibited. In this way, the cerebellum controls the duration of a movement.
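The last point in the list above amounts to a simple timing rule: because activity sweeps along the row of Purkinje cells, recruiting more of them stretches out the total period of inhibition on the target nucleus. The sketch below is a toy illustration of that rule; the delay and burst durations are invented numbers, not measured values.

```python
# Toy illustration of duration coding: parallel fibers reach Purkinje cells
# one after another, and each active Purkinje cell inhibits the target for a
# short time, so recruiting more cells lengthens the total inhibition.

CONDUCTION_DELAY_MS = 2   # time for activity to reach the next Purkinje cell (invented)
BURST_DURATION_MS = 5     # how long each Purkinje cell inhibits the target (invented)

def inhibition_duration(n_purkinje_cells):
    """Total time the target cell is inhibited, from the first cell's onset
    to the last cell's offset."""
    if n_purkinje_cells == 0:
        return 0
    last_onset = (n_purkinje_cells - 1) * CONDUCTION_DELAY_MS
    return last_onset + BURST_DURATION_MS

for n in (1, 5, 20):
    print(n, "Purkinje cells ->", inhibition_duration(n), "ms of inhibition")
```

More recruited Purkinje cells, longer inhibition, longer controlled interval: that is the sense in which the cerebellar circuit sets the duration of a movement.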
STOP & CHECK
7. What kind of perceptual task would be most impaired by damage to the cerebellum? 8. How are the parallel fibers arranged relative to one another and to the Purkinje cells? 9. If a larger number of parallel fibers are active, what is the effect on the collective output of the Purkinje cells? 10. Do Purkinje cells control the strength or duration of a movement? Check your answers on page 253.
The Basal Ganglia
The term basal ganglia applies collectively to a group of large subcortical structures in the forebrain (Figure 8.16).1 Various authorities differ in which structures they include as part of the basal ganglia, but everyone includes at least the caudate nucleus, the putamen (pyuh-TAY-men), and the globus pallidus. Input comes to the caudate nucleus and putamen, mostly from the cerebral cortex. Output from the caudate nucleus and putamen goes to the globus pallidus and from there mainly to the thalamus, which relays it to the cerebral cortex, especially its motor areas and the prefrontal cortex (Hoover & Strick, 1993).
1 Ganglia is the plural of ganglion, so the term basal ganglia is a plural noun.
Most of the output from the globus pallidus to the thalamus releases GABA, an inhibitory transmitter, and neurons in the globus pallidus show much spontaneous activity. Thus, the globus pallidus is constantly inhibiting the thalamus. Input from the caudate nucleus and putamen tells the globus pallidus which movements to stop inhibiting. With extensive damage to the globus pallidus, as in people with Huntington’s disease (which we shall consider later), the result is a lack of inhibition of all movements and therefore many involuntary, jerky movements. In effect, the basal ganglia select which movement to make by ceasing to inhibit it. This circuit is particularly important for self-initiated behaviors. For example, a monkey in one study was trained to move one hand to the left or right to receive food. On trials when it heard a signal indicating exactly when to move, the basal ganglia showed little activity. However, on other trials, the monkey saw a light indicating that it should start its movement in not less than 1.5 seconds and finish in not more than 3 seconds. Therefore, the monkey had to judge the interval and choose its own starting time. Under those conditions, the basal ganglia were highly active (Turner & Anderson, 2005). In another study, people used a computer mouse to draw lines on a screen while researchers used PET scans to examine brain activity. Activity in the basal ganglia increased when people drew a new line but not when they traced a line already on the screen (Jueptner & Weiller, 1998). Again, the basal ganglia seem critical for initiating an action but not when the action is directly guided by a stimulus.
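The circuit logic described above—tonic inhibition from the globus pallidus, lifted only for the movement that the caudate nucleus and putamen select—can be restated as a simple gating rule. The sketch below is only a schematic of that verbal description, not a neural model; the function name and Boolean inputs are invented for illustration.

```python
# Schematic of the "select by disinhibition" idea described above.
# 'striatal_input_active' stands for the caudate nucleus and putamen telling
# the globus pallidus to stop inhibiting one particular movement.

def thalamus_drives_movement(striatal_input_active: bool,
                             globus_pallidus_intact: bool = True) -> bool:
    # The globus pallidus fires spontaneously, inhibiting the thalamus by default.
    pallidal_inhibition = globus_pallidus_intact and not striatal_input_active
    # The thalamus (and hence the motor cortex) is released only when that inhibition stops.
    return not pallidal_inhibition

print(thalamus_drives_movement(striatal_input_active=False))  # False: movement held back
print(thalamus_drives_movement(striatal_input_active=True))   # True: selected movement released
# With extensive pallidal damage (as in Huntington's disease), inhibition is lost for everything:
print(thalamus_drives_movement(striatal_input_active=False,
                               globus_pallidus_intact=False))  # True: unwanted, involuntary movement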
STOP & CHECK
11. How does damage to the basal ganglia lead to involuntary movements? Check your answer on page 253.
Figure 8.16 Location of the basal ganglia
The basal ganglia surround the thalamus and are surrounded by the cerebral cortex. (Labeled structures: caudate nucleus, putamen, globus pallidus (lateral and medial parts), thalamus, subthalamic nucleus, and substantia nigra.)
Brain Areas and Motor Learning
Of all the brain areas responsible for control of movement, which are important for learning new skills? The apparent answer is, all of them. Neurons in the motor cortex adjust their responses as a person or animal learns a motor skill. At first, movements are slow and inconsistent. As movements become faster, relevant neurons in the motor cortex increase their firing rates (D. Cohen & Nicolelis, 2004). After prolonged training, the movement patterns become more consistent from trial to trial, and so do the patterns of activity in the motor cortex. In engineering terms, the motor cortex increases its signal-to-noise ratio (Kargo & Nitz, 2004).
The basal ganglia are critical for learning motor skills, for organizing sequences of movement into a whole, and in general, for the kinds of learning that we can't easily express in words (Graybiel, 1998; Seger & Cincotta, 2005). For example, when you are first learning to drive a car, you have to think about everything you do. After much experience, you can signal for a left turn, change gears, turn the wheel, and change speed all at once in a single smooth movement. If you try to explain exactly what you do, you will probably find it difficult. Similarly, if you know how to tie a man's tie, try explaining it to someone who doesn't know—without any hand gestures. Or explain to someone how to draw a spiral without using the word spiral and again without any hand gestures. (Try It Yourself.) People with basal ganglia damage are impaired at learning motor skills like these and at converting new movements into smooth, "automatic" responses (Poldrack et al., 2005; Willingham, Koroshetz, & Peterson, 1996).
STOP & CHECK
12. What kind of learning depends most heavily on the basal ganglia? Check your answer on page 253.
Module 8.2 In Closing: Movement Control and Cognition It is tempting to describe behavior in three steps—first we perceive, then we think, and finally we act. As you have seen, the brain does not handle the process in such discrete steps. For example, the posterior parietal cortex monitors the position of the body relative to visual space and therefore helps guide movement. Thus, its functions are sensory, cognitive, and motor. The cerebellum has traditionally been considered a major part of the motor system, but it is now known to be equally important in timing sensory processes. People with basal ganglia damage are slow to start or select a movement. They are also often described as cognitively slow; that is, they hesitate to make any kind of choice. In
short, organizing a movement is not something we tack on at the end of our thinking; it is intimately intertwined with all of our sensory and cognitive processes. The study of movement is not just the study of muscles; it is the study of how we decide what to do.
Summary
1. The primary motor cortex is the main source of brain input to the spinal cord. The spinal cord contains central pattern generators that actually control the muscles. (p. 241)
2. The primary motor cortex produces patterns representing the intended outcome, not just the muscle contractions. Neurons in part of the parietal cortex respond to both a self-produced movement and an observation of a similar movement by another individual. (p. 242)
3. The dorsolateral tract, which controls movements in the periphery of the body, has axons that cross from one side of the brain to the opposite side of the spinal cord. (p. 243)
4. The ventromedial tract controls bilateral movements near the midline of the body. (p. 243)
5. Areas near the primary motor cortex—including the prefrontal, premotor, and supplementary motor cortices—are active in detecting stimuli for movement and preparing for a movement. (p. 245)
6. When people identify the instant when they formed a conscious intention to move, their time precedes the actual movement by about 200 ms but follows the start of motor cortex activity by about 300 ms. These results suggest that what we call a conscious decision is our perception of a process already underway, not really the cause of it. (p. 245)
7. People with damage to part of the parietal cortex fail to perceive any intention prior to the start of their own movements. (p. 247)
8. Some people with damage to the primary motor cortex of the right hemisphere are paralyzed on the left side but insist that they can still move. Evidently, they fail to receive the feedback that indicates lack of movement. (p. 247)
9. The cerebellum has multiple roles in behavior, including sensory functions related to perception of the timing or rhythm of stimuli. Its role in the control of movement is especially important for timing, aiming, and correcting errors. (p. 247)
10. The cells of the cerebellum are arranged in a very regular pattern that enables them to produce outputs of well-controlled duration. (p. 249)
11. The basal ganglia are a group of large subcortical structures that are important for selecting and inhibiting particular movements. Damage to the output from the basal ganglia leads to jerky, involuntary movements. (p. 250)
12. The learning of a motor skill depends on changes occurring in both the cerebral cortex and the basal ganglia. (p. 251)
Answers to STOP & CHECK
Questions 1. Activity in the motor cortex leads to a particular outcome, such as movement of the hand to the mouth, regardless of what muscle contractions are necessary given the hand’s current location. Also, neurons in part of the parietal cortex respond similarly to self-produced movement and to observation of a similar movement by another individual. (p. 244) 2. The dorsolateral tract controls detailed movements in the periphery on the contralateral side of the body. (For example, the dorsolateral tract from the left hemisphere controls the right side of the body.) The ventromedial tract controls the trunk muscles bilaterally. (p. 244) 3. The posterior parietal cortex is important for perceiving the location of objects and the position of the body relative to the environment, including those objects. The prefrontal cortex responds to sensory stimuli that call for some movement. The premotor cortex is active in preparing a movement immediately before it occurs. The supplementary motor cortex is especially active in preparing for a rapid sequence of movements. (p. 245) 4. Researchers recorded a readiness potential in the motor cortex, beginning about 500 ms before the start of the movement. When people reported the time that they felt the intention to move, the reported intention occurred about 200 ms before the movement and therefore 300 ms after the start of motor cortex activity that led to the movement. (p. 247) 5. After damage to the parietal cortex, people do not monitor the processes leading up to a movement. When they try to report the time of an intention to move, they report the same time when the movement actually began. That is, they are not aware of any intention before the movement itself. (p. 247) 6. Anosognosia occurs when someone is paralyzed on the left side but denies having such a problem. It is associated with damage to the motor cortex of the right hemisphere and parts of the surrounding areas. (p. 247)
7. Damage to the cerebellum strongly impairs perceptual tasks that depend on timing. (p. 250) 8. The parallel fibers are parallel to one another and perpendicular to the planes of the Purkinje cells. (p. 250) 9. As a larger number of parallel fibers become active, the Purkinje cells increase their duration of response. (p. 250) 10. Duration (p. 250) 11. Output from the basal ganglia to the thalamus consists mainly of the inhibitory transmitter GABA. Ordinarily, the basal ganglia produce steady output, inhibiting all movements or all except the ones selected at the time. After damage to the basal gan-
glia, the thalamus, and therefore the cortex, receive less inhibition. Thus, they produce unwanted actions. (p. 250) 12. The basal ganglia are essential for learning habits that are difficult to describe in words. (p. 251)
Thought Question Human infants are at first limited to gross movements of the trunk, arms, and legs. The ability to move one finger at a time matures gradually over at least the first year. What hypothesis would you suggest about which brain areas controlling movement mature early and which areas mature later?
Module 8.3
Disorders of Movement
Even if your nervous system and muscles are completely healthy, you may sometimes find it difficult to move in the way you would like. For example, if you have just finished a bout of unusually strenuous exercise, your muscles may be so fatigued that you can hardly move them voluntarily, even though they keep twitching. Or if your legs "fall asleep" while you are sitting in an awkward position, you may stumble and even fall when you try to walk. Certain neurological disorders produce exaggerated and lasting movement impairments. We consider two examples: Parkinson's disease and Huntington's disease.
Parkinson’s Disease The symptoms of Parkinson’s disease (also known as Parkinson disease) are rigidity, muscle tremors, slow movements, and difficulty initiating physical and mental activity (M. T. V. Johnson et al., 1996; Manfredi, Stocchi, & Vacca, 1995; Pillon et al., 1996). It strikes about 1 or 2 people per thousand over the age of 40, with increasing incidence at later ages. Although the motor problems are the most obvious, the disorder goes beyond movement. Parkinsonian patients are also slow on cognitive tasks, such as imagining events or actions, even when they don’t have to make any movements (Sawamoto, Honda, Hanakawa, Fukuyama, & Shibasaki, 2002). Most patients also become depressed at an early stage, and many show deficits of memory and reasoning. These mental symptoms are probably part of the disease itself, not just a reaction to the muscle failures (Ouchi et al., 1999). People with Parkinson’s disease are not paralyzed or weak; they are impaired at initiating spontaneous movements in the absence of stimuli to guide their actions. Rats with Parkinsonian-type brain damage have few spontaneous movements, but they respond well to strong stimuli (Horvitz & Eyny, 2000). Parkinsonian patients sometimes walk surprisingly well when following a parade, when walking up a flight of stairs, or when walking across lines drawn at one-step intervals (Teitelbaum, Pellis, & Pellis, 1991).
The immediate cause of Parkinson’s disease is the gradual progressive death of neurons, especially in the substantia nigra, which sends dopamine-releasing axons to the caudate nucleus and putamen. People with Parkinson’s disease lose these axons and therefore all the effects that their dopamine would have produced. One of the effects of dopamine is inhibition of the caudate nucleus and putamen, and a decrease in that inhibition means that the caudate nucleus and putamen increase their stimulation of the globus pallidus. The result is increased inhibition of the thalamus and cerebral cortex, as shown in Figure 8.17 (Wichmann, Vitek, & DeLong, 1995). In summary, a loss of dopamine activity leads to less stimulation of the motor cortex and slower onset of movements (Parr-Brownlie & Hyland, 2005). Researchers estimate that the average person over age 45 loses substantia nigra neurons at a rate of almost 1% per year. Most people have enough to spare, but some either start with fewer or lose them faster. If the number of surviving substantia nigra neurons declines below 20%–30% of normal, Parkinsonian symptoms begin (Knoll, 1993). Symptoms become more severe as the cell loss continues.
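A rough calculation shows why most people "have enough to spare." The text does not specify whether the roughly 1% yearly loss applies to the original number of neurons or to the cells still surviving, so the sketch below tries both readings; everything here is back-of-the-envelope arithmetic chosen for illustration, not data.

```python
import math

# Rough arithmetic from the estimates above: ~1% of substantia nigra neurons
# lost per year after age 45, with symptoms once the surviving fraction falls
# to roughly 20-30% of normal. The 25% threshold and the linear-vs-compounding
# readings of "1% per year" are assumptions made here for illustration.

threshold = 0.25   # assume symptoms appear near 25% of the normal count
rate = 0.01        # ~1% per year

years_linear = (1.0 - threshold) / rate                      # 1 percentage point per year
years_compound = math.log(threshold) / math.log(1 - rate)    # 1% of the remaining cells per year

print(f"Linear loss: ~{years_linear:.0f} years after age 45 (well past age 100)")
print(f"Compounding loss: ~{years_compound:.0f} years after age 45")
```

Under either reading, someone who starts with a full complement of neurons and loses them at the average rate would not reach the symptom threshold within a normal lifespan; symptoms arise in people who start with fewer neurons or lose them faster.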
Possible Causes
In the late 1990s, the news media excitedly reported that researchers had located a gene that causes Parkinson's disease. That report was misleading. The research had found certain families in which people sharing a particular gene all developed Parkinson's disease with onset before age 50 (Shimura et al., 2001). Since then, several other genes have been found that lead to early-onset Parkinson's disease (Bonifati et al., 2003; Singleton et al., 2003; Valente et al., 2004). However, none of these genes have any link to later-onset Parkinson's disease, which is far more common. One study examined Parkinson's patients who had twins. As shown in Figure 8.18, if you have a monozygotic (MZ) twin who develops early-onset Parkinson's disease, you are almost certain to get it too. In other words, early-onset Parkinson's disease has a strong genetic basis. However, if your twin develops Parkinson's
Figure 8.17 Connections from the substantia nigra: (a) normal and (b) in Parkinson’s disease Excitatory paths are shown in green; inhibitory are in red. The substantia nigra’s axons inhibit the putamen. Axon loss increases excitatory communication to the globus pallidus. The result is increased inhibition from the globus pallidus to the thalamus and decreased excitation from the thalamus to the cerebral cortex. People with Parkinson’s disease show decreased initiation of movement, slow and inaccurate movement, and psychological depression. (Source: Based on Wichmann, Vitek, & DeLong, 1995)
Figure 8.18 Probability of developing Parkinson's disease if you have a twin who developed the disease before or after age 50
Panel summaries: if one MZ twin gets Parkinson's disease before age 50, the other does too; if one DZ twin gets it before age 50, the other still has only a moderate probability; if one MZ twin gets the disease after age 50, the other twin has a moderate probability of getting it too; and if one DZ twin gets it after age 50, the other twin has that same moderate probability.
Having a monozygotic (MZ) twin develop Parkinson's disease before age 50 means that you are very likely to get it too. A dizygotic (DZ) twin who gets it before age 50 does not pose the same risk. Therefore, early-onset Parkinson's disease shows a strong genetic component. However, if your twin develops Parkinson's disease later (as is more common), your risk is the same regardless of whether you are a monozygotic or dizygotic twin. Therefore, late-onset Parkinson's disease has little or no heritability. (Source: Based on data of Tanner et al., 1999)
disease after age 50, your risk is lower, and it doesn't depend on whether your twin is monozygotic or dizygotic (Tanner et al., 1999). Equal concordance for both kinds of twins implies low heritability. That is, it implies that the family resemblance depends on shared environment rather than genes. However, this study had a small sample size. Furthermore, a study using brain scans found that many of the monozygotic twins without symptoms of Parkinson's disease did have indications of minor brain damage that could lead to symptoms later in life (Piccini, Burn, Ceravolo, Maraganore, & Brooks, 1999).

A later study examined the chromosomes of people in 174 families that included more than one person with Parkinson's disease. Researchers looked for genes that were more common in family members with Parkinson's disease than those without it. They found no gene that was consistently linked to the disease, but five genes were more common among people with Parkinson's (E. Martin et al., 2001; W. K. Scott et al., 2001). However, none of these genes had a powerful impact. For example, one gene was found in 82% of the people with Parkinson's disease and in 79% of those without it. Clearly, the great majority of people with this gene do not get Parkinson's disease. In short, genetic factors are only a small contributor to late-onset Parkinson's disease.

What environmental influences might be relevant? Sometimes Parkinson's disease results from exposure to toxins. The first solid evidence was discovered by accident (Ballard, Tetrud, & Langston, 1985). In northern California in 1982, several people aged 22 to 42 developed symptoms of Parkinson's disease after using a drug similar to heroin. Before the investigators could alert the community to the danger of this heroin substitute, many other users had developed symptoms ranging from mild to fatal (Tetrud, Langston, Garbe, & Ruttenber, 1989). The substance responsible for the symptoms was MPTP, a chemical that the body converts to MPP+, which accumulates in, and then destroys, neurons that release dopamine2 (Nicklas, Saporito, Basma, Geller, & Heikkila, 1992). Postsynaptic neurons compensate for the loss by increasing their number of dopamine receptors (Chiueh, 1988) (Figure 8.19). In many ways, the extra receptors help, but they also produce a jumpy overresponsiveness that creates additional problems (W. C. Miller & DeLong, 1988).

2 The full names of these chemicals are 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine and 1-methyl-4-phenylpyridinium ion. (Let's hear it for abbreviations!)

Figure 8.19 [Image not available due to copyright restrictions]

Figure 8.20 The chemical structures of MPPP, MPTP, MPP+, and paraquat
Exposure to paraquat and similar herbicides and pesticides may increase the risk of Parkinson's disease.

No one supposes that Parkinson's disease is often the result of using illegal drugs. A more likely hypothesis is that people are sometimes exposed to MPTP or similar chemicals in herbicides and pesticides (Figure 8.20), many of which can damage cells of the substantia nigra (Betarbet et al., 2000). For example, rats exposed to the pesticide rotenone develop a condition closely resembling human Parkinson's disease (Betarbet et al., 2000). However, if exposure to a toxin were the main cause of Parkinson's disease, we should expect to find great differences in the prevalence
between one geographical area and another. The more or less random distribution of the disease suggests that toxins are just one cause among others yet to be discovered (C. A. D. Smith et al., 1992).

What else might influence the risk of Parkinson's disease? Researchers have compared the lifestyles of people who did and didn't develop the disease. Two factors stand out consistently: cigarette smoking and coffee drinking. People who smoke cigarettes or drink coffee have less chance of developing Parkinson's disease (Hernán, Takkouche, Caamaño-Isorna, & Gestal-Otero, 2002). (Read that sentence again.) One study took questionnaire results from pairs of young adult twins and compared the results to medical records decades later. Cigarette smoking was about 70% as common in people who developed Parkinson's disease as in their twins who did not (Wirdefeldt, Gatz, Pawitan, & Pedersen, 2004). A study of U.S. adults compared coffee drinking in middle-aged adults to their medical histories later in life; drinking coffee decreased the risk of Parkinson's disease, especially for men (Ascherio et al., 2004). Needless to say, smoking cigarettes increases the risk of lung cancer and other diseases more than it decreases the risk of Parkinson's disease. Coffee has somewhat less benefit for decreasing Parkinson's disease, but it's safer than smoking. In contrast to the effects of tobacco, marijuana increases the risk of Parkinson's disease (Glass, 2001). Researchers do not yet know how any of these drugs alters the risk of Parkinson's disease.

In short, Parkinson's disease probably results from a mixture of causes. Perhaps what they have in common is damage to the mitochondria. When a neuron's mitochondria begin to fail—because of genes, toxins, infections, or whatever—a chemical called α-synuclein clots into clusters that damage neurons containing dopamine (Dawson & Dawson, 2003). Dopamine-containing neurons are evidently more vulnerable than most other neurons to damage from almost any metabolic problem (Zeevalk, Manzino, Hoppe, & Sonsalla, 1997).
STOP & CHECK
1. Do monozygotic twins resemble each other more than dizygotic twins do for early-onset Parkinson’s disease? For late-onset? What conclusion do these results imply? 2. How does MPTP exposure influence the likelihood of Parkinson’s disease? What are the effects of cigarette smoking? Check your answers on page 261.
L-Dopa Treatment
If Parkinson's disease results from a dopamine deficiency, then the goal of therapy should be to restore the missing dopamine. However, a dopamine pill would be ineffective because dopamine does not cross the blood-brain barrier. L-dopa, a precursor to dopamine, does cross the barrier. Taken as a daily pill, L-dopa reaches the brain, where neurons convert it to dopamine. L-dopa continues to be the main treatment for Parkinson's disease. However, L-dopa is disappointing in several ways. First, it is ineffective for some patients, especially those in the late stages of the disease. Second, it does not prevent the continued loss of neurons. In fact, there is some evidence that too much dopamine kills dopamine-containing cells (Weingarten & Zhou, 2001). For that reason, L-dopa could do harm as well as good. Third, L-dopa enters not only the brain cells that need extra dopamine but also others, producing unpleasant side effects such as nausea, restlessness, sleep problems, low blood pressure, repetitive movements, hallucinations, and delusions. Generally, the more severe the patient's symptoms, the more severe the side effects.
Therapies Other Than L-Dopa Given the limitations of L-dopa, researchers have sought alternatives and supplements. The following possibilities show promise (Dunnett & Björklund, 1999; Siderowf & Stern, 2003): • Antioxidant drugs to decrease further damage • Drugs that directly stimulate dopamine receptors • Neurotrophins to promote survival and growth of the remaining neurons • Drugs that decrease apoptosis (programmed cell death) of the remaining neurons • High-frequency electrical stimulation of the globus pallidus or the subthalamic nucleus (This procedure is especially effective for blocking tremor.) A potentially exciting strategy is still in the experimental stage but has been “in the experimental stage” for more than a quarter of a century already. In a pioneering study, M. J. Perlow and colleagues (1979) injected the chemical 6-OHDA into rats to make lesions in the substantia nigra of one hemisphere, producing Parkinson’s-type symptoms in the opposite side of the body. After the movement abnormalities stabilized, the experimenters removed the substantia nigra from rat fetuses and transplanted them into the damaged brains. The grafts survived in 29 of the 30 recipients, making synapses in varying numbers. Four weeks later, most recipients had recovered much of their normal movement. Control animals that suffered the same brain dam-
age without receiving grafts showed little or no behavioral recovery. If such surgery works for rats, might it also work for humans? The procedure itself is feasible. Perhaps because the blood-brain barrier protects the brain from foreign substances, the immune system is less active in the brain than elsewhere (Nicholas & Arnason, 1992), and physicians can give drugs to further suppress rejection of the transplanted tissue. However, only immature cells transplanted from a fetus can make connections, and simply making connections is not enough. In laboratory research, the recipient animal still has to relearn the behaviors dependent on those cells (Brasted, Watts, Robbins, & Dunnett, 1999). In effect, the animal has to practice using the transplanted cells.

Ordinarily, scientists test any experimental procedure extensively with laboratory animals before trying it on humans, but with Parkinson's disease, the temptation was too great. People in the late stages have little to lose and are willing to try almost anything. The obvious problem is where to get the donor tissue. Several early studies used tissue from the patient's own adrenal gland. Although that tissue is not composed of neurons, it produces and releases dopamine. Unfortunately, the adrenal gland transplants seldom produced much benefit (Backlund et al., 1985).

Another possibility is to transplant brain tissue from aborted fetuses. Fetal neurons transplanted into the brains of Parkinson's patients sometimes survive for years and do make synapses with the patient's own cells. However, the operation is difficult and expensive, requiring brain tissue from four to eight aborted fetuses. According to two studies, most patients show little or no benefit a year after surgery (Freed et al., 2001; Olanow et al., 2003). Patients with only mild symptoms showed a significant benefit, but that benefit consisted only of failing to deteriorate over the year, not actually improving (Olanow et al., 2003). At best, this procedure needs more research. One problem is that many transplanted cells do not survive or do not form effective synapses, especially in the oldest recipients. Aging brains produce smaller amounts of neurotrophins than younger brains do. Researchers have found improved success of brain transplants in aging rats if the transplant includes not only fetal neurons but also a source of neurotrophins (Collier, Sortwell, & Daley, 1999).

One way to decrease the need for aborted fetuses is to grow cells in tissue culture, genetically alter them so that they produce large quantities of L-dopa, and then transplant them into the brain (Ljungberg, Stern, & Wilkin, 1999; Studer, Tabar, & McKay, 1998). That idea is particularly attractive if the cells grown in tissue culture are stem cells, immature cells that are capable of differentiating into a wide variety of other cell types depending on where they are in the body. It may be possible to nurture a population of stem cells
that are capable of becoming dopamine-releasing neurons and then deposit them into damaged brain areas (Kim et al., 2002). However, so far this method has also produced only modest benefits (Lindvall, Kokaia, & Martinez-Serrano, 2004). The research on brain transplants has suggested yet another possibility for treatment. In several experiments, the transplanted tissue failed to survive, but the recipient showed behavioral recovery anyway. Evidently, the transplanted tissue releases neurotrophins that stimulate axon and dendrite growth in the recipient’s own brain (Bohn, Cupit, Marciano, & Gash, 1987; Dunnett, Ryan, Levin, Reynolds, & Bunch, 1987; Ensor, Morley, Redfern, & Miles, 1993). Further research has demonstrated that brain injections of neurotrophins can significantly benefit brain-damaged rats and monkeys, presumably by enhancing the growth of axons and dendrites (Eslamboli et al., 2005; Gash et al., 1996; Kolb, Côte, Ribeiro-da-Silva, & Cuello, 1997). Because neurotrophins do not cross the blood-brain barrier, researchers are developing novel ways to get them into the brain (Kordower et al., 2000). For the latest information about Parkinson’s disease, visit the website of the World Parkinson Disease Association: http://www.wpda.org/
STOP & CHECK
3. What is the likely explanation for how L-dopa relieves the symptoms of Parkinson’s disease? 4. In what ways is L-dopa treatment disappointing? 5. What are some other possible treatments? Check your answers on pages 261–262.
Huntington’s Disease Huntington’s disease (also known as Huntington disease or Huntington’s chorea) is a severe neurological disorder that strikes about 1 person in 10,000 in the United States (A. B. Young, 1995). Motor symptoms usually begin with arm jerks and then facial twitches; later, tremors spread to other parts of the body and develop into writhing (M. A. Smith, Brandt, & Shadmehr, 2000). (Chorea comes from the same root as choreography; the writhings of chorea sometimes resemble dancing.) Gradually, the twitches, tremors, and writhing interfere more and more with the person’s walking, speech, and other voluntary movements. The ability to learn and improve new movements is especially limited (Willingham et al., 1996). The disorder is associated with gradual, extensive brain damage, especially
Figure 8.21 Brain of a normal person (left) and a person with Huntington's disease (right)
The angle of cut through the normal brain makes the lateral ventricle look larger in this photo than it actually is. Even so, note how much larger it is in the patient with Huntington's disease. The ventricles expand because of the loss of neurons. (Photo courtesy of Robert E. Schmidt, Washington University)
in the caudate nucleus, putamen, and globus pallidus but also in the cerebral cortex (Tabrizi et al., 1999) (Figure 8.21). People with Huntington’s disease also suffer psychological disorders, including depression, memory impairment, anxiety, hallucinations and delusions, poor judgment, alcoholism, drug abuse, and sexual disorders ranging from complete unresponsiveness to indiscriminate promiscuity (Shoulson, 1990). The psychological disorders sometimes develop before the motor disorders, and occasionally, someone in the early stages of Huntington’s disease is misdiagnosed as having schizophrenia. Huntington’s disease most often appears between the ages of 30 and 50, although onset can occur at any time from early childhood to old age. Once the symptoms emerge, both the psychological and the motor symptoms grow progressively worse and culminate in death. The earlier the onset, the more rapid the deterioration. At this point, no treatment is effective at either controlling the symptoms or slowing the course of the disease. However, mouse research suggests that a stimulating environment can delay the onset of symptoms (van Dellen, Blakemore, Deacon, York, & Hannan, 2000).
Heredity and Presymptomatic Testing Huntington’s disease is controlled by an autosomal dominant gene (i.e., one not on the X or Y chromosome). As a rule, a mutant gene that causes the loss of a function is recessive. The fact that the Huntington’s
gene is dominant implies that it produces the gain of some undesirable function.

Imagine that as a young adult you learn that your mother or father has Huntington's disease. In addition to your grief about your parent, you know that you have a 50% chance of getting the disease yourself. Your uncertainty may make it difficult to decide whether to have children and perhaps even what career to choose. (For example, you might decide against a career that requires many years of education.) Would you want to know in advance whether or not you were going to get the disease?

Investigators worked for many years to discover an accurate presymptomatic test to identify who would develop the disease later. In the 1980s, researchers established that the gene for Huntington's disease is on chromosome number 4, and in 1993, they identified the gene itself (Huntington's Disease Collaborative Research Group, 1993). Now an examination of your chromosomes can reveal with almost perfect accuracy whether or not you will get Huntington's disease. Not everyone who is at high risk for the disease wants to take the test, but many do.

The critical area of the gene includes a sequence of bases C-A-G (cytosine, adenine, guanine), which is repeated 11 to 24 times in most people. That repetition produces a string of 11 to 24 glutamines in the resulting protein. People with up to 35 C-A-G repetitions are considered safe from Huntington's disease. Those with 36 to 38 might get it, but probably not until old age, if at all. People with 39 or more repetitions are likely to get the disease, unless they die of other causes earlier. The more C-A-G repetitions someone has, the earlier the probable onset of the disease, as shown in Figure 8.22 (Brinkman, Mezei, Theilmann, Almqvist, & Hayden, 1997). In people with more than 50 repeats, the symptoms are likely to begin even younger, although too few such cases have been found to present reliable statistics. In short, a chromosomal examination can predict not only whether a person will get Huntington's disease but also approximately when.

Figure 8.23 shows comparable data for Huntington's disease and seven other neurological disorders. Each of them is related to an unusually long sequence of C-A-G repeats in some gene. Note that in each case, the greater the number of C-A-G repeats, the earlier the probable onset of the disease (Gusella & MacDonald, 2000). Those with a smaller number will be older when they get the disease, if they get it at all. You will recall a similar fact about Parkinson's disease: Several genes have been linked to early-onset Parkinson's disease, but the late-onset condition is less predictable and appears to depend on environmental factors more than genes. As we shall see in later chapters, genetic factors are clearly important for early-onset Alzheimer's disease, alcoholism, depression, and schizophrenia. For people with later onset, the role of genetics is weaker
or less certain. In short, the results suggest a consistent pattern for many disorders: The earlier the onset, the greater the probability of a strong genetic influence. Even so, genes are not the only influence. Figures 8.22 and 8.23 show the means, but individuals vary. For example, among people with 48 repetitions of the Huntington's gene, the mean age of onset is about 30 years, but some develop symptoms before age 20 and others not until after 40 (U.S.–Venezuela Collaborative Research Project, 2004). Evidently, the outcome depends mostly on one gene, but other influences can modify it, including other genes and unidentified environmental factors.

Figure 8.22 Relationship between C-A-G repeats and age of onset of Huntington's disease
Examination of someone's chromosomes can determine the number of C-A-G repeats in the gene for the huntingtin protein. The greater the number of C-A-G repeats, the earlier the probable onset. People with 36–38 repeats may or may not get the disease or may get it only in old age. The ages presented here are, of course, means. For a given individual, a prediction can be made only as an approximation. (Source: Based on data of Brinkmann, Mezei, Theilmann, Almqvist, & Hayden, 1997)

Figure 8.23 Relationship between C-A-G repeats and age of onset of eight diseases
The x axis shows the number of C-A-G repeats; the y axis shows the mean age at onset of disease. The various lines represent Huntington's disease and seven others, including spinal and bulbar muscular dystrophy, dentatorubro-pallidoluysian dystrophy, and Machado-Joseph disease; the four unlabeled lines are for four different types of spinocerebellar ataxia. The key point is that for each disease, the greater the number of repeats, the earlier the probable onset of symptoms. (Source: Reproduced with permission from "Molecular genetics: Unmasking polyglutamine triggers in neurodegenerative disease," by J. F. Gusella and M. E. MacDonald, Fig 1, p. 111 in Neuroscience, 1, pp. 109–115, copyright 2000 Macmillan Magazines, Ltd.)

Identification of the gene for Huntington's disease led to the discovery of the protein that it codes, which has been designated huntingtin. Huntingtin occurs throughout the human body, although its mutant form produces no known harm outside the brain. Within the brain, it occurs inside neurons, not on their membranes. The abnormal form of the protein, with long chains of glutamine, aggregates into clusters that impair the neuron's mitochondria (Panov et al., 2002). As a result, the neuron becomes vulnerable to damage from a variety of possible influences. Also, cells with the abnormal huntingtin protein fail to release the neurotrophin BDNF, which they ordinarily release along with their neurotransmitter (Zuccato et al., 2001). The result is impaired functioning of still other cells.

Identifying the abnormal huntingtin protein and its cellular functions has enabled investigators to search
for drugs that reduce the harm. Researchers using animal models of Huntington's disease have found a number of drugs that show promise. Several of them block the glutamine chains from clustering (Sánchez, Mahlke, & Yuan, 2003; Zhang et al., 2005). Another drug interferes with the RNA that enables expression of the huntingtin gene (Harper et al., 2005). However, a drug that works well with animal models may or may not work with humans. In fact, some drugs work with one kind of animal model and not another (Beal & Ferrante, 2004). Nevertheless, the animal models at least identify candidate drugs that are worth trying with humans. We now see some realistic hope of drugs to prevent or alleviate Huntington's disease. For the latest information, here is the website of the Huntington's Disease Society of America: http://www.hdsa.org
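The presymptomatic test described in this module boils down to counting consecutive C-A-G triplets in the huntingtin gene and comparing the count with the cutoffs given earlier (up to 35 repeats considered safe, 36–38 uncertain, 39 or more likely to produce the disease). The sketch below illustrates that counting-and-classifying logic on a made-up sequence; a real test analyzes the actual gene on chromosome 4, not a toy string like this.

```python
import re

def longest_cag_run(dna: str) -> int:
    """Length (in repeats) of the longest run of consecutive CAG triplets."""
    runs = re.findall(r"(?:CAG)+", dna.upper())
    return max((len(r) // 3 for r in runs), default=0)

def interpret(repeats: int) -> str:
    # Thresholds as described in the text.
    if repeats <= 35:
        return "considered safe from Huntington's disease"
    if repeats <= 38:
        return "might develop the disease, probably not until old age, if at all"
    return "likely to develop the disease; more repeats predict earlier onset"

sample = "TTG" + "CAG" * 42 + "GATC"   # hypothetical sequence containing 42 repeats
n = longest_cag_run(sample)
print(n, "repeats:", interpret(n))
```

The classification step follows the module's rules directly; the earlier-onset trend for higher repeat counts (Figure 8.22) would require the population data themselves and is not modeled here.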
STOP & CHECK
6. What is a presymptomatic test? 7. What procedure enables physicians to predict who will or will not get Huntington’s disease and to estimate the age of onset? Check your answers on page 262.
Module 8.3 In Closing: Heredity and Environment in Movement Disorders Parkinson’s disease and Huntington’s disease show that genes influence behavior in different ways. Someone who examines the chromosomes can predict almost certainly who will and who will not develop Huntington’s disease and with moderate accuracy predict when. A gene has also been identified for earlyonset Parkinson’s disease, but for the late-onset version, environmental influences appear to be more important. In later chapters, especially Chapter 15, we shall discuss other instances in which genes increase the risk of certain disorders, but we will not encounter anything with such a strong heritability as Huntington’s disease.
Summary 1. Parkinson’s disease is characterized by impaired initiation of activity, slow and inaccurate movements, tremor, rigidity, depression, and cognitive deficits. It is associated with the degeneration of dopamine-
containing axons from the substantia nigra to the caudate nucleus and putamen. (p. 254) 2. Early-onset Parkinson’s disease has a strong hereditary basis, and the responsible gene has been identified. However, heredity plays only a small role in the ordinary form of Parkinson’s disease, with onset after age 50. (p. 254) 3. The chemical MPTP selectively damages neurons in the substantia nigra and leads to the symptoms of Parkinson’s disease. Some cases of Parkinson’s disease may result in part from exposure to toxins. (p. 256) 4. The most common treatment for Parkinson’s disease is L-dopa, which crosses the blood-brain barrier and enters neurons that convert it into dopamine. However, the effectiveness of L-dopa varies from one patient to another and is sometimes disappointing; it also produces unwelcome side effects. Many other treatments are in use or at least in the experimental stage. (p. 257) 5. Huntington’s disease is a hereditary condition marked by deterioration of motor control as well as depression, memory impairment, and other cognitive disorders. (p. 258) 6. By examining chromosome 4, physicians can determine whether someone is likely to develop Huntington’s disease later in life. The more C-A-G repeats in the gene, the earlier the likely onset of symptoms. (p. 259) 7. The gene responsible for Huntington’s disease alters the structure of a protein, known as huntingtin. The altered protein interferes with functioning of the mitochondria. (p. 260)
Answers to STOP & CHECK
Questions 1. Monozygotic twins resemble each other more than dizygotic twins do for early-onset Parkinson’s disease but not for late-onset. The conclusion is that early-onset Parkinson’s disease is highly heritable and late-onset is not. (p. 257) 2. Exposure to MPTP can induce symptoms of Parkinson’s disease. Cigarette smoking is correlated with decreased prevalence of the disease. (p. 257) 3. L-dopa enters the brain, where neurons convert it to dopamine, thus increasing the supply of a depleted neurotransmitter. (p. 258) 4. L-dopa is ineffective for some people and has only limited benefits for most others. It does not stop the loss of neurons. For people with severe Parkinson’s
disease, L-dopa produces fewer benefits and more severe side effects. (p. 258)
5. Other possible treatments include antioxidants, drugs that directly stimulate dopamine receptors, neurotrophins, drugs that decrease apoptosis, high-frequency electrical stimulation of the globus pallidus, and transplants of neurons from a fetus. (p. 258)
6. A presymptomatic test is given to people who do not yet show symptoms of a condition to predict who will eventually develop it. (p. 261)
7. Physicians can examine human chromosome 4. In one identified gene, they can count the number of consecutive repeats of the combination C-A-G. If the number is fewer than 36, the person will not develop Huntington's disease. For repeats of 36 or more, the larger the number, the more certain the person is to develop the disease and the earlier the probable age of onset. (p. 261)
Thought Questions 1. Haloperidol is a drug that blocks dopamine synapses. What effect would it be likely to have in someone suffering from Parkinson’s disease? 2. Neurologists assert that if people lived long enough, sooner or later everyone would develop Parkinson’s disease. Why?
Chapter Ending
Key Terms and Activities

Terms
aerobic (p. 234)
anaerobic (p. 234)
anosognosia (p. 247)
antagonistic muscles (p. 232)
Babinski reflex (p. 236)
ballistic movement (p. 238)
basal ganglia (caudate nucleus, putamen, globus pallidus) (p. 250)
cardiac muscle (p. 232)
central pattern generator (p. 238)
cerebellar cortex (p. 249)
dorsolateral tract (p. 243)
extensor (p. 232)
fast-twitch fiber (p. 234)
flexor (p. 232)
Golgi tendon organ (p. 236)
grasp reflex (p. 236)
huntingtin (p. 260)
Huntington's disease (p. 258)
L-dopa (p. 257)
mirror neurons (p. 242)
motor program (p. 238)
MPTP, MPP+ (p. 256)
muscle spindle (p. 235)
myasthenia gravis (p. 234)
neuromuscular junction (p. 232)
nuclei of the cerebellum (p. 250)
parallel fibers (p. 250)
Parkinson's disease (p. 254)
posterior parietal cortex (p. 244)
prefrontal cortex (p. 245)
premotor cortex (p. 245)
presymptomatic test (p. 259)
primary motor cortex (p. 241)
proprioceptor (p. 235)
Purkinje cell (p. 250)
readiness potential (p. 246)
red nucleus (p. 243)
reflex (p. 236)
rooting reflex (p. 236)
skeletal muscle (or striated muscle) (p. 232)
slow-twitch fiber (p. 234)
smooth muscle (p. 232)
stem cell (p. 258)
stretch reflex (p. 235)
supplementary motor cortex (p. 245)
ventromedial tract (p. 243)
vestibular nucleus (p. 243)
Suggestions for Further Reading
Cole, J. (1995). Pride and a daily marathon. Cambridge, MA: MIT Press. Biography of a man who lost his sense of touch and proprioception from the neck down and eventually learned to control his movements strictly by vision.
Klawans, H. L. (1996). Why Michael couldn't hit. New York: W. H. Freeman. A collection of fascinating sports examples related to the brain and its disorders.
Lashley, K. S. (1951). The problem of serial order in behavior. In L. A. Jeffress (Ed.), Cerebral mechanisms in behavior (pp. 112–136). New York: Wiley. One of the true classic articles in psychology; a thought-provoking appraisal of what a theory of movement should explain.

Websites to Explore
You can go to the Biological Psychology Study Center and click these links. While there, you can also check for suggested articles available on InfoTrac College Edition. The Biological Psychology Internet address is:
http://psychology.wadsworth.com/book/kalatbiopsych9e/
View an animation of the dorsolateral and ventromedial tracts.
Myasthenia Gravis Links: http://pages.prodigy.net/stanley.way/myasthenia/
Huntington's Disease Society of America: http://www.hdsa.org
World Parkinson Disease Association: http://www.wpda.org/

Exploring Biological Psychology CD
The Withdrawal Reflex (animation) — This animation illustrates one example of a reflex.
The Crossed Extensor Reflex (animation)
Major Motor Areas (animation)
Critical Thinking (essay questions)
Chapter Quiz (multiple-choice questions)

ThomsonNOW
http://www.thomsonedu.com
Go to this site for the link to ThomsonNOW, your one-stop study shop. Take a Pre-Test for this chapter, and ThomsonNOW will generate a Personalized Study Plan based on your test results. The Study Plan will identify the topics you need to review and direct you to online resources to help you master these topics. You can then take a Post-Test to help you determine the concepts you have mastered and what you still need work on.
Image not available due to copyright restrictions
Wakefulness and Sleep
9
Chapter Outline
Module 9.1 Rhythms of Waking and Sleeping
Endogenous Cycles
Mechanisms of the Biological Clock
Setting and Resetting the Biological Clock
In Closing: Sleep–Wake Cycles
Summary
Answers to Stop & Check Questions
Thought Questions
Module 9.2 Stages of Sleep and Brain Mechanisms
The Stages of Sleep
Paradoxical or REM Sleep
Brain Mechanisms of Wakefulness and Arousal
Brain Function in REM Sleep
Sleep Disorders
In Closing: Stages of Sleep
Summary
Answers to Stop & Check Questions
Thought Question
Module 9.3 Why Sleep? Why REM? Why Dreams?
Functions of Sleep
Functions of REM Sleep
Biological Perspectives on Dreaming
In Closing: Our Limited Self-Understanding
Summary
Answers to Stop & Check Questions
Thought Question
Terms
Suggestions for Further Reading
Websites to Explore
Exploring Biological Psychology CD
ThomsonNOW

Main Ideas
1. Wakefulness and sleep alternate on a cycle of approximately 24 hours. The brain itself generates this cycle.
2. Sleep progresses through various stages, which differ in brain activity, heart rate, and other aspects. A special type of sleep, known as paradoxical or REM sleep, is light in some ways and deep in others.
3. Areas in the brainstem and forebrain control arousal and sleep. Localized brain damage can produce prolonged sleep or wakefulness.
4. People have many reasons for failing to sleep well enough to feel rested the following day.
5. We need sleep and REM sleep, although much about their functions remains uncertain.

Every multicellular animal that we know about has daily rhythms of wakefulness and sleep, and if we are deprived of sleep, we suffer. But if life evolved on another planet with different conditions, could animals evolve life without a need for sleep? Imagine a planet that doesn't rotate on its axis. Some animals evolve adaptations to live in the light area, others in the dark area, and still others in the twilight zone separating light from dark. There would be no need for any animal to alternate active periods with inactive periods on any fixed schedule and perhaps no need at all for prolonged inactive periods. If you were the astronaut who discovered these nonsleeping animals, you might be surprised. Now imagine that astronauts from that planet set out on their first voyage to Earth. Imagine their surprise to discover animals like us with long inactive periods resembling death. To someone who hadn't seen sleep before, it would seem strange and mysterious indeed. For the purposes of this chapter, let us adopt their perspective and ask why animals as active as we are spend one-third of our lives doing so little.
Opposite: Rock hyraxes at a national park in Kenya. Source: © Norbert Wu
Module 9.1
Rhythms of Waking and Sleeping
You are, I suspect, not particularly surprised to learn that your body spontaneously generates its own rhythm of wakefulness and sleep. Psychologists of an earlier era, however, considered that idea revolutionary. When behaviorism dominated experimental psychology from about the 1920s through the 1950s, many psychologists believed that any behavior could be traced to external stimulation. For example, alternation between wakefulness and sleep must depend on something in the outside world, such as the cycle of sunrise and sunset or temperature fluctuations. The research of Curt Richter (1922) and others implied that the body generates its own cycles of activity and inactivity. Gradually, the evidence became stronger that animals generate approximately 24-hour cycles of wakefulness and sleep even in an environment as constant as anyone could make it. The idea of self-generated wakefulness and sleep was an important step toward viewing animals as active producers of behaviors.
Endogenous Cycles An animal that produced its behavior entirely in response to current stimuli would be at a serious disadvantage; in many cases, an animal has to prepare for changes in sunlight and temperature before they occur. For example, migratory birds start flying toward their winter homes before their summer territory becomes too cold for survival. A bird that waited for the first frost would be in serious trouble. Similarly, squirrels begin storing nuts and putting on extra layers of fat in preparation for winter long before food becomes scarce. Animals’ readiness for a change in seasons comes partly from internal mechanisms. For example, several cues tell a migratory bird when to fly south for the winter, but after it reaches the tropics, no external cues tell it when to fly north in the spring. (In the tropics, the temperature doesn’t change much from season to season, and neither does the length of day or night.) Nevertheless, it flies north at the right time. Even if it is kept in a cage with no cues to the season, it becomes restless in the spring, and if it is released, it flies north
(Gwinner, 1986). Evidently, the bird’s body generates a rhythm, an internal calendar, that prepares it for seasonal changes. We refer to that rhythm as an endogenous circannual rhythm. (Endogenous means “generated from within.” Circannual comes from the Latin words circum, for “about,” and annum, for “year.”) Similarly, all studied animals produce endogenous circadian rhythms, rhythms that last about a day.1 (Circadian comes from circum, for “about,” and dies, for “day.”) Our most familiar endogenous circadian rhythm controls wakefulness and sleepiness. If you go without sleep all night—as most college students do, sooner or later—you feel sleepier and sleepier as the night goes on, until early morning. But as morning arrives, you actually begin to feel less sleepy. Evidently, your urge to sleep depends largely on the time of day, not just how recently you have slept (Babkoff, Caspy, Mikulincer, & Sing, 1991). Figure 9.1 represents the activity of a flying squirrel kept in total darkness for 25 days. Each horizontal line represents one 24-hour day. A thickening in the line represents a period of activity by the animal. Even in this unchanging environment, the animal generates a regular rhythm of activity and sleep. The self-generated cycle may be slightly shorter than 24 hours, as in Figure 9.1, or slightly longer depending on whether the environment is constantly light or constantly dark and on whether the species is normally active in the light or in the dark (Carpenter & Grossberg, 1984). The cycle may also vary from one individual to another, partly for genetic reasons. Nevertheless, the rhythm is highly consistent for a given individual in a given environment, even if the environment provides no clues to time. Mammals, including humans, have circadian rhythms in their waking and sleeping, frequency of eating and drinking, body temperature, secretion of hormones, volume of urination, sensitivity to drugs, and other variables. For example, although we ordinarily think of human body temperature as 37°C, normal tem-
1 It would be interesting to know about a few species that have not been studied, such as fish that live deep in the sea, where light never reaches. Do they also have 24-hour cycles of activity and sleep?

Figure 9.1 Activity record of a flying squirrel kept in constant darkness
The thickened segments indicate periods of activity as measured by a running wheel. Note that the free-running activity cycle lasts slightly less than 24 hours, so both the waking period and the sleep period start earlier each day than the last. (Source: Modified from "Phase Control of Activity in a Rodent," by P. J. DeCoursey, Cold Spring Harbor Symposia on Quantitative Biology, 1960, 25:49–55. Reprinted by permission of Cold Spring Harbor and P. J. DeCoursey)

Figure 9.2 Mean rectal temperatures for nine adults
Body temperature reaches its low for the day about 2 hours after sleep onset; it reaches its peak about 6 hours before sleep onset. (Source: From "Sleep-onset insomniacs have delayed temperature rhythms," by M. Morris, L. Lack, and D. Dawson, Sleep, 1990, 13, 1–14. Reprinted by permission)
nants of circadian rhythm type is age. When you were a young child, you were almost certainly a “morning” type; you went to bed early and woke up early. As you entered adolescence, you started staying up later and waking up later, at least on weekends and vacations, when you had the opportunity. Most teenagers qualify either as “evening” types or as intermediates. The percentage of “evening” types increases until about age 20 and then starts a steady decrease (Roenneberg et al., 2004). Do people older than 20 learn to go to bed earlier because they have jobs that require them to get up early? Two facts point to a biological explanation rather than an explanation in terms of learning. First, in Figure 9.3, note how the shift continues gradually over decades. If people were simply making a learned adjustment to their jobs, we might expect a sudden shift followed by later stability. Second, a similar trend occurs in rats: Older rats reach their best performance shortly after awakening, whereas younger rats tend to improve performance as the day progresses (Winocur & Hasher, 1999, 2004).
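The "middle of sleep" measure used in Figure 9.3 is simple clock arithmetic: the midpoint between falling asleep and waking on free days. The following sketch (Python; the function name and "HH:MM" time format are illustrative choices, not part of the original text) reproduces the example above.

```python
from datetime import datetime, timedelta

def mid_sleep(bedtime: str, waketime: str) -> str:
    """Return the clock time halfway through a sleep episode.

    Times are 24-hour "HH:MM" strings; a wake time at or before the
    bedtime is assumed to fall on the following day.
    """
    fmt = "%H:%M"
    start = datetime.strptime(bedtime, fmt)
    end = datetime.strptime(waketime, fmt)
    if end <= start:                      # sleep episode crosses midnight
        end += timedelta(days=1)
    middle = start + (end - start) / 2
    return middle.strftime("%H:%M")

# The text's example: sleeping from 1 A.M. to 9 A.M. gives a mid-sleep of 5 A.M.
print(mid_sleep("01:00", "09:00"))   # -> 05:00
```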
STOP & CHECK

1. If we wanted to choose people for a job that requires sometimes working without sleep, how could we quickly determine which ones were probably best able to tolerate sleep deprivation?

Check your answer on page 274.
Figure 9.3 Age differences in circadian rhythms. People of various ages identified the time of the middle of their sleep (y-axis, e.g., 3 A.M., 4 A.M., or 5 A.M.) on days when they had no obligations; separate curves are plotted for males and females from about age 10 to age 60 (x-axis). People around age 20 were the most likely to go to bed late and wake up late. (Source: Reprinted from T. Roenneberg et al., "A marker for the end of adolescence," Current Biology, 14, R1038–R1039, Figure 1, copyright 2004, with permission from Elsevier.)
Duration of the Human Circadian Rhythm

It might seem simple to determine the duration of the human circadian rhythm: Put people in an environment with no cues to time and observe their waking–sleeping schedule. However, the results depend on the amount of light (Campbell, 2000). Under constant bright lights, people have trouble sleeping, they complain about the experiment, and their rhythms run faster than 24 hours. In constant darkness, they have trouble waking up, again they complain about the experiment, and their rhythms run slower. In several studies, people were allowed to turn on bright lights whenever they chose to be awake and turn them off when they wanted to sleep. Under these conditions, most people followed a cycle closer to 25 than to 24 hours a day. The problem, which experimenters did not realize at first, was that bright light late in the day lengthens the circadian rhythm. A different way to run the experiment is to provide light and darkness on a cycle that people cannot follow.
Researchers already knew that most people can adjust to a 23- or 25-hour day but not to a 22- or 28-hour day (Folkard, Hume, Minors, Waterhouse, & Watson, 1985; Kleitman, 1963). Later researchers kept healthy adults in rooms with an artificial 28-hour day. None of them could, in fact, synchronize to that schedule; they all therefore produced their own self-generated rhythms of alertness and body temperature. Those rhythms varied among individuals, with a mean of 24.2 hours (Czeisler et al., 1999). Yet another approach is to examine people living under unusual conditions. Naval personnel on U.S. nuclear-powered submarines are cut off from sunlight for months at a time, living under faint artificial light. In many cases, they have been asked to live on a schedule of 6 hours of work alternating with 12 hours of rest. Even though they sleep (or try to sleep) on this 18-hour schedule, their bodies generate rhythms of alertness and body chemistry that average about 24.3 to 24.4 hours (Kelly et al., 1999). In short, the human circadian clock generates a rhythm slightly longer than 24 hours when nothing resets it.
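To see what such a free-running period implies, a short sketch (Python; purely illustrative, using the period estimates from the studies cited above) computes how far an unreset internal clock would drift from external 24-hour time.

```python
def drift_after(days: int, internal_period_h: float = 24.2) -> float:
    """Hours by which a free-running internal clock lags 24-hour clock time after `days` days."""
    return days * (internal_period_h - 24.0)

# A 24.2-hour internal period slips 0.2 hours (12 minutes) per day relative to the sun,
# so after 30 days without any zeitgeber it would run about 6 hours late.
print(drift_after(30))        # -> 6.0 hours
print(drift_after(30, 24.4))  # submarine estimate -> 12.0 hours
```

This is why daily resetting by light matters: even a small mismatch accumulates quickly.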
Mechanisms of the Biological Clock

What kind of biological clock within our body generates our circadian rhythm? Curt Richter (1967) introduced the concept that the brain generates its own rhythms—that is, a biological clock—and he reported that the biological clock is insensitive to most forms of interference. Blind or deaf animals generate nearly normal circadian rhythms, although they slowly drift out of phase with the external world. The circadian rhythm is surprisingly steady despite food or water deprivation, x-rays, tranquilizers, alcohol, anesthesia, lack of oxygen, most kinds of brain damage, or the removal of hormonal organs. Even an hour or so of induced hibernation often fails to reset the biological clock (Gibbs, 1983; Richter, 1975). Evidently, the biological clock is a hardy, robust mechanism.

The Suprachiasmatic Nucleus (SCN)
The surest way to disrupt the biological clock is to damage an area of the hypothalamus called the suprachiasmatic (soo-pruh-kie-as-MAT-ik) nucleus, abbreviated SCN. It gets its name from its location just above the optic chiasm (Figure 9.4). The SCN provides the main control of the circadian rhythms for sleep and temperature (Refinetti & Menaker, 1992). After damage to the SCN, the body’s rhythms are less consistent and no longer synchronized to environmental patterns of light and dark. The SCN generates circadian rhythms itself in a genetically controlled, unlearned manner. If SCN neurons are disconnected from the rest of the brain or removed from the body and maintained in tissue culture, they continue to produce a circadian rhythm of action potentials (Earnest, Liang, Ratcliff, & Cassone, 1999; Inouye & Kawamura, 1979). Even a single iso-
lated SCN cell can maintain a moderately steady circadian rhythm, but cells communicate with one another to sharpen the accuracy of the rhythm, partly by neurotransmitters and partly by electrical synapses (Long, Jutras, Connors, & Burwell, 2005; Yamaguchi et al., 2003).

Figure 9.4 The suprachiasmatic nucleus (SCN) of rats and humans. The SCN is located at the base of the brain, just above the optic chiasm, which has torn off in these coronal sections through the plane of the anterior hypothalamus. Each rat was injected with radioactive 2-deoxyglucose, which is absorbed by the most active neurons; a high level of absorption of this chemical produces a dark appearance on the slide. Note that the level of activity in SCN neurons is much higher in section (a), in which the rat was injected during the day, than in section (b), in which the rat received the injection at night. (c) A sagittal section through a human brain showing the location of the SCN and the pineal gland. (Source: (a) and (b) W. J. Schwartz & Gainer, 1977)
One group of experimenters discovered hamsters with a mutant gene that makes the SCN produce a 20-hour instead of a 24-hour rhythm (Ralph & Menaker, 1988). The researchers surgically removed the SCN from adult hamsters and then transplanted SCN tissue from hamster fetuses into the adults. When they transplanted SCN tissue from fetuses with a 20-hour rhythm, the recipients produced a 20-hour rhythm. When they transplanted tissue from fetuses with a 24-hour rhythm, the recipients produced a 24-hour rhythm (Ralph, Foster, Davis, & Menaker, 1990). That is, the rhythm followed the pace of the donors, not the recipients, so again, the results show that the rhythm comes from the SCN itself.
The Biochemistry of the Circadian Rhythm

Research on the mechanism of circadian rhythms began with insects, whose genetics are easier to explore because they reproduce in weeks rather than months or years. Studies on the fruit fly Drosophila discovered
genes that generate a circadian rhythm (Liu et al., 1992; Sehgal, Ousley, Yang, Chen, & Schotland, 1999). Two genes, known as period (abbreviated per) and timeless (tim), produce the proteins Per and Tim. Those proteins start in small amounts early in the morning and increase during the day. By evening, they reach a high level that makes the fly sleepy; that high level also feeds back to the genes to shut them down. During the night, while the genes no longer produce Per or Tim, their concentration declines until the next morning, when the cycle begins anew. When the Per and Tim levels are high, they interact with a protein called Clock to induce sleepiness. When they are low, the result is wakefulness. Furthermore, a pulse of light during the night inactivates the Tim protein, so extra light during the evening decreases sleepiness and resets the biological clock. Figure 9.5 summarizes this feedback mechanism. Why do we care about flies? The answer is that after researchers understood the mechanism in flies, they found virtually the same genes and proteins in mammals (Reick, Garcia, Dudley, & McKnight, 2001; Zheng et al., 1999). The mechanisms are similar but not identical across species. For example, light directly alters the Tim protein in flies (Shearman et al., 2000), but in mammals, a pulse of light acts by altering input to the SCN, which then alters its release of Tim (Crosio, Cermakian, Allis, & Sassone-Corsi, 2000). In any case, the Per and Tim proteins increase the activity of cer-
tain kinds of neurons in the SCN (Kuhlman, Silver, LeSauter, Bult-Ito, & McMahon, 2003).

Figure 9.5 Feedback between proteins and genes to control sleepiness. The figure plots the concentration of Tim and Per and the resulting behavior (wakefulness or sleep) from sunset through the night to sunrise, for a normal night and for a night with a pulse of bright light. In fruit flies (Drosophila), the Tim and Per proteins accumulate during the day. When they reach a high level, they induce sleepiness and shut off the genes that produce them. When their levels decline sufficiently, wakefulness returns and so does the gene activity. A pulse of light during the night breaks down the Tim protein, thus increasing wakefulness and resetting the circadian rhythm; bright light late at night phase-advances the rhythm.
Understanding these mechanisms helps make sense of some unusual sleep disorders. Mice with damage to their clock gene sleep less than normal (Naylor et al., 2000), and some cases of decreased sleep in humans presumably have the same cause. Also, some people with a mutation in their per gene have odd circadian rhythms: Their biological clock runs faster than 24 hours (C. R. Jones et al., 1999), so they consistently get sleepy early in the evening and awaken early in the morning (Toh et al., 2001; Xu et al., 2005). Most people on vacation with no obligations say, "Oh, good! I can stay up late and then sleep late tomorrow morning!" People with the altered per gene say, "Oh, good! I can go to bed even earlier than usual and wake up really early tomorrow!" As with other sleep disorders, most people with this gene suffer from depression (Xu et al., 2005). As we shall see again in Chapter 15, sleep difficulties and depression are closely linked.
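The negative-feedback loop described above (and diagrammed in Figure 9.5) can be made concrete with a toy simulation. The sketch below (Python) is not a model from the research literature; the rates, thresholds, and light-pulse effect are arbitrary illustrative assumptions chosen only to show how rising protein levels can shut off their own synthesis and how a nighttime light pulse resets the cycle.

```python
def simulate(hours=48.0, dt=0.1, light_pulse_at=None):
    """Toy relaxation oscillator loosely patterned on the Per/Tim feedback loop.

    Gene activity shuts off when protein is high and resumes when protein is
    low (negative feedback); protein decays continuously; high protein levels
    correspond to sleepiness; a brief "light pulse" destroys most of the protein.
    All numbers are arbitrary illustrative choices, not measured values.
    """
    protein, genes_on, history = 0.3, True, []
    for step in range(int(hours / dt)):
        t = step * dt
        if light_pulse_at is not None and abs(t - light_pulse_at) < dt / 2:
            protein *= 0.1                      # light breaks down the protein (like Tim)
        synthesis = 0.12 if genes_on else 0.0   # production only while the genes are active
        protein += (synthesis - 0.08 * protein) * dt
        if protein > 1.0:                       # high level feeds back to shut the genes off
            genes_on = False
        elif protein < 0.3:                     # low level releases the genes again
            genes_on = True
        history.append((t, protein, "sleepy" if protein > 0.65 else "wakeful"))
    return history

# Without a pulse, protein rises and falls with a period on the order of a day;
# a pulse of light at hour 30 knocks the level down and phase-shifts the cycle.
for t, p, state in simulate(light_pulse_at=30.0)[::60]:
    print(f"t={t:5.1f} h  protein={p:4.2f}  {state}")
```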
Melatonin

The SCN regulates waking and sleeping by controlling activity levels in other brain areas, including the pineal gland (PIN-ee-al; see Figure 9.4), an endocrine gland located just posterior to the thalamus (Aston-Jones, Chen, Zhu, & Oshinsky, 2001; von Gall et al., 2002). The pineal gland releases melatonin, a hormone that increases sleepiness. The human pineal gland secretes melatonin mostly at night, making us sleepy at that time. When people shift to a new time zone and start following a new schedule, they continue to feel sleepy at their old times until the melatonin rhythm shifts (Dijk & Cajochen, 1997). People who have pineal gland tumors sometimes stay awake for days at a time (Haimov & Lavie, 1996).

Melatonin secretion usually starts to increase about 2 or 3 hours before bedtime. Taking a melatonin pill in the evening has little effect on sleepiness because the pineal gland produces melatonin at that time anyway. However, people who take melatonin at other times become sleepy within 2 hours (Haimov & Lavie, 1996). Therefore, some people take melatonin pills when they travel to a new time zone or start a new work schedule and need to sleep at an unaccustomed time.

Melatonin also feeds back to reset the biological clock through its effects on receptors in the SCN (Gillette & McArthur, 1996). A moderate dose of melatonin (0.5 mg) in the afternoon phase-advances the clock; that is, it makes the person get sleepy earlier in the evening and wake up earlier the next morning. A single dose of melatonin in the morning has little effect (Wirz-Justice, Werth, Renz, Müller, & Kräuchi, 2002), although repeated morning doses can phase-delay the clock, caus-
ing the person to get sleepy later than usual at night and awaken later the next morning. Taking melatonin has become something of a fad. Melatonin is an antioxidant, so it has some health benefits (Reiter, 2000). Low pill doses (up to 0.3 mg/day) produce blood levels similar to those that occur naturally and therefore seem unlikely to do any harm. Larger doses seldom produce obvious side effects. However, for rats with movement problems, melatonin aggravates the problem still further (Willis & Armstrong, 1999). Long-term use also impairs animals’ reproductive fertility and, if taken during pregnancy, harms the development of the fetus (Arendt, 1997; Weaver, 1997). The long-term effects on humans are not known, but the cautious advice is, as with any medication, don’t take it unless you need it.
STOP & CHECK

2. What evidence indicates that humans have an internal biological clock?
3. What evidence indicates that the SCN produces the circadian rhythm itself?
4. How do the proteins Tim and Per relate to sleepiness in Drosophila?

Check your answers on page 274.
Setting and Resetting the Biological Clock

Our circadian rhythms have a period close to 24 hours, but they are hardly perfect. We have to readjust our internal workings daily to stay in phase with the outside world. On weekends, when most of us are freer to set our own schedules, we expose ourselves to lights, noises, and activity at night and then awaken late the next morning. By Monday morning, when the electric clock indicates 7 A.M., the biological clock within us says about 5 A.M., and we stagger off to work or school without much pep (Moore-Ede, Czeisler, & Richardson, 1983).

Although circadian rhythms persist in the absence of light, light is critical for periodically resetting them. Consider this analogy: I used to have a windup wristwatch that lost about 2 minutes per day, which would accumulate to an hour per month if I didn't reset it. It had a free-running rhythm of 24 hours and 2 minutes—that is, a rhythm that occurs when no stimuli reset or alter it. The circadian rhythm is similar to that wristwatch.
The stimulus that resets the circadian rhythm is referred to by the German term zeitgeber (TSITE-gay-ber), meaning "time-giver." Light is the dominant zeitgeber for land animals (Rusak & Zucker, 1979). (The tides are important for many marine animals.) Light is not our only zeitgeber; others include exercise (Eastman, Hoese, Youngstedt, & Liu, 1995), noises, meals, and the temperature of the environment (Refinetti, 2000). Even keeping an animal awake for a few hours in the middle of the night, by gentle handling, can shift the circadian rhythm (Antle & Mistlberger, 2000). However, these additional zeitgebers merely supplement or alter the effects of light; on their own, their effects are weak under most circumstances. For example, in a study of people working in Antarctica during the Antarctic winter, with no sunlight, each person generated his or her own free-running rhythm. Even though they were living together and trying to maintain a 24-hour rhythm, their bodies generated rhythms ranging from 24½ hours to more than 25 hours (Kennaway & Van Dorp, 1991). People living in the Scandinavian countries report high rates of insomnia and other sleep problems in the winter, when they see little or no sunlight (Ohayon & Partinen, 2002).

What about blind people, who must set their circadian rhythms by zeitgebers other than light? The results vary. Some do set their circadian rhythms by noise, temperature, activity, and other signals. However, others who are not sufficiently sensitive to these secondary zeitgebers produce free-running circadian rhythms that run a little longer than 24 hours. When those cycles are in phase with the person's activity schedule, all is well, but when they drift out of phase, the result is insomnia at night and sleepiness during the day (Sack & Lewy, 2001).

In one study of hamsters living under constant light, the SCN of the left hemisphere sometimes got out of phase with the one in the right hemisphere. Evidently, without an external zeitgeber, the two SCNs generated independent rhythms without synchronizing each other. As a result, these hamsters had two wakeful periods and two sleep periods every 24 hours (de la Iglesia, Meyer, Carpino, & Schwartz, 2000).
Jet Lag

A disruption of circadian rhythms due to crossing time zones is known as jet lag. Travelers complain of sleepiness during the day, sleeplessness at night, depression, and impaired concentration. All these problems stem from the mismatch between the internal circadian clock and external time (Haimov & Arendt, 1999).

Most of us find it easier to adjust to crossing time zones going west than going east. Going west, we stay awake later at night and then awaken late the next morning, already partly adjusted to the new schedule. That is, we phase-delay our circadian rhythms. Going east, we have to go to sleep earlier and awaken earlier; we phase-advance (Figure 9.6). Most people find it difficult to go to sleep before their body's usual time.

Adjusting to jet lag is more stressful for some people than for others. Stress elevates blood levels of the adrenal hormone cortisol, and many studies have shown that prolonged elevations of cortisol can lead to a loss of neurons in the hippocampus, a brain area important for memory. One study examined female flight attendants who had spent the previous 5 years making flights across seven or more time zones—such as Chicago to Italy—with mostly short breaks (fewer than 6 days) between trips. On average, they showed smaller than average volumes of the hippocampus and surrounding structures, and they showed some memory impairments (Cho, 2001). These results suggest a danger from repeated adjustments of the circadian rhythm, although the problem here could be air travel itself. (A good control group would have been flight attendants who flew long north–south routes.)
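The phase arithmetic behind jet lag is simple modular clock math. A minimal sketch (Python; the function name is hypothetical, and the New York to London example assumes the usual 5-hour time-zone difference shown in Figure 9.6) maps destination clock time onto the traveler's unadjusted body clock.

```python
def body_clock_hour(destination_hour: int, zones_east: int) -> int:
    """Hour (0-23) on the traveler's unadjusted body clock at the destination.

    `zones_east` is positive for eastward travel; eastward trips require
    phase-advancing by that many hours, westward trips require phase-delaying.
    """
    return (destination_hour - zones_east) % 24

# 7 A.M. in London after flying 5 zones east from New York feels like 2 A.M.
print(body_clock_hour(7, 5))   # -> 2
```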
Shift Work

People who sleep irregularly—such as pilots and truck drivers, medical interns, and shift workers in factories—find that their duration of sleep depends on what time they go to sleep. When they have to sleep in the morning or early afternoon, they sleep only briefly, even though they have been awake for 16 hours or more (Frese & Harwich, 1984; Winfree, 1983).
Figure 9.6 Jet lag. Eastern time is later than western time. People who travel five time zones east, as from New York to London, fall asleep on the plane and then must awaken when it is morning at their destination but still night back home. (a) Leave New York at 7 P.M. (b) Arrive in London at 7 A.M., which is 2 A.M. in New York.
People who work on a night shift, such as midnight to 8 A.M., sleep during the day. At least they try to. Even after months or years on such a schedule, many workers adjust incompletely. They continue to feel groggy on the job, they do not sleep soundly during the day, and their body temperature continues to peak when they are trying to sleep in the day instead of while they are working at night. In general, night-shift workers have more accidents than day-shift workers. Working at night does not reliably change the circadian rhythm because most buildings use artificial lighting in the range of 150–180 lux, which is only moderately effective in resetting the rhythm (Boivin, Duffy, Kronauer, & Czeisler, 1996). People adjust best to night work if they sleep in a very dark room during the day and work under very bright lights at night, comparable to the noonday sun (Czeisler et al., 1990).

How Light Resets the SCN

The SCN is located just above the optic chiasm. (Figure 9.4 shows the positions in the human brain; the relationship is similar in other mammals.) A small branch of the optic nerve, known as the retinohypothalamic path, extends directly from the retina to the SCN. Axons of that path alter the SCN's settings. Most of the input to that path, however, does not come from normal retinal receptors. Mice with genetic defects that destroy nearly all their rods and cones nevertheless reset their biological clocks in synchrony with the light (Freedman et al., 1999; Lucas, Freedman, Muñoz, Garcia-Fernández, & Foster, 1999). Also, consider blind mole rats (Figure 9.7). Their eyes are covered with folds of skin and fur; they have neither eye muscles nor a lens with which to focus an image. They have fewer than 900 optic nerve axons, as compared with 100,000 in hamsters. Even a bright flash of light evokes no startle response and no measurable change in brain activity. Nevertheless, light resets their circadian rhythms (de Jong, Hendriks, Sanyal, & Nevo, 1990). The surprising explanation is that the retinohypothalamic path to the SCN comes from a special population of retinal ganglion cells that have their own photopigment, called melanopsin, unlike the photopigments found in rods and cones (Hannibal, Hindersson, Knudsen, Georg, & Fahrenkrug, 2001; Lucas, Douglas, & Foster, 2001). These special ganglion cells respond directly to light and do not require any input from rods or cones (Berson, Dunn, & Takao, 2002). They are located mainly near the nasal side of the retina, not evenly throughout it (Visser, Beersma, & Daan, 1999). (That is, they see toward the periphery.) They respond to light slowly and turn off slowly when the light ceases (Berson et al., 2002). Therefore, they respond to the overall average amount of light rather than to instantaneous changes in light. The average intensity over a period of minutes or hours is exactly the information the SCN needs to gauge the time of day. Because these cells do not contribute to vision, they do not need to respond to momentary changes in light.

Apparently, however, these neurons are not the only input to the SCN. Mice lacking the gene for melanopsin still adjust their cycles to periods of light and dark, although not as well as normal mice do (Panda et al., 2002; Ruby et al., 2002). Evidently, the SCN can respond either to normal retinal input or to input from the ganglion cells containing melanopsin, and either can substitute for the other.
STOP & CHECK

5. What stimulus is the most effective zeitgeber for humans?
6. How does light reset the biological clock?

Check your answers on page 274.
Figure 9.7 A blind mole rat. Although blind mole rats are indeed blind in all other regards, they reset their circadian rhythms in response to light. (Photo: © Eviatar Nevo)

Module 9.1
In Closing: Sleep–Wake Cycles

Unlike an electric appliance that stays on until someone turns it off, the brain periodically turns itself on and off. Sleepiness is definitely not a voluntary or optional act. People who try to work when they are sleepy are prone to errors and injuries. Someone who sleeps well may not be altogether healthy or happy, but someone who consistently fails to get enough sleep is almost certainly headed for trouble.
Summary

1. Animals, including humans, have internally generated rhythms of activity lasting about 24 hours. (p. 266)
2. The suprachiasmatic nucleus (SCN), a part of the hypothalamus, generates the body's circadian rhythms for sleep and temperature. (p. 269)
3. The genes controlling the circadian rhythm are almost the same in mammals as in insects. Across species, certain proteins increase in abundance during the day and then decrease during the night. (p. 270)
4. The SCN controls the body's rhythm partly by directing the release of melatonin by the pineal gland. The hormone melatonin increases sleepiness; if given at certain times of the day, it can also reset the circadian rhythm. (p. 271)
5. Although the biological clock can continue to operate in constant light or constant darkness, the onset of light resets the clock. The biological clock can reset to match an external rhythm of light and darkness slightly different from 24 hours, but if the discrepancy exceeds about 2 hours, the biological clock generates its own rhythm instead of resetting. (p. 271)
6. It is easier for people to follow a cycle longer than 24 hours (as when traveling west) than to follow a cycle shorter than 24 hours (as when traveling east). (p. 272)
7. If people wish to work at night and sleep during the day, the best way to shift the circadian rhythm is to have bright lights at night and darkness during the day. (p. 273)
8. Light resets the biological clock partly by a branch of the optic nerve that extends to the SCN. Those axons originate from a special population of ganglion cells that respond directly to light, rather than relaying information from rods and cones synapsing onto them. (p. 273)

Answers to STOP & CHECK Questions

1. Use fMRI to measure brain activation. Those with the strongest responses tend to tolerate sleeplessness better than others. (p. 267)
2. People who have lived in an environment with a light–dark schedule much different from 24 hours fail to follow that schedule and instead become wakeful and sleepy on about a 24-hour basis. (p. 271)
3. SCN cells produce a circadian rhythm of activity even if they are kept in cell culture isolated from the rest of the body. (p. 271)
4. The proteins Tim and Per accumulate during the wakeful period. When they reach a high enough level, they trigger sleepiness and turn off the genes that produced them. Therefore, their levels decline until they reach a low enough level for wakefulness to begin anew. (p. 271)
5. Light is the most effective zeitgeber for humans and other land animals. (p. 273)
6. A branch of the optic nerve, the retinohypothalamic path, conveys information about light to the SCN. The axons comprising that path originate from special ganglion cells that respond to light by themselves, without needing input from rods or cones. (p. 273)

Thought Questions

1. Is it possible for the onset of light to reset the circadian rhythms of a person who is blind? Explain.
2. Why would evolution have enabled blind mole rats to synchronize their SCN activity to light, even though they cannot see well enough to make any use of the light?
3. If you travel across several time zones to the east and want to use melatonin to help reset your circadian rhythm, at what time of day should you take it? What if you travel west?
Module 9.2
Stages of Sleep and Brain Mechanisms

Suppose I buy a new radio. After I play it for 4 hours, it suddenly stops. I wonder whether the batteries are dead or whether the radio needs repair. Later, I discover that this radio always stops after playing for 4 hours but operates again a few hours later even without repairs or a battery change. I begin to suspect that the manufacturer designed it this way on purpose, perhaps to prevent me from listening to the radio all day. Now I want to find the device that turns it off whenever I play it for 4 hours. Notice that I am asking a new question. When I thought that the radio stopped because it needed repairs or new batteries, I did not ask which device turned it off. Similarly, if we think of sleep as something like the "off" state of a machine, we do not ask which part of the brain produces it. But if we think of sleep as a specialized state evolved to serve particular functions, we look for the mechanisms that regulate it.
The Stages of Sleep

Nearly all scientific advances come from new or improved measurements. Researchers did not even suspect that sleep has different stages until they accidentally measured them. The electroencephalograph (EEG), as described on page 107, records an average of the electrical potentials of the cells and fibers in the brain areas nearest each electrode on the scalp (Figure 9.8). That is, if half of the cells in some area increase their electrical potentials while the other half decrease, the EEG recording is flat. The EEG record rises or falls when cells fire in synchrony—doing the same thing at the same time. You might compare it to a record of the noise in a crowded football stadium: It shows only slight fluctuations until some event gets everyone yelling at once. The EEG provides an objective way for brain researchers to compare brain activity at different times of night. Figure 9.9 shows data from a polysomnograph, a combination of EEG and eye-movement records, for a male college student during various stages of sleep. Figure 9.9a presents a period of relaxed wakefulness for comparison. Note the steady series of alpha waves at a frequency of 8 to 12 per second. Alpha waves are characteristic of relaxation, not of all wakefulness.

In Figure 9.9b, sleep has just begun. During this period, called stage 1 sleep, the EEG is dominated by irregular, jagged, low-voltage waves. Overall brain activity is still fairly high but starting to decline. As Figure 9.9c shows, the most prominent characteristics of stage 2 are sleep spindles and K-complexes. A sleep spindle consists of 12- to 14-Hz waves during a burst that lasts at least half a second. Sleep spindles result from oscillating interactions between cells in the thalamus and the cortex. A K-complex is a sharp, high-amplitude wave. Sudden stimuli can evoke K-complexes during other stages of sleep (Bastien & Campbell, 1992), but they are most common in stage 2. In the succeeding stages of sleep, heart rate, breathing rate, and brain activity decrease, and slow, large-amplitude waves become more common (see Figures 9.9d and e).
Figure 9.8 (image not available due to copyright restrictions)
By stage 4, more than half the record includes large waves of at least a half-second duration. Stages 3 and 4 together constitute slow-wave sleep (SWS). Slow waves indicate that neuronal activity is highly synchronized. In stage 1 and in wakefulness, the cortex receives a great deal of input, much of it at high frequencies. Nearly all the neurons are active, but different populations of neurons are active at different times, so the EEG is full of short, rapid, choppy waves. By stage 4, however, sensory input to the cerebral cortex is greatly reduced, and the few remaining sources of input can synchronize many cells. As an analogy, imagine that the barrage of stimuli arriving at the brain is like thousands of rocks dropped into a pond over the course of a minute: The resulting waves largely cancel one another out, so the surface of the pond is choppy, with few large waves. By contrast, a single rock dropped into the pond produces fewer but larger waves, like those of stage 4 sleep.
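The stage 2 criteria described above (12- to 14-Hz bursts lasting at least half a second) translate naturally into a simple automated check. The following sketch (Python with NumPy and SciPy; the sampling rate, threshold, and function name are illustrative assumptions rather than a published spindle detector) flags candidate sleep spindles in a single-channel EEG trace.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def find_spindles(eeg, fs=200.0, band=(12.0, 14.0), min_dur=0.5, thresh_factor=2.0):
    """Return (start_s, end_s) intervals where 12-14 Hz amplitude stays high.

    eeg: 1-D array of EEG samples; fs: sampling rate in Hz. A segment counts
    as a candidate spindle if the band-limited envelope exceeds
    `thresh_factor` times its median for at least `min_dur` seconds.
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, eeg)              # keep only spindle-band activity
    envelope = np.abs(hilbert(filtered))        # instantaneous amplitude
    above = envelope > thresh_factor * np.median(envelope)

    spindles, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) / fs >= min_dur:
                spindles.append((start / fs, i / fs))
            start = None
    if start is not None and (len(above) - start) / fs >= min_dur:
        spindles.append((start / fs, len(above) / fs))
    return spindles

# Synthetic demo: 10 s of noise with a 13-Hz burst from 4.0 to 4.8 s.
fs = 200.0
t = np.arange(0, 10, 1 / fs)
eeg = 0.3 * np.random.randn(t.size)
burst = (t >= 4.0) & (t < 4.8)
eeg[burst] += 2.0 * np.sin(2 * np.pi * 13 * t[burst])
print(find_spindles(eeg, fs))   # expected: roughly [(4.0, 4.8)]
```

Real polysomnograph scoring is done over full-night, multi-channel records, but the same band-and-duration logic is the core of it.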
Figure 9.9 Polysomnograph records from a male college student. (a) Relaxed, awake. (b) Stage 1 sleep. (c) Stage 2 sleep, with a sleep spindle and a K-complex marked. (d) Stage 3 sleep. (e) Stage 4 sleep. (f) REM, or "paradoxical," sleep. A polysomnograph includes records of EEG, eye movements, and sometimes other data, such as muscle tension or head movements. For each of these records, the top line is the EEG from one electrode on the scalp; the middle line is a record of eye movements; and the bottom line is a time marker, indicating 1-second units. Note the abundance of slow waves in stages 3 and 4. (Source: Records provided by T. E. LeVere)

STOP & CHECK

1. What do long, slow waves on an EEG indicate?

Check your answer on page 285.

Paradoxical or REM Sleep

Many discoveries occur when researchers stumble upon something by accident and then notice that it might be important. In the 1950s, the French scientist Michel Jouvet was trying to test the learning abilities of cats after removal of the cerebral cortex. Because decorticate mammals are generally inactive, Jouvet recorded slight movements of the muscles and EEGs from the hindbrain. During periods of apparent sleep, the cats' brain activity was relatively high, but their neck muscles were completely relaxed. Jouvet (1960) then recorded the same phenomenon in normal, intact cats and named it paradoxical sleep because it is deep sleep in some ways and light in others. (The term paradoxical means "apparently self-contradictory.")
Meanwhile in the United States, Nathaniel Kleitman and Eugene Aserinsky were observing eye movements of sleeping people as a means of measuring depth of sleep, assuming that eye movements would stop during sleep. At first, they recorded only a few minutes of eye movements per hour because the recording paper was expensive and they did not expect to see anything interesting in the middle of the night anyway. When they occasionally found periods of eye movements in people who had been asleep for hours, the investigators assumed that something was wrong with their machines. Only after repeated careful measurements did they conclude that periods of rapid eye movements do exist during sleep (Dement, 1990). They called these periods rapid eye movement (REM) sleep (Aserinsky & Kleitman, 1955; Dement & Kleitman, 1957a) and soon realized that REM sleep was synonymous with what Jouvet called paradoxical sleep. Researchers use the term REM sleep when referring to humans; most prefer the term paradoxical sleep for nonhumans because many species lack eye movements. During paradoxical or REM sleep, the EEG shows irregular, low-voltage fast waves that indicate increased neuronal activity; in this regard, REM sleep is light. However, the postural muscles of the body, such as those that support the head, are more relaxed during
REM than in other stages; in this regard, REM is deep sleep. REM is also associated with erections in males and vaginal moistening in females. Heart rate, blood pressure, and breathing rate are more variable in REM than in stages 2 through 4. In short, REM sleep combines deep sleep, light sleep, and features that are difficult to classify as deep or light. Consequently, it is best to avoid the terms deep sleep and light sleep. In addition to its steady characteristics, REM sleep has intermittent characteristics such as facial twitches and eye movements, as shown in Figure 9.9(f). The EEG record is similar to that for stage 1 sleep, but notice the difference in eye movements. The stages other than REM are known as non-REM (NREM) sleep.

Anyone who falls asleep first enters stage 1 and then slowly progresses through stages 2, 3, and 4 in order, although loud noises or other intrusions can interrupt this sequence. After about an hour of sleep, the person begins to cycle back from stage 4 through stages 3, 2, and then REM. The sequence repeats, with each complete cycle lasting about 90 minutes. Early in the night, stages 3 and 4 predominate. Toward morning, the duration of stage 4 grows shorter and the duration of REM grows longer. Figure 9.10 shows typical sequences.

Figure 9.10 Sequence of sleep stages on three representative nights. Columns indicate awake (A) and sleep stages 2, 3, 4, and REM across the night, from about 11 P.M. to 6 A.M. Deflections in the line at the bottom of each chart indicate shifts in body position. Note that stage 4 sleep occurs mostly in the early part of the night's sleep, whereas REM sleep becomes more prevalent toward the end. (Source: Based on Dement & Kleitman, 1957a)

The tendency to increase REM depends on the time of day, not on how long you have been asleep. That is, if you go to sleep later than usual, you still begin to increase your REM at about the same time that you would have ordinarily (Czeisler, Weitzman, Moore-Ede, Zimmerman, & Knauer, 1980). Most depressed people enter REM quickly after falling asleep, even at their normal bedtime, suggesting that their circadian rhythm is out of synchrony with real time. Initially after the discovery of REM, researchers believed it was almost synonymous with dreaming.
William Dement and Nathaniel Kleitman (1957b) found that people who were awakened during REM sleep reported dreams 80% to 90% of the time. Later researchers, however, found that people also sometimes reported dreams when they were awakened from NREM sleep. REM dreams are more likely than NREM dreams to include striking visual imagery and complicated plots, but not always. Some people with brain damage continue to have REM sleep but do not report any dreams (Bischof & Bassetti, 2004), and other people continue to report dreams despite no evidence of REM sleep (Solms, 1997). In short, REM and dreams usually overlap, but they are not the same thing.
STOP & CHECK

2. How can an investigator determine whether a sleeper is in REM sleep?
3. During which part of a night's sleep is REM most common?

Check your answers on page 285.
Brain Mechanisms of Wakefulness and Arousal

Recall from Chapter 1 the distinction between the "easy" and "hard" problems of consciousness. The "easy" problems include such matters as, "Which brain areas increase overall alertness, and by what kinds of transmitters do they do so?" As you are about to see, that question may be philosophically easy, but it is scientifically complex.
Brain Structures of Arousal and Attention

After a cut through the midbrain separates the forebrain and part of the midbrain from all the lower structures, an animal enters a prolonged state of sleep for the next few days. Even after weeks of recovery, the wakeful periods are brief. We might suppose a simple explanation: The cut isolated the brain from the sensory stimuli that come up from the medulla and spinal cord. However, if a researcher cuts each individual tract that enters the medulla and spinal cord, thus depriving the brain of that sensory input, the animal still has normal periods of wakefulness and sleep. Evidently, the midbrain does more than just relay sensory information; it has its own mechanisms to promote wakefulness.

A cut through the midbrain decreases arousal by damaging the reticular formation, a structure that extends from the medulla into the forebrain. Some neurons of the reticular formation have axons ascending into the brain, and others have axons descending into the spinal cord. Those with descending axons form part of the ventromedial tract of motor control, as discussed in Chapter 8. In 1949, Giuseppe Moruzzi and H. W. Magoun proposed that those with ascending axons are well suited to regulate arousal. The term reticular (based on the Latin word rete, meaning "net") describes the widespread connections among neurons in this system.

One part of the reticular formation that contributes to cortical arousal is known as the pontomesencephalon (Woolf, 1996). (The term derives from pons and mesencephalon, or "midbrain.") These neurons receive input from many sensory systems and generate spontaneous activity of their own. Their axons extend into the forebrain, as shown in Figure 9.11, releasing acetylcholine and glutamate, which produce excitatory effects in the hypothalamus, thalamus, and basal forebrain. Consequently, the pontomesencephalon maintains arousal during wakefulness and increases it in response to new or challenging tasks (Kinomura, Larsson, Gulyás, & Roland, 1996). Stimulation of the pontomesencephalon awakens a sleeping individual or increases alertness in one already awake, shifting the EEG from long, slow waves to short, high-frequency waves (Munk, Roelfsema, König, Engel, & Singer, 1996). However, subsystems within the pontomesencephalon control different sensory modalities, so a stimulus sometimes arouses one part of the brain more than others (Guillery, Feig, & Lozsádi, 1998).

Arousal is not a unitary process, and neither is attention (Robbins & Everitt, 1995). Waking up, directing attention to a stimulus, storing a memory, and increasing goal-directed effort rely on separate systems.
For instance, the locus coeruleus (LOW-kus ser-ROO-lee-us; literally, "dark blue place"), a small structure in the pons, is inactive at most times but emits bursts of impulses in response to meaningful events. The axons of the locus coeruleus release norepinephrine widely throughout the cortex, so this tiny area has a huge influence. Stimulation of the locus coeruleus strengthens the storage of recent memories (Clayton & Williams, 2000) and increases wakefulness (Berridge, Stellick, & Schmeichel, 2005). The locus coeruleus is usually silent during sleep.

The hypothalamus has several pathways that influence arousal. One set of axons releases the neurotransmitter histamine (Lin, Hou, Sakai, & Jouvet, 1996), which produces widespread excitatory effects throughout the brain, increasing wakefulness and alertness (Haas & Panula, 2003). Antihistamine drugs, often used for allergies, counteract this transmitter and produce drowsiness. Antihistamines that do not cross the blood-brain barrier avoid that side effect.

Another pathway, mainly from the lateral nucleus of the hypothalamus, releases a peptide neurotransmitter called either orexin or hypocretin. For simplicity, this text will stick to one term, orexin, but you might find the term hypocretin in other reading. The axons releasing orexin extend widely throughout the forebrain and brainstem, where they stimulate acetylcholine-releasing cells, thereby increasing wakefulness and arousal (Kiyashchenko et al., 2002). Orexin is not necessary for waking up, but it is necessary for staying awake. Most adult humans stay awake for roughly 16 to 17 hours at a time, even when nothing much is happening; staying awake that long depends on orexin, especially toward the end of the day (Lee, Hassani, & Jones, 2005). A study of squirrel monkeys, which have waking and sleeping schedules similar to those of humans, found low orexin levels early in the morning. As the day continued, orexin levels rose, and if the monkeys were kept awake beyond their usual sleep time, orexin levels stayed high. As soon as the monkeys went to sleep, the orexin levels began to drop (Zeitzer et al., 2003).

Other pathways from the lateral hypothalamus regulate cells in the basal forebrain (an area just anterior and dorsal to the hypothalamus). Basal forebrain cells provide axons that extend throughout the thalamus and cerebral cortex (see Figure 9.11). Some of these axons release acetylcholine, which is excitatory and tends to increase arousal (Mesulam, 1995; Szymusiak, 1995). People with Alzheimer's disease (Chapter 13) lose many of these acetylcholine-releasing cells. Damage to these cells does not increase sleep, but it does impair alertness and attention (Berntson, Shafi, & Sarter, 2002). Other axons from the basal forebrain release GABA, the brain's main inhibitory transmitter.
GABA is essential for sleep; that is, without the inhibition provided by GABA, sleep would not occur (Gottesmann, 2004). The functions of GABA help explain what we experience during sleep. During sleep, body temperature and metabolic rate decrease slightly, and so does the activity of neurons, but by less than we might expect. Spontaneously active neurons continue to fire at almost their usual rate, and neurons in the brain's sensory areas continue to respond to sounds and other stimuli. Nevertheless, we are unconscious. An explanation is that GABA inhibits synaptic activity. A neuron may be active, either spontaneously or in response to a stimulus, but its axons do not spread the stimulation to other areas because of inhibition by GABA. Researchers demonstrated that a stimulus could excite a brain area as strongly during sleep as during wakefulness, but the excitation was briefer than usual and did not spread to other areas (Massimini et al., 2005).

Figure 9.11 Brain mechanisms of sleeping and waking. The figure shows connections among the pontomesencephalon, locus coeruleus, dorsal raphe, hypothalamus, and basal forebrain, labeled with the transmitters involved (acetylcholine, GABA, histamine, norepinephrine, serotonin, and adenosine). Green arrows indicate excitatory connections; red arrows indicate inhibitory connections. Neurotransmitters are indicated where they are known. Although adenosine is shown as a small arrow, it is a metabolic product that builds up in the area, not something released by axons. (Source: Based on Lin, Hou, Sakai, & Jouvet, 1996; Robbins & Everitt, 1995; and Szymusiak, 1995)
Getting to Sleep

Sleep requires decreased arousal, largely by means of adenosine (ah-DENN-o-seen). During metabolic activity, adenosine monophosphate (AMP) breaks down into adenosine; thus, when the brain is awake and active, adenosine accumulates. In most of the brain, adenosine has little effect, but it inhibits the basal forebrain cells responsible for arousal (Figure 9.12), acting by metabotropic synapses that produce an effect lasting hours (Basheer, Rainnie, Porkka-Heiskanen, Ramesh, & McCarley, 2001). When people are deprived of sleep, the accumulating adenosine produces prolonged sleepiness—a phenomenon known as "sleep debt."

Caffeine, a drug found in coffee, tea, and many soft drinks, increases arousal by blocking adenosine receptors (Rainnie, Grunze, McCarley, & Greene, 1994). It also constricts the blood vessels in the brain, thereby decreasing the brain's blood supply. (Abstention from caffeine after repeated use can increase blood flow to the brain enough to cause a headache.) The message: Just as you might use caffeine to try to keep yourself awake, you might try decreasing your caffeine intake if you have trouble sleeping.

Prostaglandins are additional chemicals that promote sleep, among other functions. Like adenosine, prostaglandins build up during the day until they provoke sleep, and they decline during sleep (Ram et al., 1997; Scamell et al., 1998). In response to infection, the immune system produces more prostaglandins, resulting in the sleepiness that accompanies illness. Table 9.1 summarizes the effects of some key brain areas on arousal and sleep.
Figure 9.12 Basal forebrain. The basal forebrain is the source of many excitatory axons (releasing acetylcholine) and inhibitory axons (releasing GABA) that regulate arousal of the cerebral cortex.
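The idea of adenosine accumulating during wakefulness and dissipating during sleep, the basis of "sleep debt," can be illustrated with a toy accumulator. This sketch (Python; the rise and decay rates are arbitrary assumptions, not physiological measurements) shows how staying awake through an extra night raises the accumulated pressure and how a night of sleep discharges most, but not all, of it.

```python
def sleep_pressure(schedule, rise_per_h=1.0, decay_per_h=0.3, pressure=0.0):
    """Toy model of adenosine-like "sleep pressure."

    `schedule` is a sequence of (hours, "awake" or "asleep") blocks.
    Pressure accumulates at a constant rate while awake and decays
    exponentially during sleep. All rates are illustrative only.
    """
    for hours, state in schedule:
        for _ in range(int(hours)):
            if state == "awake":
                pressure += rise_per_h          # adenosine builds up with waking activity
            else:
                pressure *= (1 - decay_per_h)   # sleep clears part of the accumulated pressure
        print(f"after {hours:2d} h {state:6s}: pressure = {pressure:5.1f}")
    return pressure

# A normal day, then an all-nighter: the unpaid "sleep debt" carries over.
sleep_pressure([(16, "awake"), (8, "asleep"), (24, "awake"), (8, "asleep")])
```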
STOP & CHECK

4. What would happen to the sleep–wake schedule of someone who took a drug that blocked GABA?
5. Why do most antihistamines make people drowsy?
6. What would happen to the sleep–wake schedule of someone who lacked orexin?
7. How does caffeine increase arousal?

Check your answers on page 285.
Table 9.1 Brain Structures for Arousal and Sleep

Structure | Neurotransmitter(s) It Releases | Effects on Behavior
Pontomesencephalon | Acetylcholine, glutamate | Increases cortical arousal
Locus coeruleus | Norepinephrine | Increases information storage during wakefulness; suppresses REM sleep
Basal forebrain (excitatory cells) | Acetylcholine | Excites thalamus and cortex; increases learning, attention; shifts sleep from NREM to REM
Basal forebrain (inhibitory cells) | GABA | Inhibits thalamus and cortex
Hypothalamus (parts) | Histamine | Increases arousal
Hypothalamus (parts) | Orexin | Maintains wakefulness
Dorsal raphe and pons | Serotonin | Interrupts REM sleep
Brain Function in REM Sleep

Researchers who were interested in the brain mechanisms of REM decided to use a PET scan to determine which areas increased or decreased their activity during REM. Although that research might sound simple, PET requires injecting a radioactive chemical. Imagine trying to give sleepers an injection without awakening them. Further, a PET scan yields a clear image only if the head remains motionless during data collection. If the person tosses or turns even slightly, the image is worthless. To overcome these difficulties, researchers in two studies persuaded some young people to sleep with their heads firmly attached to masks that did not permit any movement. They also inserted a cannula (plastic tube) into each person's arm so that they could inject radioactive chemicals at various times during the night. So imagine yourself in that setup. You have a cannula in your arm and your head is locked into position. Now try to sleep. Because the researchers foresaw the difficulty of sleeping under these conditions (!), they had their participants stay awake the entire previous night. Someone who is tired enough can sleep even under trying circumstances. (Maybe.)

Now that you appreciate the heroic nature of the procedures, here are the results. During REM sleep, activity increased in the pons and the limbic system (which is important for emotional responses). Activity decreased in the primary visual cortex, the motor cortex, and the dorsolateral prefrontal cortex but increased in parts of the parietal and temporal cortex (Braun et al., 1998; Maquet et al., 1996). In the next module, we consider what these results imply about dreaming, but for now note that activity in the pons triggers the onset of REM sleep.

REM sleep is associated with a distinctive pattern of high-amplitude electrical potentials known as PGO waves, for pons-geniculate-occipital (Figure 9.13). Waves of neural activity are detected first in the pons, shortly afterward in the lateral geniculate nucleus of the thalamus, and then in the occipital cortex (D. C. Brooks & Bizzi, 1963; Laurent, Cespuglio, & Jouvet, 1974). Each animal maintains a nearly constant amount of PGO waves per day. During a prolonged period of REM deprivation, PGO waves begin to emerge during sleep stages 2 to 4—when they do not normally occur—and even during wakefulness, often in association with strange behaviors, as if the animal were hallucinating. At the end of the deprivation period, when an animal is permitted to sleep without interruption, the REM periods have an unusually high density of PGO waves.

Besides originating the PGO waves, cells in the pons contribute to REM sleep by sending messages to the spinal cord, inhibiting the motor neurons that control the body's large muscles. After damage to the floor of the pons, a cat still has REM sleep periods, but its muscles are not relaxed. During REM, it walks (though awkwardly), behaves as if it were chasing imagined prey, jumps as if startled, and so forth (Morrison, Sanford, Ball, Mann, & Ross, 1995) (Figure 9.14). Is the cat acting out dreams? We do not know; the cat cannot tell us. Evidently, one function of the messages from the pons to the spinal cord is to prevent action during REM sleep.

REM sleep apparently depends on a relationship between the neurotransmitters serotonin and acetylcholine. Injections of the drug carbachol, which stimulates acetylcholine synapses, quickly move a sleeper into REM sleep (Baghdoyan, Spotts, & Snyder, 1993).
Figure 9.13 PGO waves. PGO waves start in the pons (P) and then show up in the lateral geniculate (G) and the occipital cortex (O). Each PGO wave is synchronized with an eye movement in REM sleep.
Figure 9.14 (images not available due to copyright restrictions)
Note that acetylcholine is important for both wakefulness and REM sleep, two states that activate most of the brain. Serotonin, however, interrupts or shortens REM sleep (Boutrel, Franc, Hen, Hamon, & Adrien, 1999). So does norepinephrine from the locus coeruleus; bursts of activity in the locus coeruleus block REM sleep (Singh & Mallick, 1996).

Sleep Disorders

How much sleep is enough? Different people need different amounts. The best gauge of insomnia—inadequate sleep—is whether someone feels rested the following day. If you consistently feel tired, you are not sleeping enough. Inadequate sleep is a major cause of accidents by workers and poor performance by college students. Driving while sleep deprived is comparable to driving under the influence of alcohol (Falleti, Maruff, Collie, Darby, & McStephen, 2003).

Causes of insomnia include noise, uncomfortable temperatures, stress, pain, diet, and medications. Insomnia can also be the result of epilepsy, Parkinson's disease, brain tumors, depression, anxiety, or other neurological or psychiatric conditions. Some children suffer insomnia because they are milk-intolerant, and their parents, not realizing the intolerance, give them milk to drink right before bedtime (Horne, 1992). One man suffered insomnia for months until he realized that he dreaded going to sleep because he hated waking up to go jogging. After he switched his jogging time to late afternoon, he slept without difficulty. In short, try to identify the reasons for your sleep problems before you try to solve them.

Many cases of insomnia relate to shifts in circadian rhythms (MacFarlane, Cleghorn, & Brown, 1985a, 1985b). Ordinarily, people fall asleep while their temperature is declining and awaken while it is rising, as in Figure 9.15(a). Someone whose rhythm is phase-delayed, as in Figure 9.15(b), has trouble falling asleep at the usual time, as if the hypothalamus thinks it isn't late enough (Morris et al., 1990). Someone whose rhythm is phase-advanced, as in Figure 9.15(c), falls asleep easily but awakens early.

Figure 9.15 Insomnia and circadian rhythms. (a) Normal circadian rhythm of body temperature relative to the sleep period. (b) A 3-hour phase delay produces difficulty getting to sleep. (c) A 3-hour phase advance produces difficulty staying asleep. A delay in the circadian rhythm of body temperature is associated with onset insomnia; an advance, with termination insomnia.

Another cause of insomnia is, paradoxically, the use of tranquilizers as sleeping pills. Although tranquilizers may help a person fall asleep, taking them repeatedly may cause dependence. Without the pills, the person goes into a withdrawal state that includes prolonged wakefulness (Kales, Scharf, & Kales, 1978). Similar problems arise when people use alcohol to get to sleep.
One special cause of insomnia is sleep apnea, the inability to breathe while sleeping. Most people beyond age 45 have occasional periods during their sleep when they go at least 9 seconds without breathing, usually during the REM stage (Culebras, 1996). However, people with sleep apnea go without breathing for longer periods, sometimes a minute or more, and then awaken, gasping for breath. Some do not remember all their nighttime awakenings, although they certainly notice the consequences—sleepiness during the day, impaired attention, depression, and sometimes heart problems. People with sleep apnea have multiple areas in their brains where they appear to have lost neurons, and consequently, they show deficiencies of learning, reasoning, attention, and impulse control (Beebe & Gozal, 2002; Macey et al., 2002). These correlational data do not tell us whether the brain abnormalities led to sleep apnea or sleep apnea led to the brain abnormalities. However, research with rats suggests the latter: Rats that are subjected to frequent periods of low oxygen (as if they hadn’t been breathing) lose neurons throughout the cerebral cortex and hippocampus and show impairments of learning and memory (Gozal, Daniel, & Dohanich, 2001). Sleep apnea results from several causes, including genetics, hormones, and old-age deterioration of the brain mechanisms that regulate breathing. Another cause is obesity, especially in middle-age men. Many obese men have narrower than normal airways and have to compensate by breathing more frequently or more vigorously than others do. During sleep, they cannot keep up that rate of breathing. Furthermore, their airways become even narrower than usual when they adopt a sleeping posture (Mezzanotte, Tangel, & White, 1992). People with sleep apnea are advised to lose weight and avoid alcohol and tranquilizers (which impair the breathing muscles). Medical options include surgery to remove tissue that obstructs the trachea (the breathing passage) or a mask that covers the nose and delivers air under enough pressure to keep the breathing passages open (Figure 9.16).
Figure 9.16 A Continuous Positive Airway Pressure (CPAP) mask. The mask fits snugly over the nose and delivers air at a fixed pressure, strong enough to keep the breathing passages open.

Narcolepsy
Narcolepsy, a condition characterized by frequent periods of sleepiness during the day (Aldrich, 1998), strikes about 1 person in 1,000. It sometimes runs in families, although no gene for narcolepsy has been identified, and many people with narcolepsy have no close relatives with the disease.
Narcolepsy has four main symptoms, although not every patient has all four:
1. Gradual or sudden attacks of sleepiness during the day.
2. Occasional cataplexy—an attack of muscle weakness while the person remains awake. Cataplexy is often triggered by strong emotions, such as anger or great excitement. (One man suddenly collapsed during his own wedding ceremony.)
3. Sleep paralysis—an inability to move while falling asleep or waking up. Other people may experience sleep paralysis occasionally, but people with narcolepsy experience it more frequently.
4. Hypnagogic hallucinations—dreamlike experiences that the person has trouble distinguishing from reality, often occurring at the onset of sleep.
Each of these symptoms can be interpreted as an intrusion of a REM-like state into wakefulness. REM sleep is associated with muscle weakness (cataplexy), paralysis, and dreams (Mahowald & Schenck, 1992). The cause relates to the neurotransmitter orexin. People with narcolepsy lack the hypothalamic cells that produce and release orexin (Thannickal et al., 2000). Why they lack those cells is not known, although one hypothesis is that they have an autoimmune disease that attacks these cells. Recall that orexin is important for maintaining wakefulness (p. 278). Consequently, people lacking orexin cannot stay awake throughout the day; they have many brief sleepy periods. Dogs that lack the gene for orexin receptors have symptoms much like human narcolepsy, with frequent alternations between wakefulness and sleep (Lin et al., 1999). The same is true for mice that lack orexin (Hara, 2001).
Over the course of a day, they have about a normal amount of wakefulness and sleep, but they do not stay awake for long at any one time (Mochizuki et al., 2004).
As discussed in Chapter 8, people with Huntington's disease have widespread damage in the basal ganglia. In addition, most lose neurons in the hypothalamus, including the neurons that make orexin. As a result, they have problems staying awake during the day; they also have periods of arousal and activity while in bed at night (Morton et al., 2005).
Theoretically, we might imagine combating narcolepsy with drugs that restore orexin. Perhaps eventually, such drugs will become available. Currently, the most common treatment is stimulant drugs, such as methylphenidate (Ritalin), which increase wakefulness by enhancing dopamine or norepinephrine activity.
Periodic Limb Movement Disorder
Another factor occasionally linked to insomnia is periodic limb movement disorder, a repeated involuntary movement of the legs and sometimes arms (Edinger et al., 1992). Many people, perhaps most, experience an occasional involuntary kick, especially when starting to fall asleep. Leg movements are not a problem unless they become persistent. In some people, mostly middle-aged and older, the legs kick once every 20 to 30 seconds for a period of minutes or even hours, mostly during NREM sleep. Frequent or especially vigorous leg movements may awaken the person. In some cases, tranquilizers help suppress the movements (Schenck & Mahowald, 1996).
A night terror should be distinguished from a nightmare, which is simply an unpleasant dream. Night terrors occur during NREM sleep and are far more common in children than in adults.
Sleep talking is common and harmless. Many people, probably most, talk in their sleep occasionally. Unless someone hears you talking in your sleep and later tells you about it, you could talk in your sleep for years and never know about it. Sleep talking occurs during both REM and non-REM sleep (Arkin, Toth, Baker, & Hastey, 1970).
Sleepwalking runs in families, occurs mostly in children ages 2 to 5, and is most common during stage 3 or stage 4 sleep early in the night. (It does not occur during REM sleep because the large muscles are completely relaxed.) The causes are not known. Sleepwalking is generally harmless both to the sleepwalker and to others. No doubt you have heard people say, "You should never waken a sleepwalker." In fact, waking a sleepwalker is neither harmful nor dangerous, although the person would awaken confused (Moorcroft, 1993).
In an individual case, it is sometimes difficult to know whether someone is sleepwalking. One man pleaded "not guilty" to a charge of murder because he had been sleepwalking at the time and did not know what he was doing. The jury agreed with him, partly because he had a family history of sleepwalking. However, especially for a one-time event, it is difficult to know whether he was sleepwalking, subject to REM behavior disorder, or perhaps even awake at the time (Broughton et al., 1994).
For more information about a variety of sleep disorders, check this website: http://www.thesleepsite.com/
REM Behavior Disorder
For most people, the major postural muscles are relaxed and inactive during REM sleep. However, people with REM behavior disorder move around vigorously during their REM periods, apparently acting out their dreams. They frequently dream about defending themselves against attack, and they may punch, kick, and leap about. Most of them injure themselves or other people and damage property.