
NEWS AND VIEWS

Dendritic arithmetic

Nelson Spruston & William L Kath

Nelson Spruston is in the Department of Neurobiology and Physiology and a member of the Institute for Neuroscience, and William L. Kath is in the Department of Engineering Sciences and Applied Mathematics and a member of the Institute for Neuroscience, Northwestern University, Evanston, Illinois 60208, USA. e-mail: [email protected]

Pyramidal neurons integrate synaptic inputs arriving on a structurally and functionally complex dendritic tree that has nonlinear responses. A study in this issue shows that nonlinear computation occurs in individual dendritic branches, and suggests a possible approach to building neural network models directly connected to the behavior of real neurons and synapses.

Why are real brains so much more powerful than artificial neural networks? In part this is a matter of circuit complexity and scale, but it is also because real neurons are much more complex than the simple elements used in neural network models. One strategy for developing more sophisticated neural networks is to replace simple elements with sophisticated computational models of neurons, including branching dendritic trees, thousands of synapses, and dozens of voltage-gated conductances. The problem with this approach is that it is impractical to use such computationally expensive neuronal models in large-scale networks. Another strategy is to develop abstractions of neurons that capture the essential processing power of the real thing. A paper by Polsky and colleagues1 in this issue represents a large step in this direction by providing experimental insight into what kinds of computations real neurons perform and how they do it.

For many decades now, neural network models have relied on simple neuronal elements called 'integrate-and-fire neurons'2. In their simplest form, these abstracted neurons receive numerous excitatory inputs, each of which produces an excitatory postsynaptic potential that decays exponentially. Multiple inputs sum linearly until a threshold is reached, whereupon the neuron 'fires' and produces a response that propagates to all of its targets. Following this virtual action potential, the membrane potential is reset to the resting state. Inhibition can be incorporated using units that inhibit, rather than excite, their targets. Networks combining integrate-and-fire neurons with synaptic plasticity carry out powerful computations that are not easily solved using traditional approaches3.
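For concreteness, the scheme just described can be written down in a few lines of code. The sketch below is a generic leaky integrate-and-fire unit; the parameter values, input statistics and function names are illustrative choices of our own and are not drawn from any particular model discussed here.

```python
import numpy as np

def simulate_lif(input_spikes, weights, dt=0.1, tau=20.0,
                 v_rest=-70.0, v_thresh=-55.0):
    """Minimal leaky integrate-and-fire unit (all parameters illustrative).

    input_spikes: array of shape (n_steps, n_synapses) containing 0s and 1s.
    weights: EPSP amplitude (mV) contributed by each synapse when it is active.
    Returns the membrane potential trace and the time steps of output spikes.
    """
    n_steps, _ = input_spikes.shape
    v = np.full(n_steps, v_rest)
    out_spikes = []
    for t in range(1, n_steps):
        # Exponential decay toward rest plus linear summation of synaptic inputs
        dv = -(v[t - 1] - v_rest) * dt / tau + input_spikes[t] @ weights
        v[t] = v[t - 1] + dv
        if v[t] >= v_thresh:       # threshold reached: the unit 'fires'
            out_spikes.append(t)
            v[t] = v_rest          # membrane potential is reset to rest
    return v, out_spikes

# Example: five synapses, each producing a ~4 mV EPSP, driven by sparse random input
rng = np.random.default_rng(0)
spikes = (rng.random((1000, 5)) < 0.01).astype(float)
v, out = simulate_lif(spikes, weights=np.full(5, 4.0))
print(f"{len(out)} output spikes in {1000 * 0.1:.0f} ms")
```

Essentially everything a conventional network unit does is captured by this loop; the question raised below is what this picture leaves out.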

Neurophysiologists and neural network modelers alike have long known that the integrate-and-fire model is an incomplete representation of most real neurons4. To appreciate the complexity of synaptic integration, one need only glance at the structure of neurons with the knowledge that inputs arriving onto different parts of the dendritic tree will attenuate differently as they spread toward the action potential initiation zone in the axon. Some mechanisms may exist to overcome the dendritic disadvantage of synapses on distant dendrites, but these are unlikely to be effective for all synapses on all dendrites5. It is natural to wonder, therefore, to what extent synapses on the most distant dendrites contribute to action potential initiation in the axon. Further obscuring any answer to this question are ion channels with nonlinear response properties, such as voltage-gated sodium, calcium and potassium channels, as well as transmitter-gated channels including the NMDA receptor, to name a few. These channels and others are expressed abundantly and often nonuniformly in dendrites, thus introducing bewildering complexity to the process of synaptic integration6.

[Figure 1 panels: (a) 'Within branch' and 'Between branches' voltage traces showing individual EPSPs, their arithmetic sum and the combined EPSP; scale bars, 3 mV and 20 ms. (b) Reconstructed neuron and network-model schematic.]

Figure 1 Spiking in individual dendritic branches implies a three-layer model of synaptic integration. (a) Schematic representation of the main finding of Polsky et al.1: two multisynaptic inputs onto a single dendritic branch exhibit superlinear summation (top). Inputs onto separate branches exhibit roughly linear summation (bottom). (b) Reconstructed layer-5 pyramidal neuron (left) and an abstracted three-layer network model (right; based on ref. 14). Red branches represent the distal apical inputs and light blue branches the perisomatic inputs. Together, these inputs constitute the first layer of the network model, each performing superlinear summation of synaptic inputs as shown in a (indicated by small circles with sigmoids). The outputs of this first layer feed into two integration zones: one near the perisomatic branches (dark blue; e.g., soma) and one near the distal apical branches (purple; e.g., apical spike initiation zone). These integration zones constitute the second layer of the network model (large circles with sigmoids). The third layer (not shown) is the action potential initiation zone in the axon. Grey circles indicate connections between layers.


Despite numerous advances in our knowledge of the distribution and properties of voltage-gated channels in dendrites, a quantitative understanding of how they influence synaptic integration and the resulting computational rules has remained obscure.

Polsky et al.1 illuminate the mechanisms of dendritic integration by testing a simple hypothesis: that each terminal dendritic branch acts as a computational subunit, summing synaptic inputs in a sigmoidal fashion (that is, a threshold nonlinearity)7. To test this idea, the authors took advantage of patch-clamp recording in cortical brain slices, combined with confocal imaging of dendritic calcium entry. The imaging methodology allowed them to visually identify small dendritic branches in the neuron from which they were recording. They then placed two small stimulating electrodes near a single dendritic branch and confirmed, using calcium imaging, that synapses were activated locally on the same dendritic branch.

The outcome of the experiments was simple (Fig. 1a). Stimulation of one dendritic branch at two locations led to superlinear summation, supporting the notion that a single dendritic branch performs a sigmoidal computation with a threshold and a ceiling. By contrast, when the two stimulating electrodes were positioned near two different dendritic branches, activation of both inputs produced roughly linear summation. In fact, superlinear summation occurred only if the two activated inputs were less than 100 µm apart on the same dendritic branch. These findings support the idea that individual dendritic branches act as computational subunits, and argue against the notion that dendrites act merely as parking spaces for randomly distributed, globally summed synapses.
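The contrast between within-branch and between-branch summation is easy to reproduce with a toy subunit model in the spirit of the hypothesis of ref. 7. In the sketch below, each branch applies a sigmoidal threshold-and-ceiling nonlinearity to its total synaptic drive before contributing to the somatic response; the sigmoid parameters and EPSP sizes are invented for illustration and are not fitted to the data of Polsky et al.1.

```python
import numpy as np

def branch_output(drive, threshold=6.0, gain=1.0, ceiling=10.0):
    """Sigmoidal subunit: somatic depolarization (mV) contributed by one branch,
    given the total synaptic drive (mV) delivered to that branch. Illustrative only."""
    return ceiling / (1.0 + np.exp(-gain * (drive - threshold)))

epsp = 3.0  # drive from one stimulating electrode (mV, invented value)

# Two inputs on the SAME branch: their drives add before the branch nonlinearity
combined_within = branch_output(2 * epsp)
arithmetic_sum = 2 * branch_output(epsp)       # sum of the two responses measured alone

# Two inputs on DIFFERENT branches: each passes through its own nonlinearity
combined_between = branch_output(epsp) + branch_output(epsp)

print(f"within one branch:   combined {combined_within:.2f} mV vs arithmetic sum {arithmetic_sum:.2f} mV")
print(f"between two branches: combined {combined_between:.2f} mV vs arithmetic sum {arithmetic_sum:.2f} mV")
```

Combined within a branch, the two inputs cross the subunit threshold and the response far exceeds the arithmetic sum; split across two branches, each input stays on the shallow part of its own sigmoid and the combined response is simply additive.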


Further observations in the Polsky et al. study1 suggest a mechanism for the thresholding nonlinearity in thin dendritic branches. Blocking glutamate-gated NMDA receptors largely eliminated the superlinear summation of inputs onto a single dendritic branch, suggesting that the voltage-dependent properties of this receptor mediate all-or-none 'NMDA spikes' in dendritic branches8. Other details support the NMDA-spike mechanism. For example, double stimulation of each input was most effective at producing superlinear summation. That longer depolarizations are most effective is consistent with the slow kinetics of the relief of voltage-dependent magnesium blockade of NMDA receptors9,10. Also, activation of the two inputs with an interval as long as 40 ms produced superlinear summation, which is consistent with the long occupancy of NMDA receptors by glutamate11. Though the authors did not completely rule out contributions from voltage-gated sodium and calcium channels to 'branch spikes', NMDA spikes are significantly different from spikes mediated by voltage-gated channels, because they cannot actively spread beyond the region of the dendrites activated by glutamate. Consequently, NMDA receptors provide an excellent mechanism for producing local spikes.
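The voltage dependence underlying this mechanism is often summarized with a Jahr–Stevens-style expression for the fraction of NMDA-receptor conductance that escapes block by extracellular magnesium. The sketch below uses commonly quoted textbook constants, not values measured by Polsky et al.1, simply to show why depolarization is regenerative for this conductance.

```python
import numpy as np

def nmda_unblocked_fraction(v_mV, mg_mM=1.0):
    """Fraction of NMDA-receptor conductance not blocked by extracellular Mg2+,
    in the style of Jahr & Stevens (textbook constants, for illustration only)."""
    return 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * v_mV))

# Depolarization relieves the block, which admits more current, which depolarizes
# the branch further: the positive feedback behind an all-or-none 'NMDA spike'.
for v in (-70, -50, -30, -10):
    print(f"V = {v:4d} mV: {nmda_unblocked_fraction(v):.2f} of the NMDA conductance is available")
```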

How does the existence of spiking branches change our view of synaptic integration? The authors find that a spike in a single branch of the basal dendrites in a layer-5 pyramidal neuron produces a depolarization of about 10 mV in the soma. Because a depolarization of 15–20 mV is required to span the gap between the resting potential and the threshold of an axonal action potential, a small number of these spiking dendritic branches, activated nearly simultaneously, might be sufficient to produce an action potential. It is not known, though, how many synapses are sufficient to produce a branch spike or whether the depolarization produced by each branch spike would sum linearly or nonlinearly.

A separate layer of integration is likely to be imposed by the apical dendritic tree. Because of the long primary apical dendrite, synaptic potentials from the apical dendrites attenuate substantially before reaching the soma. Polsky et al.1 show that branch spikes from proximal apical dendrites produce a somatic depolarization similar to the effect of basal dendrite activity. More distal branches on the apical tuft, however, are likely to produce considerably less somatic depolarization. One mechanism for overcoming the weak influence of apical dendritic branches in the soma and axon is to have a separate spike initiation zone in the apical dendrites. There is now substantial evidence for such a mechanism and for interesting interactions between the axonal and apical dendritic action potential initiation zones12,13.

Based on these findings, the authors suggest that a single layer-5 pyramidal neuron may act like a three-layer neural network14 (Fig. 1b). The first layer consists of individual dendritic branches (roughly 50–100 in a typical pyramidal neuron), each of which performs a sigmoidal computation. The output of each element in the first layer is then passed to an element in the second layer, which in the simplest case consists of two integration zones (for example, one near the perisomatic branches and one near the distal apical branches). A more complex case might consist of multiple elements in the second layer, corresponding to multiple integration zones on the main apical dendrite. Finally, the output of the second-layer elements would be passed to the third layer, consisting of a single action potential initiation zone (the axon).

What is exciting about this proposal is that individual neurons could be modeled using a framework that is widely implemented in current neural network models. In particular, the 'abstracted' integrate-and-fire elements used by neural network modelers may come full circle and be the key components needed to understand how real neurons perform synaptic integration. With a better idea of how pyramidal neurons function, moreover, it might now be conceivable to build neural networks directly connected to the behavior of real neurons and real synaptic connections.

Additional details regarding synaptic integration could be layered on top of the pyramidal neuron abstraction. For example, inhibitory interneurons targeting different domains of the dendritic tree could be modeled by inputs to subdomains of the three-layered pyramidal neuron, and interactions between multiple spike-initiation zones could be modeled by interactions between elements in the second layer.
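As a rough sketch of how this three-layer abstraction might look in code, the fragment below wires sigmoidal branch subunits (layer 1) into two integration zones (layer 2) whose outputs converge on an axonal spike decision (layer 3). The branch counts, thresholds and gains are placeholders chosen only to make the structure explicit; they are not taken from ref. 14 or from the data of Polsky et al.1.

```python
import numpy as np

def sigmoid(x, thresh, gain=1.0):
    return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

def pyramidal_cell_fires(basal_drive, apical_drive,
                         branch_thresh=6.0, zone_thresh=2.0, axon_thresh=0.7):
    """Three-layer caricature of a layer-5 pyramidal neuron (placeholder parameters).

    basal_drive and apical_drive hold the synaptic drive delivered to each
    terminal branch of the two dendritic domains (arbitrary units).
    Returns True if the axonal initiation zone 'fires'.
    """
    # Layer 1: every terminal branch applies its own sigmoidal nonlinearity
    basal_branches = sigmoid(np.asarray(basal_drive, float), branch_thresh)
    apical_branches = sigmoid(np.asarray(apical_drive, float), branch_thresh)

    # Layer 2: two integration zones (perisomatic and distal apical) pool branch outputs
    perisomatic_zone = sigmoid(basal_branches.sum(), zone_thresh)
    apical_zone = sigmoid(apical_branches.sum(), zone_thresh)

    # Layer 3: the axonal action potential initiation zone sums the two zones
    return perisomatic_zone + apical_zone >= axon_thresh

# Clustered drive that pushes three basal branches past their branch threshold
print(pyramidal_cell_fires(basal_drive=[8, 7, 9, 0, 0], apical_drive=[1, 1, 1]))  # True
# The same total drive scattered thinly across branches stays subthreshold everywhere
print(pyramidal_cell_fires(basal_drive=[5, 5, 5, 5, 4], apical_drive=[1, 1, 1]))  # False
```

In such a scheme, inhibitory inputs targeting a particular dendritic domain, or interactions between multiple apical integration zones, would simply appear as additional terms in the first or second layer.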


Despite these exciting results and their wide-reaching implications, the pyramidal neuron is not yet completely solved. To name just a few of the many remaining mysteries: we still do not fully understand the mechanism of branch-spike initiation and spread, or the mechanisms for spike initiation in larger dendrites. We still do not know how many integration zones might lie in the larger branches of apical dendrites or precisely how they interact. We still do not fully understand the significance and implications of the nonuniform distribution of some channels in dendrites, such as the high density of hyperpolarization-activated channels in distal dendrites15. We do not understand the multitude of modulatory and plastic influences that affect somatic, axonal and dendritic ion channels and shape dendritic integration. And, importantly, we still do not understand how neurons in other brain regions (hippocampal pyramidal neurons, cerebellar Purkinje cells, myriad inhibitory interneurons) differ from the layer-5 pyramidal neuron of neocortex with respect to synaptic integration. Research in these areas will continue, but in the meantime, the paths of neural network modelers and experimental neuroscientists have crossed, and a golden opportunity exists for an effective exchange of ideas leading to new advances in our understanding of dendritic arithmetic and neural computation.

1. Polsky, A., Mel, B.W. & Schiller, J. Nat. Neurosci. 7, 621–627 (2004).
2. Dayan, P. & Abbott, L.F. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (MIT Press, Cambridge, Massachusetts, USA, 2001).
3. Fausett, L.V. Fundamentals of Neural Networks (Prentice Hall, Upper Saddle River, New Jersey, USA, 1994).
4. Koch, C. & Segev, I. Nat. Neurosci. 3 (Suppl.), 1171–1177 (2000).
5. Williams, S.R. & Stuart, G.J. Trends Neurosci. 26, 147–154 (2003).
6. Häusser, M., Spruston, N. & Stuart, G.J. Science 290, 739–744 (2000).
7. Poirazi, P., Brannon, T. & Mel, B.W. Neuron 37, 977–987 (2003).
8. Schiller, J., Major, G., Koester, H.J. & Schiller, Y. Nature 404, 285–289 (2000).
9. Kampa, B.M., Clements, J., Jonas, P. & Stuart, G.J. J. Physiol. (Lond.) 556, 337–345 (2004).
10. Vargas-Caballero, M. & Robinson, H.P. J. Neurophysiol. 89, 2778–2783 (2003).
11. Lester, R.A. & Jahr, C.E. J. Neurosci. 12, 635–643 (1992).
12. Larkum, M.E., Zhu, J.J. & Sakmann, B. J. Physiol. (Lond.) 533, 447–466 (2001).
13. Yuste, R., Gutnick, M.J., Saar, D., Delaney, K.R. & Tank, D.W. Neuron 13, 23–43 (1994).
14. Häusser, M. & Mel, B.W. Curr. Opin. Neurobiol. 13, 372–383 (2003).
15. Lorincz, A., Notomi, T., Tamas, G., Shigemoto, R. & Nusser, Z. Nat. Neurosci. 5, 1185–1193 (2002).

Deconstructing a navigational neuron

Günther M Zeck & Richard H Masland

Günther M. Zeck and Richard H. Masland are at the Howard Hughes Medical Institute, Massachusetts General Hospital and Harvard Medical School, 50 Blossom Street, Wellman 429, Boston, Massachusetts 02114, USA. e-mail: [email protected]

Flies show remarkable flight control, aided partly by motion-sensitive neurons in the visual ganglia. Haag and Borst now unravel the microcircuitry of some of these motion-analyzing cells, and suggest a mechanism for their receptive field tuning.

The brain of a fly is small but powerful. Although we do not know much about a fly's thoughts, everyday experience teaches us plenty about another of its brain functions, the control of flight. Flies move fast, they avoid (most) obstacles, and they are remarkably good at dodging predators, such as birds and rolled-up newspapers. These visual stunts are possible in part because of the fly's unusually large eyes (Fig. 1), but the most interesting computations are carried out by a small set (∼120) of visually driven neurons located within the head ganglia. In this issue, Jürgen Haag and Alexander Borst get into the middle of this machinery—to a place roughly halfway between light detection and wing movement—and unravel the microcircuitry that controls the activity of one of the motion-analyzing cells1.

The eye of a blowfly consists of a precise array of photosensitive cells, arranged in groups of eight. These cells converge via several way-stations onto higher-order cells within a central structure called the lobula plate. The responses of photoreceptor cells signal only the presence or absence of light, but at later stages, more interesting properties begin to appear, one of which is that some neurons become direction-selective, passing along a signal that depends on the direction of stimulus motion. This is, however, only the beginning of the story. Cells of the lobula plate have a much larger and more puzzling repertoire, which Haag and Borst now set out to explain, focusing on neurons responsive to vertical motion, termed VS or 'vertical system' cells.

There are ten distinguishable VS cells in each hemisphere of the blowfly. Each of them has a region of visual space upon which it reports, its receptive field. In general, the receptive field of a neuron of the visual system roughly corresponds to the extent of the input neurons sampled by its dendritic arbor, but the receptive fields of the VS neurons instead cover a huge area, representing a visual angle of more than 100 degrees. Instead of surveying a narrow segment of the world, each VS cell somehow views an area bigger than the area scanned by its own set of photoreceptor cells.

The VS cells have a remarkable second feature2. Not only do they have a bias for upward or downward motion (as their name implies); they are also particularly sensitive to rotational flow fields (Fig. 1; see also Supplementary Fig. 1 of the paper by Haag and Borst1). A rotating visual stimulus is what is seen when one looks at the center of a propeller; a rotational flow field is the visual stimulus generated during the act of tilting one's head, when the whole visual world rotates.

It turns out that the VS cells' complex response properties can be explained by lateral connections among the VS1–VS10 neurons. Haag and Borst used two electrodes to record from pairs of VS cells. Passing current into one cell was found to depolarize nearby cells. The connection was bidirectional, so that current could flow from either cell of the pair to the other. The coupling between pairs of cells became weaker as the distance between the cells increased. The connection could be analyzed as a low-pass filter, and had kinetics of increasing order for cells that were more separated in distance. These results indicate that the cells are coupled by gap junctions, in a chain-like fashion such that depolarizing one cell depolarizes its neighbor, which then depolarizes the next neighbor and so on. Interestingly, two-photon imaging of multiple VS cells injected with fluorescent dyes gave no convincing evidence of contact among the dendrites; the gap junctions instead seem to be axo-axonal.
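A compartment-chain caricature illustrates why chained gap junctions would produce both the distance-dependent weakening and the increasingly sluggish, higher-order filtering of the coupling that Haag and Borst describe. The cell count, conductances and time step below are arbitrary illustrative choices, not measurements from the paper.

```python
import numpy as np

def simulate_chain(n_cells=10, g_leak=1.0, g_gap=0.5, i_inj=1.0,
                   dt=0.01, t_stop=50.0):
    """Chain of cells coupled to their nearest neighbors by gap junctions (toy model).

    A current step is injected into cell 0; the function returns the voltage of
    every cell over time (arbitrary units, membrane capacitance folded into the
    time scale). Responses of more distant cells are smaller and more heavily
    low-pass filtered, as expected for signals passing through several junctions.
    """
    n_steps = int(t_stop / dt)
    v = np.zeros((n_steps, n_cells))
    for t in range(1, n_steps):
        prev = v[t - 1]
        # Current through gap junctions from the left and right neighbors
        left = np.concatenate(([prev[0]], prev[:-1]))
        right = np.concatenate((prev[1:], [prev[-1]]))
        i_gap = g_gap * ((left - prev) + (right - prev))
        i_ext = np.zeros(n_cells)
        i_ext[0] = i_inj                      # current injected into cell 0 only
        v[t] = prev + dt * (-g_leak * prev + i_gap + i_ext)
    return v

v = simulate_chain()
print("steady-state depolarization along the chain:", np.round(v[-1], 3))
```

Each additional junction in the path adds another RC-like stage, so a response recorded several cells away is both attenuated and filtered with higher-order kinetics.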


Coupling between the cells can create the observed broadening of any individual cell's receptive field. Each individual VS cell responds to the set of photoreceptor cells surveyed by that VS cell's dendrites. The dendrites of neighboring VS cells survey nearby patches of photoreceptor cells, and thus sample neighboring regions of visual space. If the cells are coupled, the receptive field of any individual cell expands to include the region of visual space monitored by the neighboring (coupled) VS cells.

For the most distant VS cells, and for other (horizontal system) cells of the lobula plate, the coupling relationships were more complicated. The authors studied the VS7/8 cell in detail. This cell responds broadly to downward motion, adding to the direct output of local direction-selective cells a graded excitation from VS6 and VS9, neighboring cells that are also downward-sensitive but cover additional visual space. The VS7/8 cell is also tuned to two more directions. From a neuron called HSN (which conveys information about horizontal motion), the VS7/8 cell gains sensitivity to horizontal motion, this time via a chain comprising an excitatory synapse from HSN on an unknown cell X
