Spiking Neural Networks for Reconfigurable POEtic Tissue

Jan Eriksson4, Oriol Torres3, Andrew Mitchell1*, Gayle Tucker2, Ken Lindsay2, David Halliday1, Jay Rosenberg2, Juan-Manuel Moreno3, Alessandro E. P. Villa4,5

1 University of York, York, UK *[email protected]
2 University of Glasgow, UK
3 Technical University of Catalunya, Barcelona, Spain
4 Lab. of Neuroheuristics, University of Lausanne, Lausanne, Switzerland
5 University Joseph-Fourier, Grenoble, France

Abstract. Vertebrate and most invertebrate organisms interact with their environment through processes of adaptation and learning. Such processes are generally controlled by complex networks of nerve cells, or neurons, and their interactions. Neurons are characterized by all-or-none discharges (the spikes), and the time series corresponding to the sequences of these discharges (the spike trains) carry most of the information used for intercellular communication. This paper describes biologically inspired spiking neural network models suitable for digital hardware implementation. We consider bio-realism, hardware friendliness, and performance as factors which influence the ability of these models to integrate into a flexible computational substrate inspired by evolutionary, developmental and learning aspects of living organisms. Both software and hardware simulations have been used to assess and compare the different models in order to determine the most suitable spiking neural network model.

1   Introduction

Simple spiking neuron models are well suited to hardware implementation. The particular aspects which are hardware friendly include the ability to translate the output of spiking models to a 1-bit digital representation, and the ability to completely specify the models with a few key parameters. Here we describe some recent research into Spiking Neural Networks (SNN) which can be realised using the POEtic tissue: a flexible computational substrate inspired by evolutionary (phylogenesis), developmental (ontogenesis), and learning (epigenesis) aspects of living organisms. The POEtic tissue follows a 2-D cellular structure consisting of three layers: a genotype plane, a configuration plane, and a phenotype plane, whose combination allows multi-cellular electronic organisms to be easily evolved and grown. Where the tissue is required to learn, the phenotype will initially be an SNN, although other learning organisms may also be developed on the tissue. A more detailed description of the POEtic project can be found in [1].

Several bio-inspired models capable of endowing the tissue with the ability to learn have been studied [2], ranging from a complex biophysical representation based on modified Hodgkin & Huxley dynamics to a greatly simplified integrate-to-threshold-and-fire model. To balance the conflicting requirements of hardware resources and bio-realism, we concentrate on an intermediate level of complexity [3]. Biophysical and computational aspects of spiking neuron models have been extensively studied [4]. Many SNN models now exist which are suitable for implementation in digital hardware, e.g., [5],[6], and hardware platforms continue to be developed for their real-time parallel simulation, e.g., [7],[8]. However, to our knowledge, no SNN model has yet been developed for implementation on a digital hardware device in a form suitable for combination with ontogenetic [9] and phylogenetic [10] methods.

2   Model Description

In this section we describe the three principal aspects of SNN models: the method used to model the internal membrane dynamics of an individual neuron; the learning dynamics (which in this case are built into the individual neuron); and the topological rules by which networks of these neuronal elements are connected.

2.1   Membrane Dynamics

The general form of the linear differential equation for a spiking neuron is given in equation (1):

$$C_m \frac{dV_i(t)}{dt} = -G_m\bigl(V_i(t) - V_{rest}\bigr) - \sum_{j=0}^{n} G_j(t)\bigl(V_i(t) - V_j\bigr) \qquad (1)$$

where Vi(t) is the membrane potential of cell i at time t, Cm and Gm are the membrane capacitance and conductance, respectively, Vrest is the resting potential of the cell, and Gj(t) and Vj are the time-dependent conductance change and reversal potential for the n synaptic inputs. The membrane potential is compared with a threshold value to determine if the neuron outputs a spike. After each output spike the membrane potential is instantaneously reset to the resting potential, and no further output spikes are generated for a fixed refractory period. Equation (1) can be rearranged to give

$$\frac{dV_i(t)}{dt} = -f_i(t)\,V_i(t) + h_i(t) \qquad (2)$$

where $f_i(t) = \bigl(G_m + \sum_j G_j(t)\bigr)/C_m$ and $h_i(t) = \bigl(G_m V_{rest} + \sum_j G_j(t)V_j\bigr)/C_m$, the solution of which can be approximated, for a time step $\Delta t$, as

$$V_i(t+\Delta t) = \bigl(V_i(t) - h_i(t)/f_i(t)\bigr)\exp\bigl(-f_i(t)\,\Delta t\bigr) + h_i(t)/f_i(t) \qquad (3)$$

The quantity 1/fi(t) can be regarded as the time constant of the cell. Ignoring the effects of the synaptic inputs, this can be approximated by the constant τm = Cm/Gm. Thus

$$V_i(t+\Delta t) = V_{rest} + k_m\bigl(V_i(t) - V_{rest}\bigr) + (1 - k_m)\sum_j I_{ij}(t) \qquad (4)$$

where km = exp(–∆t/τm) and ∑Iij(t) approximates the change in membrane potential resulting from synaptic (and any other) input currents. We refer to km as the kinetic membrane constant. This allows the exponential decay of the membrane to be approximated using a simple algebraic expression. An alternative approach is described in section 4, dealing with hardware aspects. A further simplification results from setting Vrest to zero to give the final form of the spiking neuron model. The firing condition is

$$\text{if } V_i(t) \geq \theta \text{ then } S_i(t) = 1; \qquad \text{if } V_i(t) < \theta \text{ then } S_i(t) = 0 \qquad (5)$$

where Si(t) is the binary-valued output of the model and θ is the (fixed) firing threshold of the cell; a value of 1 for the variable Si signals the presence of an output spike. The update equation used in combination with the firing condition is

$$\text{if } S_i(t) = 1 \text{ then } V_i(t+\Delta t) = 0; \qquad \text{if } S_i(t) = 0 \text{ then } V_i(t+\Delta t) = k_m V_i(t) + (1 - k_m)\sum_j I_{ij}(t) \qquad (6)$$
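As an illustration only, the discrete-time update of equations (4)-(6) can be written as a few lines of software. The parameter values below are placeholders, not those used in the reported simulations, and the refractory period is not modelled.

```python
def update_neuron(V, input_sum, k_m=0.9, theta=1.0):
    """One time step of the simplified spiking neuron (eqs. 4-6), with V_rest = 0.

    V          -- membrane potential Vi(t)
    input_sum  -- summed synaptic input, sum_j Iij(t)
    k_m        -- kinetic membrane constant, exp(-dt/tau_m)
    theta      -- fixed firing threshold
    Returns (Si(t), Vi(t + dt)).
    """
    spike = 1 if V >= theta else 0                     # firing condition, eq. (5)
    if spike:
        V_next = 0.0                                   # reset after an output spike, eq. (6)
    else:
        V_next = k_m * V + (1.0 - k_m) * input_sum     # leaky integration, eq. (6)
    return spike, V_next
```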

2.2   Learning Dynamics

Although several attempts at applying conventional artificial neural network learning algorithms (e.g., backpropagation) to SNNs exist, e.g., [11], for the purposes of the POEtic project we required a simple unsupervised learning method which would allow the adaptation of weights based on pre- and post-synaptic activity. A recent discussion of several such spike-timing dependent synaptic plasticity (STDP) learning rules can be found in [12]. Whilst these rules offer fast and unsupervised adaptation of synaptic weights, the resulting networks are often prone to memory loss in noisy environments. A method recently proposed by Fusi [13],[14] aims to address this problem by discretising the synaptic weight variable. It has also been shown via simulation that this method reduces the computational load per neuron for models using plastic synaptic weights [15],[16]. An added advantage of this technique, from our perspective, is its increased suitability for digital hardware implementation.

The learning rule proposed by Fusi uses an internal learning state variable for each synaptic input which functions as a short-term memory; the value of this variable is used to discretise the synaptic weight into a predetermined number of states. We have chosen to modify the STDP method used by Fusi for updating the synaptic weight, so that the state variable is increased if the post-synaptic spike occurs after the pre-synaptic spike, and is decreased if the opposite temporal relation holds. Each synapse has two main variables for the connection from cell j to cell i: an activation variable Aij and a learning state Lij. The activation variable depends on the class of cells i and j, i.e., inhibitory or excitatory. The synapses between excitatory neurons have activation variables which can vary over a chosen set of discrete states, e.g., [0,1,2,4]; all others have a constant Aij = 1. The learning state, Lij, is governed by the following equation:

$$L_{ij}(t+1) = k_{act}\,L_{ij}(t) + \bigl(YD_j(t)\,S_i(t) - YD_i(t)\,S_j(t)\bigr) \qquad (7)$$

where kact is a kinetic activation constant used to approximate an exponential decay of Lij(t) with time constant τact. The YD variables define a time window (with exponentially decaying effect) after each pre- and post-synaptic spike during which modification of the learning state Lij(t) is possible. For post-synaptic neuron i, YD is modelled as

$$\text{if } S_i(t) = 1 \text{ then } YD_i(t+1) = YD_{MAX}; \qquad \text{if } S_i(t) = 0 \text{ then } YD_i(t+1) = k_{learn}\,YD_i(t) \qquad (8)$$

where klearn is a kinetic learning constant approximating an exponential decay with time constant τlearn. From equation (7) we can see that, in the absence of any spiking activity, the learning state continually decays. However, if the pre-synaptic cell fires just before a post-synaptic cell, the learning state will increase by YDj(t), whereas if the pre-synaptic cell fires just after the post-synaptic cell, the learning state will decrease by YDi(t). The respective values of the YD variables reflect the level of coincidence of the pre- and post-synaptic firings, as defined by equation (8). The learning state is used to determine the discrete level of activation of the synapse, AEXC->EXC. The number of discrete activation states is a fixed parameter of the model, and may be two [0,1], three [0,1,2], or more. Note that, according to equation (7), if there is no activity at the synapse then Lij drifts back towards 0 (whether its value is positive or negative). If Lij reaches a positive threshold (Lij > Lth) and Aij is not already equal to Amax, then Aij is incremented and the value of Lij is reset to Lij – (2 x Lth). Likewise, when Lij reaches a negative threshold (Lij < –Lth) and Aij is not already equal to 0, then Aij is decremented and the value of Lij is reset to Lij + (2 x Lth). The input Iij in equation (4) is then given by

$$I_{ij} = W_{ij} \times A_{ij} \qquad (9)$$

where Wij is a constant multiplier used for generating the final weight added to the membrane potential on receiving a given input.
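A minimal software sketch of the complete learning rule (equations 7-9) follows. The function and parameter names, and the default values shown, are illustrative assumptions rather than values taken from the model described above.

```python
def update_trace(YD, spike, k_learn=0.9, YD_max=1.0):
    """Coincidence window trace, eq. (8): reset to YD_max on a spike, decay otherwise."""
    return YD_max if spike else k_learn * YD


def update_synapse(L, A, YD_pre, YD_post, S_pre, S_post,
                   k_act=0.95, L_th=1.0, A_max=3, W=0.25):
    """One time step of the synaptic learning rule, eqs. (7) and (9).

    L, A             -- learning state Lij and discrete activation Aij
    YD_pre, YD_post  -- coincidence traces YDj and YDi
    S_pre, S_post    -- pre- and post-synaptic spikes Sj(t) and Si(t) (0 or 1)
    Returns (new L, new A, synaptic input Iij).
    """
    # Learning state: decay plus timing-dependent increase/decrease, eq. (7)
    L = k_act * L + (YD_pre * S_post - YD_post * S_pre)

    # Discretise into activation states when a threshold is crossed
    if L > L_th and A < A_max:
        A += 1
        L -= 2 * L_th
    elif L < -L_th and A > 0:
        A -= 1
        L += 2 * L_th

    return L, A, W * A                                 # Iij = Wij * Aij, eq. (9)
```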

2.3   Network Topology

In this section we consider the method of network creation. While evolutionary techniques [10] may be applied to determine the parameters of any part of this model, ontogenesis [9] is essentially concerned with the topology. For our purposes this process has been automated using a set of topological rules. These rules define the size of the network, the method and depth of connectivity, and the proportion of each neuronal class. In the experiments which follow, the network contained 50 x 50 neurons, N = [0,2499], 80% of which were excitatory and 20% inhibitory, in line with the generally accepted ratio for the neo-cortex [17]. The excitatory cells may connect to other cells with differing synaptic weight values (see section 2.2 above). In the simulations described below, each cell makes connections to the other cells within a 5 x 5 neighbourhood, i.e., 24 other cells.
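The topological rules above can be expressed as a short construction routine. The sketch below builds the 50 x 50 grid with an 80/20 excitatory/inhibitory split and 5 x 5 neighbourhood connectivity; the random placement of the two classes and the treatment of grid borders (clipping rather than wrap-around) are assumptions not specified in the text.

```python
import numpy as np

def build_network(rows=50, cols=50, p_exc=0.8, radius=2, seed=0):
    """Assign neuron classes and build a 5x5-neighbourhood connection list.

    Returns (is_excitatory, connections), where connections is a list of
    (pre, post) index pairs and neurons are indexed 0 .. rows*cols - 1.
    """
    rng = np.random.default_rng(seed)
    n = rows * cols
    is_excitatory = rng.random(n) < p_exc             # roughly 80% excitatory cells

    connections = []
    for pre in range(n):
        r, c = divmod(pre, cols)
        for dr in range(-radius, radius + 1):
            for dc in range(-radius, radius + 1):
                if dr == 0 and dc == 0:
                    continue                          # no self-connection
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    connections.append((pre, rr * cols + cc))
    return is_excitatory, connections
```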

3   Software Simulation Results

In this section we explore the potential of the SNN model described previously to mimic experimental results of neuroscientific studies. We first study the behaviour of the network with static synaptic weights; then, with learning activated, we look at its ability to perform a visual discrimination task. The network was first initialised (t = 0) with all neuronal states set to zero (Si(0) = 0) and all membrane levels set to zero (Vi(0) = 0). A brief pulse stimulus, with sufficient amplitude to fire the neuron, was then input to all excitatory cells in the network. Sustained activity was then observed within the network, achieving low average network activities (e.g., 1 spike every 96 sampling intervals) without causing global oscillations. A trace recorded from one of the output cells can be seen in figure 1. This simulation without learning demonstrates that a simple network can hold information indefinitely through persistent activity.

Fig. 1. A five-second recording of the activity of a single cell with a mean interval of 40 samples.

The next stage was to activate learning, in order to investigate the possibility of exploiting different activity patterns as memory, depending on the input stimuli. We apply the learning rule described earlier to the same network used in the previous experiment. The network is initialised at t = 0 as before. In addition to the input stimuli, a low-level Gaussian noise input is now also applied at each time step to every cell in the network. The stimuli were designed to simulate sinusoidal gratings moving horizontally at different speeds over a visual receptive field. Training of the network then involved moving the stimulus across the network from left to right at speed = 1 (i.e., 50 neurons per second) for a period of 10 seconds.
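The training stimulus can be sketched as follows: a drifting sinusoidal grating, sampled column by column over the 50 x 50 grid, with low-level Gaussian noise added at every time step. The amplitude, noise level, spatial period and time step used here are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

def grating_input(step, rows=50, cols=50, speed=1, amplitude=0.5,
                  noise_sd=0.05, dt=0.001, rng=np.random.default_rng(0)):
    """Input to every cell at a given time step (flattened rows*cols array).

    The grating has one spatial period across the grid and drifts to the
    right at speed * 50 columns per second.
    """
    shift = speed * 50.0 * step * dt                   # drift, expressed in columns
    column_drive = amplitude * np.sin(2.0 * np.pi * (np.arange(cols) - shift) / cols)
    drive = np.tile(column_drive, (rows, 1))           # the same drive for every row
    noise = noise_sd * rng.standard_normal((rows, cols))
    return (drive + noise).ravel()
```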

Figure 2 shows the results following training, for applied stimulus speeds of 1 and 2 respectively. In addition to the response of a single excitatory neuron, a "local" activity was also measured. The local activity corresponds to the average activity in a narrow strip perpendicular to the movement of the grating. These results demonstrate that the network has been successfully trained to discriminate the direction of a stimulus moving across the network.

Fig. 2. The response of a single excitatory cell (top trace) and local activity (bottom trace) for input stimuli moving at two speeds, in both forward and reverse directions, across the network.
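For reference, the "local" activity measure described above (the average activity within a narrow strip perpendicular to the grating motion) might be computed as below; the strip width is an arbitrary choice, not a value given in the text.

```python
import numpy as np

def local_activity(spikes, centre_col, half_width=2, rows=50, cols=50):
    """Mean activity in a narrow vertical strip of the 50x50 grid.

    spikes     -- flat 0/1 array of length rows*cols for one time step
    centre_col -- centre column of the strip
    half_width -- strip half-width in columns (assumed value)
    """
    grid = np.asarray(spikes).reshape(rows, cols)
    lo = max(0, centre_col - half_width)
    hi = min(cols, centre_col + half_width + 1)
    return grid[:, lo:hi].mean()
```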

The effect of reducing the strength of the stimulus is illustrated in figure 3. Here only the forward-moving stimulus triggered any response, relying on the presence of additional input noise to provoke the discharge of several neurons. This result suggests that the trained network is also sensitive to an input stimulus of varying amplitude.


Fig. 3. Recordings from a single cell and associated local activity as a result of applying low-level stimulation (original amplitude/4) moving at a faster speed (200 neurons/sec).

4   Digital Hardware Implementation

Several aspects of the previously described model remain under development. We present here the latest implementation of several functional units following the specifications defined by the theoretical model. A block diagram of a neuron with two inputs is shown in figure 4. From this diagram it can be seen that the inputs are added and the result is stored in a register. At the following clock event this result is added, using the second adder, to the current membrane potential fed back from the decay function block. The result of this addition gives the new membrane potential, now found at the output of the decay function block. However, if the result of the first addition is zero, i.e., no inputs are present, then the membrane potential is not updated; instead the decay function becomes active, implementing the leaky membrane. Equation (4) represents a common approach to approximating exponential decay. However, the decay function currently used is an approach more suited to hardware implementation, approximating exponential decay through the control of two parameters M and STEP. With the membrane potential held in a shift register, the algorithm is implemented as follows:

    m = Most_significant_1(membrane) + 1    (i)
    m = min(m, M) - 1                       (ii)
    delta_mem = membrane >> m               (iii)
    iSTEP = (M - m) * 2^STEP                (iv)

• The first step is to determine the Most Significant Bit (MSB) of the membrane potential, which is stored in the variable m (i).
• If the MSB is set then the membrane potential is decreased by one and steps (ii) and (iii) are bypassed. If the MSB is not set then the minimum of M and the MSB of the membrane potential, less one, is calculated (ii).
• The shift register holding the membrane potential is then shifted down by m bits to give delta_mem (iii).
• Finally, the time delay (iSTEP) until implementing the new value is calculated (iv).
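A software model of this shift-based decay, written to mirror steps (i)-(iv) as interpreted above, is sketched below. The special case of very small membrane values mentioned in the first bullet is handled only by a simple guard, and the default values of M and STEP are assumptions.

```python
def decay_step(membrane, M=8, STEP=2):
    """One decay update of the leaky membrane, following steps (i)-(iv).

    membrane -- current membrane potential (non-negative integer)
    M, STEP  -- parameters shaping the approximated exponential decay
    Returns (delta_mem, delay): delta_mem is subtracted from the membrane,
    and delay (iSTEP) is the number of clock cycles before the next update.
    """
    if membrane <= 0:
        return 0, M * (2 ** STEP)          # nothing left to decay (guard, not in the text)
    m = membrane.bit_length()              # (i)   position of the most significant 1, plus 1
    m = min(m, M) - 1                      # (ii)  clamp to M, then subtract one
    delta_mem = membrane >> m              # (iii) membrane shifted down by m bits
    delay = (M - m) * (2 ** STEP)          # (iv)  delay before applying the next decrement
    return delta_mem, delay
```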

Fig. 4. A block diagram representation of the neuron model implementation prototype for digital hardware

The output of the decay function block is continually compared with the firing threshold. If the firing threshold is surpassed then the output of the comparator goes high, producing an output spike. The output is also fed back to reset the decaying membrane potential, implementing an approximate refractory period. The second part of the implementation consists of the definition of a memory for each synapse, which works according to the learning rules explained earlier. Referring to figure 5, the counter holds a variable 'xmem'. Upon receiving a spike at the synaptic input, a new value of 'xmem' is loaded into the counter: if the membrane potential of the neuron is greater than mem_threshold (note: not the firing threshold), xmem(t+1) = xmem(t) + ∆x, otherwise xmem(t+1) = xmem(t) - ∆x. However, if no spike is present at the synaptic input the counter increases or decreases its value depending upon its current value, i.e., if xmem is in the range [0,7] it will decrease towards zero, otherwise it will increase towards its maximum value of 15. This implementation represents a simplified version of the methods discussed in section 2.2, where four separate activation states were described. Here, xmem represents the learning variable Lij, with xmem = [0,7] and xmem = [8,15] representing the two activation states Aij = 0 and Aij = 1 respectively. Simple methods for implementing further activation states are currently under investigation.
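The counter behaviour just described can be summarised by the following sketch. The boundary between the two activation states and the drift towards 0 or 15 follow the text; the size of ∆x and the handling of the range limits are assumptions.

```python
def update_xmem(xmem, spike_in, membrane, mem_threshold, dx=4):
    """Synaptic memory counter of figure 5 (sketch).

    On an input spike, xmem jumps by +dx or -dx depending on whether the
    post-synaptic membrane potential exceeds mem_threshold; with no input
    spike it drifts towards the nearer stable end of its 4-bit range.
    Returns (new xmem, activation state A in {0, 1}).
    """
    if spike_in:
        xmem += dx if membrane > mem_threshold else -dx
    elif xmem <= 7:
        xmem -= 1                          # lower half of the range: drift towards 0
    else:
        xmem += 1                          # upper half of the range: drift towards 15
    xmem = max(0, min(15, xmem))           # keep the counter within its 4-bit range

    activation = 1 if xmem >= 8 else 0     # xmem in [0,7] -> A = 0, [8,15] -> A = 1
    return xmem, activation
```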

Fig. 5. A block diagram representation of the digital circuit used to implement synaptic weight storage and adaptation

Simulations of this implementation have also been completed, showing its correct operation as governed by the theoretical model; for more details see [2].

5   Discussion

The model presented has been developed following stringent guidelines, tested first using a software model and finally presented as a hardware implementation. Neurophysiological studies of neuronal correlates of working (short-term) memory show that the maintenance of a stimulus in working memory is accompanied by persistent activity (enhanced relative to spontaneous activity) in a selective subset of neurons. A long-standing hypothesis has been that persistent activity is a collective property of a network, and that it is maintained by recurrent excitatory feedback [18]. It is difficult to achieve sustained activity without high firing rates. In a network of the type we have envisaged, depending on the selected synaptic strengths, the activity obtained generally consists of synchronous oscillations. Only if external input (noise) is applied will other sustained patterns of activity appear. Our simulations indicate that various patterns of sustained activity without external input can be obtained if some modifications are made to the typical integrate-and-fire model. The simulation results do not represent an attempt at an exhaustive analysis of the model, but an exemplification of some possible uses of the model once combined with the ontogenetic and phylogenetic mechanisms. The theoretical model and simulation results presented have been used to develop a prototype for digital hardware implementation. This implementation is currently under development and, once complete, should fully respect the theoretical model.

6   Acknowledgements

This project is funded by the Future and Emerging Technologies programme (IST-FET) of the European Community, under grant IST-2000-28027 (POETIC). The information provided is the sole responsibility of the authors and does not reflect the Community's opinion. The Community is not responsible for any use that might be made of data appearing in this publication. The Swiss participants in this project are partially supported under grant OFES 00.0529-2 by the Swiss government.

References

1. A. M. Tyrrell, E. Sanchez, D. Floreano, G. Tempesti, D. Mange, J.-M. Moreno, J. Rosenberg, A. Villa, POEtic Tissue: An Integrated Architecture for Bio-Inspired Hardware, submitted to the Fifth International Conference on Evolvable Systems (ICES03). http://www.poetictissue.org
3. S. L. Hill, A. E. P. Villa, Dynamic transitions in global network activity influenced by the balance of excitation and inhibition, Network: Comput. Neural Syst. 8, pp. 165-184, 1997.
4. W. Maass, C. M. Bishop, Pulsed Neural Networks, MIT Press, 1998.
5. C. Christodoulou, G. Bugmann, J. G. Taylor, T. G. Clarkson, An Extension of the Temporal Noisy-Leaky Integrator Neuron and its Potential Applications, Proceedings of the International Joint Conference on Neural Networks, Vol. 3, pp. 165-170, 1992.
6. S. Maya, R. Reynoso, C. Torres, M. Arias-Estrada, Compact Spiking Neural Network Implementation in FPGA, in R. W. Hartenstein, H. Grünbacher (Eds.): FPL 2000, LNCS 1896, pp. 270-276, 2000.
7. A. Jahnke, U. Roth, H. Klar, A SIMD/Dataflow Architecture for a Neurocomputer for Spike-Processing Neural Networks (NESPINN), MicroNeuro'96, pp. 232-237, 1996.
8. G. Hartmann, G. Frank, M. Schafer, C. Wolff, SPIKE 128k - An Accelerator for Dynamic Simulation of Large Pulse-Coded Networks, Proceedings of the 6th International Conference on Microelectronics for Neural Networks, Evolutionary & Fuzzy Systems, pp. 130-139, 1997.
9. G. Tempesti, D. Roggen, E. Sanchez, Y. Thoma, R. Canham, A. Tyrrell, Ontogenetic Development and Fault Tolerance in the POEtic Tissue, submitted to the Fifth International Conference on Evolvable Systems (ICES03).
10. D. Roggen, D. Floreano, C. Mattiussi, A Morphogenetic System as the Phylogenetic Mechanism of the POEtic Tissue, submitted to the Fifth International Conference on Evolvable Systems (ICES03).
11. S. M. Bohte, J. N. Kok, H. La Poutre, SpikeProp: Backpropagation for Networks of Spiking Neurons, Proceedings of the European Symposium on Artificial Neural Networks, pp. 419-424, 2000.
12. P. D. Roberts, C. C. Bell, Spike-Timing Dependent Synaptic Plasticity: Mechanisms and Implications, http://www.proberts.net/RESEARCH.HTM, to appear in Biological Cybernetics.
13. S. Fusi, Long term memory: encoding and storing strategies of the brain, Neurocomputing 38-40, pp. 1223-1228, 2001.
14. S. Fusi, M. Annunziato, D. Badoni, A. Salamon, D. J. Amit, Spike-Driven Synaptic Plasticity: Theory, Simulation, VLSI Implementation, Neural Computation 12, pp. 2227-2258, 2000.
15. M. Mattia, P. Del Giudice, Efficient Event-Driven Simulation of Large Networks of Spiking Neurons and Dynamical Synapses, Neural Computation 12, pp. 2305-2329, 2000.
16. P. Del Giudice, M. Mattia, Long and short term synaptic plasticity and the formation of working memory: a case study, Neurocomputing 38-40, pp. 1175-1180, 2001.
17. R. Douglas, K. Martin, Neocortex, Chapter 12 of The Synaptic Organisation of the Brain (G. M. Shepherd, Ed.), Oxford University Press, 1998.
18. A. E. P. Villa, Empirical Evidence about Temporal Structure in Multi-unit Recordings, in: Time and the Brain (R. Miller, Ed.), Conceptual Advances in Brain Research, vol. 2, Harwood Academic Publishers, pp. 1-51, 2000.