Spiking Neural Network with RRAM: Can We Use It for Real-World Application?

Tianqi Tang1, Lixue Xia1, Boxun Li1, Rong Luo1, Yiran Chen2, Yu Wang1, Huazhong Yang1
1 Dept. of E.E., Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University, Beijing, China
2 Dept. of E.C.E., University of Pittsburgh, Pittsburgh, USA
Email: [email protected]

Abstract—The spiking neural network (SNN) provides a promising solution to drastically promote the performance and efficiency of computing systems. Previous work on SNN has mainly focused on increasing the scalability and level of realism of neural simulations, while few efforts support practical cognitive applications with acceptable performance. At the same time, the efficiency of SNN systems built on traditional CMOS technology is also unsatisfactory. In this work, we explore different training algorithms of SNN for real-world applications, and demonstrate that the Neural Sampling method is much more effective than Spike Timing Dependent Plasticity (STDP) and the Remote Supervision Method (ReSuMe). We also propose an energy-efficient implementation of SNN with the emerging metal-oxide resistive random access memory (RRAM) devices, which includes an RRAM crossbar array working as the network synapses, an analog design of the spiking neuron, and an input encoding scheme. A parameter mapping algorithm is also introduced to configure the RRAM-based SNN. Simulation results illustrate that the system achieves 91.2% accuracy on the MNIST dataset with an ultra-low power consumption of 3.5mW. Moreover, the RRAM-based SNN system demonstrates great robustness: a 20% process variation costs less than 1% accuracy, and a 20% signal fluctuation costs about 2% accuracy. These results suggest that the RRAM-based SNN is well suited to physical realization.

I. INTRODUCTION

The explosion of data generates great demands for more powerful platforms with higher processing speed, lower energy consumption, and more intelligent mining algorithms. However, the scaling of conventional CMOS technology is approaching its limit, making it difficult for CMOS-based computing systems to achieve considerable improvements from device scaling [1], and the well-known "memory wall" problem increasingly limits the performance of the traditional Von Neumann architecture [2]. Emerging nano-devices and novel computer architectures have therefore attracted substantial research interest for their potential to boost performance and efficiency. The spiking neural network (SNN) provides a promising solution to drastically promote the performance and efficiency of computing systems. Abstracted from the actual neural system, SNN processes sparse time-encoded neural signals in parallel [3]. This special architecture not only makes SNN a promising tool for cognitive tasks, such as object detection


and speech recognition, but also inspires new computational paradigms beyond the Von Neumann architecture [4], [5]. Many recent works on SNN focus on increasing the scalability and level of realism of neural simulations [6], [7]. These techniques are able to model thousands to billions of neurons in biological real time and provide promising tools to study the brain. However, few of them support practical cognitive applications, such as handwritten digit recognition. In other words, there remains a serious lack of study of effective SNN training algorithms, especially for real-world applications, to achieve acceptable cognitive performance. On the other hand, based on traditional CMOS technology, the energy efficiency of these brain simulators is also unsatisfactory. For example, IBM consumes 1.4MW of power to simulate the cat cortex with 10^9 neurons and 10^13 synapses on the Blue Gene supercomputer cluster, which is about five orders of magnitude higher than the brain (~20W) [8]. An energy-efficient implementation of SNN is therefore still highly demanded. The innovations of device technology offer great opportunities for the efficient implementation of SNN and radically different forms of architecture [9]. The metal-oxide resistive random access memory (RRAM) device (or the memristor) is one of these promising devices [10]. The RRAM device enjoys ultra-high integration density and enables a large number of signal connections within a small circuit size. More importantly, by naturally transferring the weighted combination of input signals to output voltages in parallel, the RRAM crossbar array is able to merge computation and memory together as the brain does, and provides an ultra-efficient implementation of matrix-vector multiplication, which is one of the most important operations in neural network models [11]. Many studies have explored the potential of RRAM-based neural computing architectures beyond Von Neumann. For example, a low-power on-chip neural approximate computing system has been demonstrated with a power efficiency of more than 400 GFLOPS/W [12]. These works demonstrate the great potential of realizing SNN with RRAM devices, but the concrete implementation still requires further study. In this work, we propose an RRAM-based energy-efficient implementation of SNN for practical cognitive tasks. The contributions of this paper include:





Fig. 1. (a) Physical model of the HfOx-based RRAM. The resistance of the RRAM device is determined by the tunneling gap, which evolves due to the field- and thermally-driven oxygen ion migration. (b) Structure of the RRAM crossbar array.

1) We compare different models of spiking neural networks for practical cognitive tasks, including Spike Timing Dependent Plasticity (STDP), the Remote Supervision Method (ReSuMe), and the latest Neural Sampling learning scheme. We demonstrate that the STDP and ReSuMe schemes can NOT provide acceptable cognitive performance, while the Neural Sampling method is promising for real-world applications.
2) We propose an RRAM-based implementation of the spiking neural network. The implementation consists of an RRAM crossbar array working as the network synapses, an RRAM-based design of the spiking neuron, an input encoding scheme, and an algorithm to configure the RRAM-based spiking neural network.
3) Key design parameters and physical constraints, such as process variation and signal fluctuation, are extracted and studied in the paper. Simulation results illustrate that the system achieves 91.2% accuracy on the MNIST dataset [13] with a power consumption of only 3.5mW. Moreover, the RRAM-based SNN is robust to a large process variation of 20% with less than 1% accuracy decrease, and to a 20% signal fluctuation with about 2% accuracy decrease. These results demonstrate that the RRAM-based SNN is well suited to physical realization.

The rest of this paper is organized as follows: Section II provides the related background knowledge and Section III studies the training algorithms of spiking neural networks. The RRAM-based SNN implementation is introduced in Section IV. Section V presents a case study on the handwritten digit recognition task. Section VI concludes this work.

II. PRELIMINARIES

A. RRAM Device Basics

The RRAM device is a passive two-terminal element with variable resistance states. Fig. 1(a) shows a 3D filament model of the HfOx-based RRAM [14]. The conductance of the device depends exponentially on the tunneling gap. When a large voltage is applied on the device, the tunneling gap changes and so does the resistance of the RRAM device.
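The exponential dependence of the conductance on the tunneling gap can be illustrated with a minimal sketch of a commonly used gap-based conduction model of the form I = I0 · exp(−g/g0) · sinh(V/V0). The functional form follows HfOx filament models in the literature, but the constants below are illustrative placeholders rather than the calibrated values of the device model in [14].

```python
import numpy as np

# Illustrative constants; NOT the calibrated values of the model in [14]
I0 = 1e-3     # current prefactor (A)
g0 = 0.25e-9  # characteristic tunneling-gap length (m)
V0 = 0.25     # characteristic voltage (V)

def rram_current(gap, v):
    """Read current of one cell: exponential in the tunneling gap,
    sinh-shaped in the applied voltage (roughly linear for small v)."""
    return I0 * np.exp(-gap / g0) * np.sinh(v / V0)

def rram_conductance(gap, v_read=0.1):
    """Small-signal conductance at a small read voltage."""
    return rram_current(gap, v_read) / v_read

# A larger gap gives an exponentially smaller conductance
for gap in (0.5e-9, 1.0e-9, 1.5e-9):
    print(f"gap = {gap * 1e9:.1f} nm -> G = {rram_conductance(gap):.3e} S")
```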

As shown in Fig. 1(b), the RRAM devices can be used to build a cross-point structure, also known as the RRAM crossbar array. The relationship between the input voltage vector (Vi) and the output voltage vector (Vo) can be expressed as follows [11]:

V_{o,j} = Σ_k c_{k,j} · V_{i,k}    (1)

where k (k = 1, 2, ..., N) and j (j = 1, 2, ..., M) are the indices of the input and output ports, and the matrix parameter c_{k,j} can be represented by the conductance of the RRAM device (g_{k,j}) and the conductance of the load resistor (g_s) as:

c_{k,j} = g_{k,j} / (g_s + Σ_{l=1}^{N} g_{k,l})    (2)
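A short sketch makes Eqs. (1)-(2) concrete: given the device conductance matrix G and the load conductance g_s, the crossbar computes a matrix-vector product between the input voltages and the effective weights c_{k,j}. The sizes and values below are illustrative.

```python
import numpy as np

def crossbar_output(G, gs, v_in):
    """Ideal crossbar of Eqs. (1)-(2).
    G: (N, M) device conductances, gs: load conductance, v_in: (N,) input voltages.
    Returns V_o with V_o,j = sum_k c_{k,j} * V_i,k."""
    # Eq. (2) as written: c_{k,j} = g_{k,j} / (gs + sum_l g_{k,l})
    C = G / (gs + G.sum(axis=1, keepdims=True))
    return v_in @ C

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-3, size=(4, 3))   # conductances between g_OFF and g_ON
v_in = np.array([0.10, 0.00, 0.20, 0.05])  # input voltage vector (V)
print(crossbar_output(G, gs=1e-3, v_in=v_in))
```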

Therefore, the RRAM crossbar array is able to store a matrix through the conductance states of its RRAM devices, and to perform analog matrix-vector multiplication efficiently by merging computation and memory together.

B. Neurons and Spiking Neural Network

The spiking neural network system is made up of layers of spiking neurons and the synaptic weight matrices between them. The structure of the whole system is shown in Fig. 2, with an image recognition task as a demonstration. Each input neuron represents a pixel value of the image, while each output neuron represents one class the image might be labeled with. First, for each input channel, an encoding module transforms the numerical value x (e.g., the pixel value of an image) into a 0/1 spike train X(t). The spike train then propagates through the synaptic weight matrix W via a matrix-vector multiplication. The result WX(t) serves as the input Vin(t) of the spiking neuron and is accumulated by the state variable V(t). Once V(t) exceeds the threshold Vth, the neuron sends a spike to the next synaptic weight matrix and V(t) is reset to Vreset. The neuron follows the Leaky Integrate-and-Fire (LIF) model [15] and can be described as:

V(t) = β · V(t−1) + Vin(t),              when V < Vth
V(t) = Vreset (and a spike is emitted),  when V ≥ Vth    (3)

where V(t) is the state variable and β is the leaky parameter; Vth is the threshold and Vreset is the reset value. After the spikes pass through all the synaptic weight crossbars and spiking neurons, a counter calculates the spike number of each neuron in the output layer, and a comparator labels the image with the class whose output neuron has the largest spike count. In Section IV, we make the hardware mapping to build the RRAM-based spiking neural network system.
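The forward pass described above can be summarized in a compact sketch based on Eq. (3): spike trains pass through a weight matrix, are accumulated by the leaky state variable, and the predicted class is the output neuron with the largest spike count. Network sizes, weights, and parameters here are illustrative, not the trained values used later in the paper.

```python
import numpy as np

def lif_layer(X, W, beta=0.9, v_th=1.0, v_reset=0.0):
    """Propagate a 0/1 spike train X (T, N_in) through weights W (N_in, N_out)
    with Leaky Integrate-and-Fire neurons, Eq. (3). Returns output spikes (T, N_out)."""
    T = X.shape[0]
    n_out = W.shape[1]
    v = np.full(n_out, v_reset)
    out = np.zeros((T, n_out))
    for t in range(T):
        v = beta * v + X[t] @ W          # leaky accumulation of V_in(t) = W X(t)
        fired = v >= v_th
        out[t, fired] = 1                # emit a spike and reset
        v[fired] = v_reset
    return out

# Toy run: 784-pixel input, one hidden and one output layer (illustrative sizes)
rng = np.random.default_rng(0)
X = (rng.random((128, 784)) < 0.05).astype(float)     # sparse input spike trains
W1 = rng.normal(0, 0.05, (784, 100))
W2 = rng.normal(0, 0.30, (100, 10))
spikes = lif_layer(lif_layer(X, W1), W2)
label = spikes.sum(axis=0).argmax()                    # counter + comparator
print("predicted class:", label)
```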



Fig. 2. RRAM-based Spiking Neural Network System Structure.

III. TRAINING SCHEME OF SNN

A major problem of the spiking neural network is that it is difficult to train the synaptic weights for real-world applications. In this section, we compare different SNN training algorithms, including Spike Timing Dependent Plasticity (STDP), the Remote Supervision Method (ReSuMe), and the latest Siegert (Neural Sampling) learning scheme. We demonstrate that the STDP and ReSuMe schemes can NOT provide acceptable cognitive performance, while the Siegert method is promising for real-world applications.

A. Spike Timing Dependent Plasticity (STDP)

Spike timing dependent plasticity (STDP) [16] is an unsupervised learning rule which updates the synaptic weights according to the relative spiking times of the pre- and post-synaptic neurons. The update magnitude is determined by the time interval: the closer the pre- and post-synaptic spikes, the larger the update. The update direction is determined by which neuron spikes first: for an excitatory synapse, if the post-synaptic neuron spikes later, the synapse is strengthened; otherwise, it is weakened. For an inhibitory synapse, vice versa (a pair-based sketch of this rule is shown at the end of this subsection). The learning process finishes when every synaptic weight either no longer changes or saturates at 0/1. As an unsupervised method, STDP is mainly used for feature extraction; we cannot build a complete machine learning system based on STDP alone, and a classifier is usually required for practical recognition tasks. However, in our experiment, the STDP method does not extract features efficiently. For example, we use the classic MNIST handwritten digit dataset [13] to test the performance with a support vector machine (SVM) [17] without a kernel, where two 50-dimensional feature sets are extracted with STDP and with principal component analysis (PCA), respectively. The PCA-SVM method achieves a recognition accuracy of 94%, while the STDP-based method only reaches 91%. Since PCA is usually the baseline for evaluating feature extraction, STDP does NOT prove to be an efficient method for real-world cognitive applications or many other machine learning tasks.
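Below is a minimal sketch of a standard pair-based STDP update consistent with the qualitative rule above (closer spikes give a larger change; the sign depends on the spike order). The exponential window and the constants A+, A−, and τ are illustrative assumptions; the paper does not specify the exact window it uses.

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms), excitatory synapse.
    Post after pre -> potentiation; pre after post -> depression."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)     # strengthen the synapse
    else:
        return -a_minus * np.exp(dt / tau)    # weaken the synapse

# Closer spike pairs produce larger updates; the sign follows the spike order
for dt in (2.0, 10.0, 40.0, -10.0):
    print(f"dt = {dt:+5.1f} ms -> dw = {stdp_dw(0.0, dt):+.4f}")
```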


B. Remote Supervision Method (ReSuMe)

The Remote Supervision Method (ReSuMe) is a supervised learning method proposed in [18]. The algorithm introduces a supervised spike train for each synapse during training, and the training process ends when the post-synaptic spike train matches the supervised spike train. However, ReSuMe faces the difficulty of designing the patterns of the supervised spike trains, and little guidance is offered on how to define the differences between different spike trains. Although some papers [19] have attempted to build learning systems with the ReSuMe learning algorithm, to the best of our knowledge, we have NOT seen an efficient way to solve a real-world task with it.

C. Neural Sampling Learning Scheme

The Neural Sampling learning scheme transforms the leaky Integrate-and-Fire (LIF) neuron into a nonlinear function (the Siegert function) [20] which represents the relationship between the input and output firing rates of a neuron. Moreover, Neftci et al. demonstrate in [3] that this nonlinear function, which is equivalent to the LIF neuron, satisfies the neural sampling conditions and can be approximated by a sigmoid function under certain conditions. Therefore, the network can be trained with contrastive divergence (CD), the classic algorithm used for restricted Boltzmann machines (RBM), as sketched below. Moreover, the spiking RBMs can be stacked into multiple layers to form a spiking deep belief network (DBN), which has demonstrated satisfying performance. In [20], O'Connor et al. show that a 784×500×500×10 spiking DBN achieves a recognition accuracy of 95.2% on the MNIST dataset [13]. In Section IV, we make a hardware mapping of the spiking neural network trained under the neural sampling learning scheme. The parameter quantization is discussed in Sections V-C and V-D. The training process, which is done on the CPU platform, is not discussed further in this work.
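For reference, here is a minimal sketch of one contrastive-divergence (CD-1) step on a single RBM layer, using a sigmoid as a stand-in for the Siegert firing-rate function discussed above. This is a generic CD-1 update under that approximation, not the authors' exact training code; the learning rate, batch size, and layer sizes are illustrative.

```python
import numpy as np

def sigmoid(x):
    # Stand-in for the Siegert firing-rate nonlinearity (approximated by a sigmoid)
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b_v, b_h, v0, lr=0.01, rng=np.random.default_rng(0)):
    """One CD-1 update for an RBM with a visible batch v0 of shape (B, N_v)."""
    h0_prob = sigmoid(v0 @ W + b_h)                      # positive phase
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    v1_prob = sigmoid(h0 @ W.T + b_v)                    # reconstruction
    h1_prob = sigmoid(v1_prob @ W + b_h)                 # negative phase
    W   += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)
    b_v += lr * (v0 - v1_prob).mean(axis=0)
    b_h += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_v, b_h

# Illustrative sizes (the paper's DBN stacks 784-500-500-10 layers)
rng = np.random.default_rng(0)
W, b_v, b_h = rng.normal(0, 0.01, (784, 500)), np.zeros(784), np.zeros(500)
v0 = (rng.random((32, 784)) < 0.1).astype(float)         # toy binary visible batch
W, b_v, b_h = cd1_step(W, b_v, b_h, v0)
```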


IV. RRAM-BASED SPIKING NEURAL NETWORK SYSTEM STRUCTURE

In this section, we make the hardware mapping to implement the RRAM-based spiking neural network according to the system structure described in Section II-B. First, we introduce the input encoding module which converts numerical values into spike trains. Then, we describe the RRAM-based implementations of the synaptic weight crossbar and the LIF spiking neuron separately.

A. Input Transformation Module

Since spike trains propagate in the spiking neural network, the original input x = [x1, ..., xN] should be mapped to spike trains X(t) = [X1(t), ..., XN(t)] before running the test samples. Here, we define Xi(t) as a binary train with only two states, 0/1. For the i-th input channel, the spike train is made of Nt spike pulses, each of pulse width T0, which implies that the spike train lasts for a length of time Nt · T0. Suppose the total spike number of all input channels during the given time Nt · T0 is Ns; then the spike count Ni of the i-th channel (where vi denotes the numerical value of the i-th channel) is allocated as:

Ni = Σ_{k=0}^{Nt−1} Xi(kT0) = round(Ns · vi / Σ_{k=1}^{N} vk)    (4)

which implies

Ni / Ns = vi / Σ_{i=1}^{N} vi    (5)

Then the Ni spikes of the i-th channel are randomly placed in the Nt time intervals. For an ideal mapping, Ni/Ns should represent the input value exactly, which requires a sufficiently large Nt; the choice of Nt (i.e., the input bit level) therefore evaluates the tradeoff between time efficiency and the accuracy performance. Some discussion on the choice of the input signal bit level is given in Section V-C. A sketch of this encoding appears at the end of this section.

B. RRAM-based Crossbar

According to Eq. (2), there does not exist a direct one-to-one mapping from the original weight matrix C to the crossbar conductance matrix G. Moreover, some physical limitations on G should be considered:
• The item c_{k,j} of the original weight matrix C can be either positive or negative, while every conductance of the RRAM crossbar G must be positive. Thus, the original weight matrix C should be decomposed into two parts: one positive, C+, and the other negative, C−;
• According to Eq. (2), the parameter c_{k,j} must lie in a proper range so that all the solved g_{k,j} fall between g_OFF and g_ON.

The resistance (conductance) states of the RRAM devices in the crossbar array must be configured properly to realize the multiplied matrix C, which requires a mapping algorithm [21]. In order to satisfy the range condition above, the crossbar matrix can be represented as:

Ĉ = Ĉ+ − Ĉ− = α[(C+ + Δ) − (C− + Δ)]    (10)

And Eq. (2) can be expressed as:

g_{k,j} − c_{k,j} · Σ_{l=1}^{N} g_{k,l} = g_s · c_{k,j}    (11)
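The input transformation of Eqs. (4)-(5) can be sketched as follows: each channel receives a spike count proportional to its value, and those spikes are placed at random among the Nt pulse intervals. The function names and the random placement policy are illustrative.

```python
import numpy as np

def encode_spike_trains(x, n_s=2000, n_t=128, rng=np.random.default_rng(0)):
    """Map input values x (N,) to 0/1 spike trains X (n_t, N), Eqs. (4)-(5).
    Channel i gets N_i = round(n_s * x_i / sum(x)) spikes, placed at random
    among the n_t pulse intervals (at most one spike per interval)."""
    x = np.asarray(x, dtype=float)
    n_i = np.rint(n_s * x / x.sum()).astype(int)
    X = np.zeros((n_t, len(x)))
    for i, n in enumerate(n_i):
        slots = rng.choice(n_t, size=min(n, n_t), replace=False)
        X[slots, i] = 1
    return X

X = encode_spike_trains(np.array([0.0, 0.2, 0.5, 0.3]), n_s=100, n_t=128)
print(X.sum(axis=0))   # per-channel spike counts, roughly proportional to x
```

Similarly, a simplified sketch of the weight-to-conductance mapping: the signed weight matrix is split into non-negative parts as in Eq. (10), and Eq. (11) is then solved row by row for the device conductances. The scaling factor α and offset Δ are treated as illustrative scalars here; the full mapping algorithm of [21] is more involved.

```python
import numpy as np

def weights_to_conductance(C, gs, g_off, g_on):
    """Solve Eq. (11) row by row for non-negative targets C (N, M):
    g_{k,j} = gs * c_{k,j} / (1 - sum_l c_{k,l}), then enforce the device range."""
    row_sum = C.sum(axis=1, keepdims=True)
    assert np.all(row_sum < 1), "each row sum of C must stay below 1 (range condition)"
    G = gs * C / (1 - row_sum)
    return np.clip(G, g_off, g_on)   # devices can only realize [g_OFF, g_ON]

def map_signed_weights(W, gs, g_off, g_on, alpha=0.05, delta=0.5):
    """Split a signed weight matrix as in Eq. (10): C+ = alpha*(W+ + delta),
    C- = alpha*(W- + delta), each part mapped to its own crossbar."""
    C_pos = alpha * (np.maximum(W, 0) + delta)
    C_neg = alpha * (np.maximum(-W, 0) + delta)
    return (weights_to_conductance(C_pos, gs, g_off, g_on),
            weights_to_conductance(C_neg, gs, g_off, g_on))

rng = np.random.default_rng(0)
Gp, Gn = map_signed_weights(rng.normal(0, 1, (4, 3)), gs=1e-3, g_off=1e-6, g_on=1e-3)
print(Gp.shape, Gn.shape)
```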


Fig. 4. Recognition accuracy under (a) different bit levels of the RRAM devices, (b) different bit levels of the input module, (c) different degrees of input signal fluctuation, and (d) different degrees of process variation of the RRAM devices.

TABLE I
IMPORTANT PARAMETERS OF THE SNN SYSTEM
Network Size: 784 × 500 × 500 × 10
Number of Input Spikes (Ns): 2000
Number of Pulse Intervals (Nt): 128
Input Pulse Voltage (V): 1 V
Pulse Width (T0): 1 ns

V. EXPERIMENTAL RESULTS

A. Experiment Setup

The MNIST dataset [13] is used to test the performance of the RRAM-based neural network system proposed in Section IV. MNIST is a widely used dataset for optical character recognition with 60,000 handwritten digits in the training set and 10,000 in the testing set. In our experiment, we use all the training examples of the handwritten digits '0'~'9' to train the neural network system and randomly select 1,000 samples for testing. The parameters are shown in Table I. For the spiking neural network simulation, we first train the spiking neural network on the CPU and then use SPICE to simulate the circuit performance. In the training process, the synaptic weights are trained on the CPU platform by the CD algorithm, with firing rates propagating through the network, according to Section III-C. Then, for the test processing, a Verilog-A RRAM device model [14] is used to build up the SPICE-level crossbar array. In the circuit simulation, the weight matrix is mapped to the RRAM-based crossbar and the voltage pulse trains are generated according to the firing rates. Meanwhile, the analog spiking neuron is implemented with its parameters mapped under the Siegert approximation. The maximum amplitude of the input voltage of each crossbar (the output of each flip-flop) is set to 1V to achieve better linearity of the RRAM devices. Most of the

input voltages applied on the RRAM devices are around tens to hundreds of millivolts. A counter is cascaded at each output port to count the spikes of each spike train from the output neuron layer, and a comparator selects the port with the highest spike count to provide the recognition result. The simulation results are provided in Fig. 4, where we compare different input and device bit levels and analyze the impact of signal fluctuation and device variation.

B. System Performance and Efficiency

We train an SNN of size 784 × 500 × 500 × 10. The experimental results show that the recognition accuracy is 95.4% on the CPU platform and 91.2% on the ideal RRAM-based hardware implementation. The recognition performance decreases by about 4% because Eq. (5) cannot be satisfied exactly with a finite Nt.

C. Impact of Input Signal Bit Level Quantization and Fluctuation

As shown in Fig. 4(b), once the bit level of the input module is sufficient, the recognition accuracy remains acceptable (>85%). Based on the 8-bit RRAM result, different levels of signal fluctuation are then added to the 8-bit input signal. The result shown in Fig. 4(c) demonstrates that the accuracy decreases by only about 3% given 20% fluctuation. The sparsity of the spike train makes the system robust and insensitive to input fluctuation.
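The kind of non-ideality modeling behind Fig. 4 can be sketched as follows: conductances are quantized to 2^b levels between g_OFF and g_ON, and multiplicative noise is applied to the devices (process variation) or to the input voltages (signal fluctuation) before re-running the network. The paper does not state the exact noise model used in the SPICE simulations, so the Gaussian multiplicative form below is an assumption.

```python
import numpy as np

def quantize_conductance(G, bits, g_off, g_on):
    """Quantize conductances to 2**bits evenly spaced levels in [g_OFF, g_ON]."""
    step = (g_on - g_off) / (2 ** bits - 1)
    return g_off + np.round((np.clip(G, g_off, g_on) - g_off) / step) * step

def add_variation(G, sigma, rng=np.random.default_rng(0)):
    """Multiplicative variation; sigma=0.2 stands in for roughly 20% variation.
    (Assumed form; the paper's exact variation model is not given.)"""
    return G * (1 + sigma * rng.standard_normal(G.shape))

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-3, size=(784, 500))
G_q = quantize_conductance(G, bits=8, g_off=1e-6, g_on=1e-3)  # 8-bit devices
G_var = add_variation(G_q, sigma=0.2)   # then re-run the SNN to measure accuracy
```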


D. Impact of RRAM Device Bit Level Quantization and Process Variation

Since it is impossible to tune an RRAM device to an exact gap distance, it is necessary to analyse the effect of the bit level quantization on the recognition accuracy, which offers a guide for choosing an appropriate bit level. The simulation results in Fig. 4(a) show that an 8-bit RRAM device is able to realize a recognition accuracy of nearly 90%. Moreover, the gap distance of an RRAM device may change when the current applied to the device exceeds the required level or when random events happen. Therefore, the performance under different degrees of process variation is studied, and the simulation results in Fig. 4(d) show that with 8-bit RRAM devices and a 6-bit input, the performance does not decrease under 20% variation.

VI. CONCLUSION

In this work, we propose an energy-efficient implementation of SNN based on RRAM devices. As for the training algorithm, the neural sampling learning scheme is chosen for practical cognitive tasks over the STDP and ReSuMe methods, since it easily supports a multi-layer network structure and achieves good recognition accuracy. As for the hardware architecture, the implementation includes analog LIF neurons with a tunable leaky time constant, an RRAM-based crossbar functioning as the synaptic weight matrix, and an input encoding scheme converting numerical values into spike trains. A mapping algorithm is also introduced to configure the RRAM-based SNN efficiently. The experiments on the MNIST database demonstrate that the proposed RRAM-based SNN achieves 91.2% accuracy with a power consumption of 3.5mW per test sample. In addition, the RRAM implementation of SNN is robust to 20% process variation with less than 1% accuracy decrease, and to 20% signal fluctuation with about 2% accuracy decrease. These results demonstrate that the RRAM-based SNN is well suited to physical realization.

ACKNOWLEDGMENT

This work was supported by 973 project 2013CB329000, National Science and Technology Major Project (2013ZX03003013-003), National Natural Science Foundation of China (No. 61373026, 61261160501), the Importation and Development of High-Caliber Talents Project of Beijing Municipal Institutions, Tsinghua University Initiative Scientific Research Program, and CNS-1253424. We gratefully acknowledge the support of Prof. Shimeng Yu with the RRAM model and the open-source code published by Danny Neil on GitHub.

REFERENCES

[1] H. Esmaeilzadeh, E. Blem, R. St Amant, K. Sankaralingam, and D. Burger, "Dark silicon and the end of multicore scaling," in Computer Architecture (ISCA), 2011 38th Annual International Symposium on. IEEE, 2011, pp. 365–376.

[2] Y. Xie, "Future memory and interconnect technologies," in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2013. IEEE, 2013, pp. 964–969.
[3] E. Neftci, S. Das, B. Pedroni, K. Kreutz-Delgado, and G. Cauwenberghs, "Event-driven contrastive divergence for spiking neuromorphic systems," Frontiers in Neuroscience, vol. 7, 2013.
[4] T. Masquelier and S. J. Thorpe, "Unsupervised learning of visual features through spike timing dependent plasticity," PLoS Computational Biology, vol. 3, no. 2, p. e31, 2007.
[5] S. K. Esser, A. Andreopoulos, R. Appuswamy, P. Datta, D. Barch, A. Amir, J. Arthur, A. Cassidy, M. Flickner, P. Merolla et al., "Cognitive computing systems: Algorithms and applications for networks of neurosynaptic cores," in Neural Networks (IJCNN), The 2013 International Joint Conference on. IEEE, 2013, pp. 1–10.
[6] E. Painkras, L. A. Plana, J. Garside, S. Temple, F. Galluppi, C. Patterson, D. R. Lester, A. D. Brown, and S. B. Furber, "SpiNNaker: A 1-W 18-core system-on-chip for massively-parallel neural network simulation," IEEE Journal of Solid-State Circuits, vol. 48, no. 8, pp. 1943–1953, 2013.
[7] R. Wang, T. J. Hamilton, J. Tapson, and A. van Schaik, "An FPGA design framework for large-scale spiking neural networks," in Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. IEEE, 2014, pp. 457–460.
[8] R. Ananthanarayanan, S. K. Esser, H. D. Simon, and D. S. Modha, "The cat is out of the bag: cortical simulations with 10^9 neurons, 10^13 synapses," in High Performance Computing Networking, Storage and Analysis, Proceedings of the Conference on. IEEE, 2009, pp. 1–12.
[9] V. Narayanan, S. Datta, G. Cauwenberghs, D. Chiarulli, S. Levitan, and P. Wong, "Video analytics using beyond CMOS devices," in Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014. IEEE, 2014, pp. 1–5.
[10] B. Liu, M. Hu, H. Li, Z.-H. Mao, Y. Chen, T. Huang, and W. Zhang, "Digital-assisted noise-eliminating training for memristor crossbar-based analog neuromorphic computing engine," in Design Automation Conference (DAC), 2013 50th ACM/EDAC/IEEE. IEEE, 2013, pp. 1–6.
[11] M. Hu, H. Li, Q. Wu, and G. S. Rose, "Hardware realization of BSB recall function using memristor crossbar arrays," in Proceedings of the 49th Annual Design Automation Conference. ACM, 2012, pp. 498–503.
[12] B. Li, Y. Shan, M. Hu, Y. Wang, Y. Chen, and H. Yang, "Memristor-based approximated computation," in Low Power Electronics and Design (ISLPED), 2013 IEEE International Symposium on. IEEE, 2013, pp. 242–247.
[13] Y. LeCun and C. Cortes, "The MNIST database of handwritten digits," 1998.
[14] S. Yu, B. Gao, Z. Fang, H. Yu, J. Kang, and H.-S. P. Wong, "A low energy oxide-based electronic synaptic device for neuromorphic visual systems with tolerance to device variation," Advanced Materials, vol. 25, no. 12, pp. 1774–1779, 2013.
[15] G. Indiveri, "A low-power adaptive integrate-and-fire neuron circuit," in ISCAS (4), 2003, pp. 820–823.
[16] S. Song, K. D. Miller, and L. F. Abbott, "Competitive Hebbian learning through spike-timing-dependent synaptic plasticity," Nature Neuroscience, vol. 3, no. 9, pp. 919–926, 2000.
[17] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, no. 3, p. 27, 2011.
[18] F. Ponulak, "ReSuMe - new supervised learning method for spiking neural networks," Institute of Control and Information Engineering, Poznan University of Technology (available online at: http://d1.cie.put.poznan.pl/~fp/research.html), 2005.
[19] J. Hu, H. Tang, K. C. Tan, H. Li, and L. Shi, "A spike-timing-based integrated model for pattern recognition," Neural Computation, vol. 25, no. 2, pp. 450–472, 2013.
[20] P. O'Connor, D. Neil, S.-C. Liu, T. Delbruck, and M. Pfeiffer, "Real-time classification and sensor fusion with a spiking deep belief network," Frontiers in Neuroscience, vol. 7, 2013.
[21] P. Gu et al., "Technological exploration of RRAM crossbar array for matrix-vector multiplication," in ASPDAC, 2015.

