Information Sciences 183 (2012) 106–116


Synchronization control of a class of memristor-based recurrent neural networks

Ailong Wu*, Shiping Wen, Zhigang Zeng

Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan 430074, China

* Corresponding author at: Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China. E-mail addresses: [email protected] (A. Wu), [email protected] (S. Wen), [email protected] (Z. Zeng).

Article info

Article history: Received 16 November 2010; Received in revised form 2 June 2011; Accepted 23 July 2011; Available online 6 August 2011.

Keywords: Memristor; Recurrent neural networks; Circuit analysis; Exponential synchronization

Abstract

In this paper, we formulate and investigate a class of memristor-based recurrent neural networks. Some sufficient conditions are obtained to guarantee the exponential synchronization of the coupled networks based on the drive-response concept, differential inclusions theory and the Lyapunov functional method. The analysis in the paper employs results from the theory of differential equations with discontinuous right-hand side as introduced by Filippov. Finally, the validity of the obtained result is illustrated by a numerical example.

Crown Copyright © 2011 Published by Elsevier Inc. All rights reserved.

1. Introduction

In 1971, Chua [3] presented the logical and scientific basis for the existence of a new two-terminal circuit element called the memristor (a contraction of memory resistor), which has every right to be as basic as the three classical circuit elements already in existence, namely the resistor, inductor, and capacitor. On 1 May 2008, the Hewlett-Packard (HP) research team proudly announced their realization of a memristor prototype, with an official publication in Nature [34,35]. They also devised a physical model of a memristor, called the coupled variable-resistor model, which works like a perfect memristor under certain conditions. This new circuit element shares many properties of resistors and the same unit of measurement (the ohm). Many researchers focus on the memristor because of its potential applications in next-generation computers and powerful brain-like "neural" computers (see [5,11,14,15,26,28,34,35]).

Currently, many researchers attempt to build an electronic intelligence that can mimic the awesome power of a brain by means of the crucial electronic components, memristors [14,15,34,35]. A large body of research shows that the complex electrical response of neurons arises from the ebb and flow of potassium and sodium ions across the membrane of each cell, which allows the synapses to alter their response according to the frequency and strength of signals; the behaviour of synapses thus looks maddeningly similar to the response a memristor would produce. Moreover, there are indications that the memristor exhibits the feature of pinched hysteresis, which means that a lag occurs between the application and the removal of a field and its subsequent effect, just as the neurons in the human brain have. Because of this feature, broad potential applications of the memristor have been identified [5,11,14,15,26], one of which is to apply this device to build a new model of neural networks (memristor-based neural networks) to emulate the human brain [11]. Memristor-based neural networks can remember their past dynamical history, store a continuous set of states, and be "plastic" according to the pre-synaptic and post-synaptic neuronal activity. They will open up new possibilities in the understanding of


neural processes using memory devices, an important step forward to reproduce complex learning, adaptive and spontaneous behavior with electronic neural networks. Furthermore, in view of the great application value that memristor-based neural networks will have, and in consideration of the many successful applications of recurrent neural networks (see [1,2,6,9,10,12,13,17–25,29–33,36–43]), an interesting issue is to investigate memristor-based recurrent neural networks, which provide an ideal model for memristor-based circuit networks that realize more efficient learning through the famous Hebbian rule stating, in a simplified form, that "neurons that fire together, wire together". Moreover, the analysis of memristor-based recurrent neural networks is able to reveal crucial features of the dynamics, such as the presence of sliding modes along switching surfaces, chaos synchronization, and the ability to compute the exact global minimum of the underlying energy function, which make the networks especially attractive for the solution of global optimization problems in real time.

To better understand memristor-based recurrent neural networks, we first describe the circuit of a general class of recurrent neural networks in Fig. 1. Taking the ith subsystem as the unit of analysis in order to simplify the illustration, the Kirchhoff current law (KCL) equation of the ith subsystem in Fig. 1 is written as
$$\dot{x}_i(t)=-\frac{1}{C_i}\Bigg[\sum_{j=1}^{n}\Big(\frac{1}{R_{ij}}+\frac{1}{F_{ij}}\Big)\times\mathrm{sgn}_{ij}+\frac{1}{R_i}\Bigg]x_i(t)+\frac{1}{C_i}\sum_{j=1}^{n}\frac{g_j(x_j(t))}{R_{ij}}\times\mathrm{sgn}_{ij}+\frac{1}{C_i}\sum_{j=1}^{n}\frac{g_j(x_j(t-\tau_j(t)))}{F_{ij}}\times\mathrm{sgn}_{ij}+\frac{I_i}{C_i},\quad t\ge 0,\ i=1,2,\ldots,n,\qquad(1)$$

where $x_i(t)$ is the voltage of the capacitor $C_i$; $R_{ij}$ denotes the resistor between the feedback function $g_j(x_j(t))$ and $x_i(t)$; $F_{ij}$ denotes the resistor between the feedback function $g_j(x_j(t-\tau_j(t)))$ and $x_i(t)$; $\tau_j(t)$ corresponds to the transmission delay and satisfies $0\le\tau_j(t)\le\tau_j$ ($\tau_j$ are constants, $j=1,2,\ldots,n$); $R_i$ represents the parallel resistor corresponding to the capacitor $C_i$; $I_i$ is the external input or bias; and

$$\mathrm{sgn}_{ij}=\begin{cases}1, & i\neq j,\\ -1, & i=j.\end{cases}$$

From (1),
$$\dot{x}_i(t)=-\tilde{d}_i x_i(t)+\sum_{j=1}^{n}\big[\tilde{a}_{ij}\,g_j(x_j(t))+\tilde{b}_{ij}\,g_j(x_j(t-\tau_j(t)))\big]+\tilde{I}_i,\quad t\ge 0,\ i=1,2,\ldots,n,\qquad(2)$$

where
$$\tilde{d}_i=\frac{1}{C_i}\Bigg[\sum_{j=1}^{n}\Big(\frac{1}{R_{ij}}+\frac{1}{F_{ij}}\Big)\times\mathrm{sgn}_{ij}+\frac{1}{R_i}\Bigg],\qquad \tilde{a}_{ij}=\frac{\mathrm{sgn}_{ij}}{C_iR_{ij}},\qquad \tilde{b}_{ij}=\frac{\mathrm{sgn}_{ij}}{C_iF_{ij}},\qquad \tilde{I}_i=\frac{I_i}{C_i}.$$
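As a quick illustration of this circuit-to-network mapping, the following Python sketch computes the coefficients $\tilde{d}_i$, $\tilde{a}_{ij}$, $\tilde{b}_{ij}$ and $\tilde{I}_i$ of (2) from a set of circuit parameters. All numerical values below are hypothetical and chosen only for demonstration; they are not taken from the paper.

```python
import numpy as np

# Hypothetical circuit parameters for a 3-neuron network (illustrative only).
n = 3
C = np.array([1.0, 0.5, 2.0])          # capacitances C_i
R = np.full((n, n), 10.0)              # resistors R_ij (instantaneous feedback paths)
F = np.full((n, n), 20.0)              # resistors F_ij (delayed feedback paths)
R_par = np.array([5.0, 5.0, 5.0])      # parallel resistors R_i
I_ext = np.array([0.1, 0.0, -0.2])     # external bias currents I_i

# sgn_ij = 1 for i != j and -1 for i = j, as defined above.
sgn = np.ones((n, n)) - 2.0 * np.eye(n)

# Coefficients of the network form (2).
d_tilde = (np.sum((1.0 / R + 1.0 / F) * sgn, axis=1) + 1.0 / R_par) / C
a_tilde = sgn / (C[:, None] * R)
b_tilde = sgn / (C[:, None] * F)
I_tilde = I_ext / C

print(d_tilde)
print(a_tilde)
print(b_tilde)
print(I_tilde)
```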

By replacing the resistors $R_{ij}$, $F_{ij}$ and $R_i$ in the primitive recurrent neural networks (1) or (2) with memristors, whose memductances are $\mathrm{W}_{ij}$, $\mathrm{M}_{ij}$ and $\mathrm{P}_i$, respectively, we can construct the memristor-based recurrent neural networks of the form

$$\dot{x}_i(t)=-d_i(x_i(t))x_i(t)+\sum_{j=1}^{n}\big[a_{ij}(x_i(t))g_j(x_j(t))+b_{ij}(x_i(t))g_j(x_j(t-\tau_j(t)))\big]+I_i,\quad t\ge 0,\ i=1,2,\ldots,n,\qquad(3)$$

where
$$d_i(x_i(t))=\frac{1}{C_i}\Bigg[\sum_{j=1}^{n}\big(\mathrm{W}_{ij}+\mathrm{M}_{ij}\big)\times\mathrm{sgn}_{ij}+\mathrm{P}_i\Bigg],\qquad a_{ij}(x_i(t))=\frac{\mathrm{W}_{ij}\times\mathrm{sgn}_{ij}}{C_i},\qquad b_{ij}(x_i(t))=\frac{\mathrm{M}_{ij}\times\mathrm{sgn}_{ij}}{C_i},\qquad I_i=\frac{I_i}{C_i}.$$

In view of the pinched hysteresis loop in the current-voltage characteristic of memristors, and as a matter of convenience, in this paper we discuss a simple type of the memristor-based recurrent neural networks (3), as follows:

$$\dot{x}_i(t)=-d_i(x_i)x_i(t)+\sum_{j=1}^{n}\big[a_{ij}(x_i)g_j(x_j(t))+b_{ij}(x_i)g_j(x_j(t-\tau_j(t)))\big]+I_i,\quad t\ge 0,\ i=1,2,\ldots,n,\qquad(4)$$
where
$$d_i(x_i)=d_i(x_i(t))=\begin{cases}\hat{d}_i, & |x_i(t)|<T_i,\\ \check{d}_i, & |x_i(t)|>T_i,\end{cases}$$

Fig. 1. The circuit of recurrent neural networks.

$$a_{ij}(x_i)=a_{ij}(x_i(t))=\begin{cases}\hat{a}_{ij}, & |x_i(t)|<T_i,\\ \check{a}_{ij}, & |x_i(t)|>T_i,\end{cases}\qquad b_{ij}(x_i)=b_{ij}(x_i(t))=\begin{cases}\hat{b}_{ij}, & |x_i(t)|<T_i,\\ \check{b}_{ij}, & |x_i(t)|>T_i,\end{cases}\qquad I_i=\frac{I_i}{C_i},$$
in which the switching jumps $T_i>0$ and the values $\hat{d}_i>0$, $\check{d}_i>0$, $\hat{a}_{ij}$, $\check{a}_{ij}$, $\hat{b}_{ij}$, $\check{b}_{ij}$, $i,j=1,2,\ldots,n$, are constant numbers.
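To make the state-dependent switching concrete, the following Python sketch evaluates $d_i(x_i)$, $a_{ij}(x_i)$ and $b_{ij}(x_i)$ for a given state exactly as in the piecewise definitions above. The switching jumps and the two coefficient levels used here are illustrative assumptions, not values from the paper; the measure-zero case $|x_i(t)|=T_i$, which the paper handles through the Filippov framework, simply falls into the second branch.

```python
import numpy as np

# Illustrative 2-neuron instance of the state-dependent coefficients in (4).
T = np.array([0.8, 0.8])                                  # switching jumps T_i
d_hat, d_check = np.array([1.0, 1.0]), np.array([1.5, 1.5])
a_hat = np.array([[2.0, -0.1], [-5.0, 3.0]])
a_check = np.array([[1.8, -0.15], [-4.5, 2.5]])
b_hat = np.array([[-1.5, -0.1], [-0.2, -2.5]])
b_check = np.array([[-1.2, -0.1], [-0.25, -2.0]])

def switched_coefficients(x):
    """Return d_i(x_i), a_ij(x_i), b_ij(x_i) for the current state x."""
    inner = np.abs(x) < T                                  # is |x_i(t)| < T_i ?
    d = np.where(inner, d_hat, d_check)
    a = np.where(inner[:, None], a_hat, a_check)           # row i switches with x_i
    b = np.where(inner[:, None], b_hat, b_check)
    return d, a, b

d, a, b = switched_coefficients(np.array([0.3, 1.2]))      # neuron 1 inside, neuron 2 outside
print(d, a, b, sep="\n")
```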

Remark 1. There are some existing works on memristor-based nonlinear circuit networks [5,11,15,26,28]. In this paper, we deal with the detailed construction of a general class of memristor-based recurrent neural network models (3) from the aspect of circuit analysis, which can, to some extent, approximately simulate the behavior of real memristive synapses. In addition, we note that a plethora of complex nonlinear behaviors, including chaos, appear even in a simple memristor network [5,14,28], so a detailed analytical study of synchronization control of the basic oscillator is necessary.

The synchronization of recurrent neural networks has been gaining increasing research attention because of its potential applications in many real-world systems from a variety of fields such as biology, social systems, linguistic networks, and technological systems [1,2,6,9,10,12,17–24,29–33,38,40,42,43]. However, so far there are very few works dealing with the synchronization control of memristor-based recurrent neural networks, even though such synchronization control can be expected to find quick application in a broad range of fields, both defense and civilian, that need continuously adaptive optimal target tracking. Motivated by the above discussions, and based on the works in [2,5,11,14,15], our objective in this paper is to study the exponential synchronization of the memristor-based recurrent neural networks (4). By utilizing the drive-response concept and differential inclusions theory, and combining them with the Lyapunov functional method, a control law with an appropriate gain matrix is derived to achieve exponential synchronization of the drive-response-based coupled neural networks. The elements of the gain matrix are easily determined by checking whether the eigenvalues of a certain Hamiltonian matrix lie on the imaginary axis, instead of arduously solving an algebraic Riccati equation.

The remaining part of the paper consists of four sections. In Section 2, some preliminaries are introduced. In Section 3, the main results are derived. In Section 4, an illustrative example is given to demonstrate the effectiveness of the proposed approach. Finally, concluding remarks are included in Section 5.

2. Preliminaries

Throughout this paper, solutions of all the systems considered in the following are intended in Filippov's sense, and $[\cdot,\cdot]$ represents an interval. $C([-\tau,0],\mathbb{R}^n)$ denotes the Banach space of all continuous functions $v=(v_1(s),v_2(s),\ldots,v_n(s))^T:[-\tau,0]\to\mathbb{R}^n$, where $\tau=\max_{1\le i\le n}\{\tau_i\}$. $E_n$ is the $n\times n$ identity matrix. For a vector $X=(X_1,X_2,\ldots,X_n)^T\in\mathbb{R}^n$, $|X|=(|X_1|,|X_2|,\ldots,|X_n|)^T$, and $|X|<T$ ($|X|>T$) represents $|X_i|<T_i$ ($|X_i|>T_i$), $i=1,2,\ldots,n$, where $T=(T_1,T_2,\ldots,T_n)^T$ with $T_i>0$; $\|X\|$ denotes the Euclidean norm, i.e., $\|X\|=\big(\sum_{i=1}^{n}X_i^2\big)^{1/2}$. Let $\bar{d}_i=\max\{\hat{d}_i,\check{d}_i\}$, $\underline{d}_i=\min\{\hat{d}_i,\check{d}_i\}$, $\bar{a}_{ij}=\max\{\hat{a}_{ij},\check{a}_{ij}\}$, $\underline{a}_{ij}=\min\{\hat{a}_{ij},\check{a}_{ij}\}$, $\bar{b}_{ij}=\max\{\hat{b}_{ij},\check{b}_{ij}\}$, $\underline{b}_{ij}=\min\{\hat{b}_{ij},\check{b}_{ij}\}$, for $i,j=1,2,\ldots,n$. For matrices $M=(m_{ij})_{n\times n}$ and $N=(n_{ij})_{n\times n}$, $M\succ N$ ($M\prec N$) means that $m_{ij}>n_{ij}$ ($m_{ij}<n_{ij}$) for $i,j=1,2,\ldots,n$; the interval matrix $[M,N]$ presupposes $M\prec N$, and $L=(l_{ij})_{n\times n}\in[M,N]$ means $M\prec L\prec N$, i.e., $m_{ij}<l_{ij}<n_{ij}$ for $i,j=1,2,\ldots,n$. In addition, the initial conditions of system (4) are given by $x_i(t)=\psi_i(t)\in C([-\tau_i,0],\mathbb{R})$, $i=1,2,\ldots,n$.

First, by the theory of differential inclusions, from system (4) we have
$$\dot{x}_i(t)\in-[\underline{d}_i,\bar{d}_i]x_i(t)+\sum_{j=1}^{n}\big([\underline{a}_{ij},\bar{a}_{ij}]g_j(x_j(t))+[\underline{b}_{ij},\bar{b}_{ij}]g_j(x_j(t-\tau_j(t)))\big)+I_i,\quad t\ge 0,\ i=1,2,\ldots,n,\qquad(4^{*})$$
or equivalently, for $i,j=1,2,\ldots,n$, there exist $d_i^{*}\in[\underline{d}_i,\bar{d}_i]$, $a_{ij}^{*}\in[\underline{a}_{ij},\bar{a}_{ij}]$ and $b_{ij}^{*}\in[\underline{b}_{ij},\bar{b}_{ij}]$ such that
$$\dot{x}_i(t)=-d_i^{*}x_i(t)+\sum_{j=1}^{n}\big(a_{ij}^{*}g_j(x_j(t))+b_{ij}^{*}g_j(x_j(t-\tau_j(t)))\big)+I_i,\quad t\ge 0,\ i=1,2,\ldots,n.\qquad(4^{**})$$

In this paper, we consider system (4) or (4**) as the master/drive system, and the corresponding slave/response system is given by
$$\dot{z}_i(t)\in-[\underline{d}_i,\bar{d}_i]z_i(t)+\sum_{j=1}^{n}\big([\underline{a}_{ij},\bar{a}_{ij}]g_j(z_j(t))+[\underline{b}_{ij},\bar{b}_{ij}]g_j(z_j(t-\tau_j(t)))\big)+I_i+u_i(t),\quad t\ge 0,\ i=1,2,\ldots,n,\qquad(5)$$
or equivalently, for $i,j=1,2,\ldots,n$, there exist $d_i^{*}\in[\underline{d}_i,\bar{d}_i]$, $a_{ij}^{*}\in[\underline{a}_{ij},\bar{a}_{ij}]$ and $b_{ij}^{*}\in[\underline{b}_{ij},\bar{b}_{ij}]$ such that
$$\dot{z}_i(t)=-d_i^{*}z_i(t)+\sum_{j=1}^{n}\big(a_{ij}^{*}g_j(z_j(t))+b_{ij}^{*}g_j(z_j(t-\tau_j(t)))\big)+I_i+u_i(t),\quad t\ge 0,\ i=1,2,\ldots,n,\qquad(5^{*})$$

where $u_i(t)$ ($i=1,2,\ldots,n$) is the appropriate control input that will be designed in order to achieve a certain control objective. The initial conditions associated with system (5) or (5*) are of the form $z_i(t)=\varphi_i(t)\in C([-\tau_i,0],\mathbb{R})$, $i=1,2,\ldots,n$. In practical situations, the output signals of the drive system (4) or (4**) can be received by the response system (5) or (5*). In this paper, referring to some relevant works in [2,6,10], we make the following assumptions (A1)–(A3):

(A1) The function $g_i$, $i\in\{1,2,\ldots,n\}$, is bounded and satisfies the Lipschitz condition with a Lipschitz constant $L_i>0$, i.e.,

$$|g_i(x)-g_i(y)|\le L_i|x-y|\quad\text{for all } x,y\in\mathbb{R}.$$

(A2) For $i\in\{1,2,\ldots,n\}$, $\tau_i(t)$ is a differentiable function with $\dot{\tau}_i(t)\le\mu_i<1$ ($\mu_i$ is a constant) for all $t\ge 0$.

(A3) For $i,j=1,2,\ldots,n$,
$$[\underline{d}_i,\bar{d}_i]x_i(t)-[\underline{d}_i,\bar{d}_i]z_i(t)\subseteq[\underline{d}_i,\bar{d}_i]\big(x_i(t)-z_i(t)\big),$$
$$[\underline{a}_{ij},\bar{a}_{ij}]g_j(x_j(t))-[\underline{a}_{ij},\bar{a}_{ij}]g_j(z_j(t))\subseteq[\underline{a}_{ij},\bar{a}_{ij}]\big(g_j(x_j(t))-g_j(z_j(t))\big),$$
$$[\underline{b}_{ij},\bar{b}_{ij}]g_j(x_j(t))-[\underline{b}_{ij},\bar{b}_{ij}]g_j(z_j(t))\subseteq[\underline{b}_{ij},\bar{b}_{ij}]\big(g_j(x_j(t))-g_j(z_j(t))\big).$$

Remark 2. It is obvious that for $i\in\{1,2,\ldots,n\}$, the set-valued map
$$x_i(t)\rightrightarrows-[\underline{d}_i,\bar{d}_i]x_i(t)+\sum_{j=1}^{n}\big([\underline{a}_{ij},\bar{a}_{ij}]g_j(x_j(t))+[\underline{b}_{ij},\bar{b}_{ij}]g_j(x_j(t-\tau_j(t)))\big)+I_i,\quad t\ge 0,$$
has nonempty compact convex values. Furthermore, it is upper semi-continuous [4]. Then the local existence of a solution $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T$ of (4) with initial conditions $x_i(t)=\psi_i(t)$ is obvious [8]. Moreover, under condition (A1), since $g_i(\cdot)$ ($i=1,2,\ldots,n$) is bounded, this local solution $x(t)$ can be extended to the interval $[0,+\infty)$ in the sense of Filippov [8].

Let $e(t)=(e_1(t),e_2(t),\ldots,e_n(t))^T$ be the synchronization error, where $e_i(t)=x_i(t)-z_i(t)$. Applying the theories of set-valued maps and differential inclusions, we can then obtain the synchronization error system as follows:
$$\dot{e}_i(t)\in-[\underline{d}_i,\bar{d}_i]e_i(t)+\sum_{j=1}^{n}[\underline{a}_{ij},\bar{a}_{ij}]\big(g_j(e_j(t)+z_j(t))-g_j(z_j(t))\big)+\sum_{j=1}^{n}[\underline{b}_{ij},\bar{b}_{ij}]\big(g_j(e_j(t-\tau_j(t))+z_j(t-\tau_j(t)))-g_j(z_j(t-\tau_j(t)))\big)-u_i(t),\quad t\ge 0,\ i=1,2,\ldots,n,\qquad(6)$$
or equivalently, for $i,j=1,2,\ldots,n$, there exist $\tilde{d}_i^{*}\in[\underline{d}_i,\bar{d}_i]$, $\tilde{a}_{ij}^{*}\in[\underline{a}_{ij},\bar{a}_{ij}]$ and $\tilde{b}_{ij}^{*}\in[\underline{b}_{ij},\bar{b}_{ij}]$ such that
$$\dot{e}_i(t)=-\tilde{d}_i^{*}e_i(t)+\sum_{j=1}^{n}\tilde{a}_{ij}^{*}\big(g_j(e_j(t)+z_j(t))-g_j(z_j(t))\big)+\sum_{j=1}^{n}\tilde{b}_{ij}^{*}\big(g_j(e_j(t-\tau_j(t))+z_j(t-\tau_j(t)))-g_j(z_j(t-\tau_j(t)))\big)-u_i(t),\quad t\ge 0,\ i=1,2,\ldots,n,\qquad(6^{*})$$

with initial conditions $\nu_i(s)=\psi_i(s)-\varphi_i(s)\in C([-\tau_i,0],\mathbb{R})$, $i=1,2,\ldots,n$. In the following, the paper aims to design the control input $u_i(t)$ ($i=1,2,\ldots,n$), based on state feedback, for the purpose of exponentially synchronizing the unidirectionally coupled identical systems (4) or (4**) and (5) or (5*).

3. Main results

In many real applications, we are interested in designing a memoryless state-feedback controller

$$(u_1(t),u_2(t),\ldots,u_n(t))^{T}=(\omega_{ij})_{n\times n}\,(e_1(t),e_2(t),\ldots,e_n(t))^{T}=\Omega e(t),\qquad(7)$$

where $\Omega=(\omega_{ij})_{n\times n}$ is a constant gain matrix to be determined for synchronizing both the drive system and the response system. Furthermore, if a new error $\hat{e}_i(t)$ is defined by $\hat{e}_i(t)=\exp\{\alpha t\}e_i(t)$, then the synchronization error system (6) or (6*) can be transformed into the following form:
$$\dot{\hat{e}}_i(t)\in-\big([\underline{d}_i,\bar{d}_i]-\alpha\big)\hat{e}_i(t)+\sum_{j=1}^{n}[\underline{a}_{ij},\bar{a}_{ij}]\hat{\phi}_j(\hat{e}_j(t))+\sum_{j=1}^{n}[\underline{b}_{ij},\bar{b}_{ij}]\tilde{\phi}_j(\hat{e}_j(t-\tau_j(t)))-\sum_{j=1}^{n}\omega_{ij}\hat{e}_j(t),\quad t\ge 0,\ i=1,2,\ldots,n,\qquad(8)$$
or equivalently, for $i,j=1,2,\ldots,n$, there exist $\tilde{d}_i^{*}\in[\underline{d}_i,\bar{d}_i]$, $\tilde{a}_{ij}^{*}\in[\underline{a}_{ij},\bar{a}_{ij}]$ and $\tilde{b}_{ij}^{*}\in[\underline{b}_{ij},\bar{b}_{ij}]$ such that
$$\dot{\hat{e}}_i(t)=-\big(\tilde{d}_i^{*}-\alpha\big)\hat{e}_i(t)+\sum_{j=1}^{n}\tilde{a}_{ij}^{*}\hat{\phi}_j(\hat{e}_j(t))+\sum_{j=1}^{n}\tilde{b}_{ij}^{*}\tilde{\phi}_j(\hat{e}_j(t-\tau_j(t)))-\sum_{j=1}^{n}\omega_{ij}\hat{e}_j(t),\quad t\ge 0,\ i=1,2,\ldots,n,\qquad(8^{*})$$
where
$$\hat{\phi}_j(\hat{e}_j(t))=\exp\{\alpha t\}\phi_j(e_j(t)),\qquad \tilde{\phi}_j(\hat{e}_j(t-\tau_j(t)))=\exp\{\alpha t\}\phi_j(e_j(t-\tau_j(t))),$$
$$\phi_j(e_j(t))=g_j(e_j(t)+z_j(t))-g_j(z_j(t)),\qquad \phi_j(e_j(t-\tau_j(t)))=g_j(e_j(t-\tau_j(t))+z_j(t-\tau_j(t)))-g_j(z_j(t-\tau_j(t))).$$
From the definitions of $\phi_j(e_j(t))$ and $\hat{\phi}_j(\hat{e}_j(t))$, we then have

$$|\phi_j(e_j(t))|\le L_j|e_j(t)|,\qquad |\hat{\phi}_j(\hat{e}_j(t))|=|\exp\{\alpha t\}\phi_j(e_j(t))|\le L_j|\exp\{\alpha t\}e_j(t)|=L_j|\hat{e}_j(t)|.$$

Definition 1. System (8) or (8*) is said to be globally asymptotically stable if there exist a positive definite and radially unbounded function $V$ and a class-$\mathcal{K}$ function $\mathcal{K}$ defined on $\mathbb{R}_{+}$ such that, for any initial condition $\nu(s)=(\nu_1(s),\nu_2(s),\ldots,\nu_n(s))^T\in C([-\tau,0],\mathbb{R}^n)$,
$$D^{+}V\big|_{(8)\ \mathrm{or}\ (8^{*})}\le-\mathcal{K}\big(\|\hat{e}(t)\|\big),$$
where a function $\mathcal{K}$ is said to belong to class $\mathcal{K}$ if $\mathcal{K}(0)=0$ and $\mathcal{K}$ is strictly increasing on $\mathbb{R}_{+}$. Definition 1 is rooted in [4] and [27] (p. 222, Theorem 6.4.6).

For further deriving the exponential synchronization conditions on the control law, the following lemmas are needed.

Lemma 1. Define the $2n\times 2n$ Hamiltonian matrix
$$H=\begin{pmatrix}D & QQ^{T}\\ -(K_1+K_2)-\varepsilon E_n & -D^{T}\end{pmatrix},$$
where $\varepsilon$ is a sufficiently small positive constant, $D=-\operatorname{diag}(\underline{d}_i-\alpha)-\Omega$, $Q=(A\ \ B)$, $K_1=\operatorname{diag}(L_i^{2})$, $K_2=\operatorname{diag}\big(\exp\{2\alpha\tau_i\}L_i^{2}/(1-\mu_i)\big)$, $A=(\bar{a}_{ij})_{n\times n}$ and $B=(\bar{b}_{ij})_{n\times n}$. If $D$ is a stable matrix and the Hamiltonian matrix $H$ has no eigenvalues on the imaginary axis, then the algebraic Riccati equation
$$D^{T}P+PD+PQQ^{T}P+(K_1+K_2)+\varepsilon E_n=0\qquad(9)$$
has a symmetric and positive definite solution $P$ for a given $\alpha>0$.

Proof. The proof is a direct result of Lemma 4 in [7].

Remark 3. A real matrix $D$ is stable if and only if all of its eigenvalues have negative real parts. All eigenvalues of $D$ defined in Lemma 1 can be arbitrarily assigned by appropriately choosing the controller gain matrix $\Omega$. In particular, if the gain matrix is chosen as a diagonal matrix $\Omega=\operatorname{diag}(\omega_i)$ with $\omega_i>\alpha-\underline{d}_i$, $i=1,2,\ldots,n$, then the eigenvalues of $D$ are $-(\omega_i+\underline{d}_i-\alpha)<0$, $i=1,2,\ldots,n$, which implies that $D$ is a stable matrix.

Lemma 2 [16]. If the origin of $\hat{e}(t)$ is asymptotically convergent, then the solution $e(t)$ of system (6) or (6*) is exponentially convergent with a degree $\alpha$.

Theorem 1. Under assumptions (A1)–(A3), if the controller gain matrix $\Omega$ in (7) is suitably designed such that $D$ is a stable matrix and the Hamiltonian matrix $H$ defined in Lemma 1 for a given $\alpha>0$ has no eigenvalues on the imaginary axis, then the networks (4) or (4**) and (5) or (5*) are synchronized exponentially with a degree at least $\alpha$.

Proof. Transform (8) or (8*) into the vector form:

$$\dot{\hat{e}}(t)\in[\underline{D},\bar{D}]\hat{e}(t)+[\underline{A},\bar{A}]\hat{\phi}(\hat{e}(t))+[\underline{B},\bar{B}]\tilde{\phi}(\hat{e}(t-\tau(t))),\quad t\ge 0,\qquad(10)$$
or equivalently, there exist $\mathcal{D}\in[\underline{D},\bar{D}]$, $\mathcal{A}\in[\underline{A},\bar{A}]$ and $\mathcal{B}\in[\underline{B},\bar{B}]$ such that
$$\dot{\hat{e}}(t)=\mathcal{D}\hat{e}(t)+\mathcal{A}\hat{\phi}(\hat{e}(t))+\mathcal{B}\tilde{\phi}(\hat{e}(t-\tau(t))),\quad t\ge 0,\qquad(10^{*})$$
where
$$\hat{e}(t)=(\hat{e}_1(t),\hat{e}_2(t),\ldots,\hat{e}_n(t))^{T},\qquad \hat{\phi}(\hat{e}(t))=\big(\hat{\phi}_1(\hat{e}_1(t)),\hat{\phi}_2(\hat{e}_2(t)),\ldots,\hat{\phi}_n(\hat{e}_n(t))\big)^{T},$$
$$\tilde{\phi}(\hat{e}(t-\tau(t)))=\big(\tilde{\phi}_1(\hat{e}_1(t-\tau_1(t))),\tilde{\phi}_2(\hat{e}_2(t-\tau_2(t))),\ldots,\tilde{\phi}_n(\hat{e}_n(t-\tau_n(t)))\big)^{T},$$
$$\underline{D}=-\operatorname{diag}(\bar{d}_i-\alpha)-\Omega,\qquad \bar{D}=-\operatorname{diag}(\underline{d}_i-\alpha)-\Omega,$$
$$\underline{A}=(\underline{a}_{ij})_{n\times n},\qquad \bar{A}=(\bar{a}_{ij})_{n\times n},\qquad \underline{B}=(\underline{b}_{ij})_{n\times n},\qquad \bar{B}=(\bar{b}_{ij})_{n\times n}.$$
Note that $\bar{D}$, $\bar{A}$ and $\bar{B}$ coincide with the matrices $D$, $A$ and $B$ defined in Lemma 1.

Since $D$ is stable and the Hamiltonian matrix $H$ has no eigenvalues on the imaginary axis, according to Lemma 1 the algebraic Riccati equation (9) has a symmetric and positive definite solution $P$. To confirm that the origin of (10) or (10*) is globally asymptotically convergent, consider the following functional $V(t)$ on $C([-\tau,0],\mathbb{R}^n)$:
$$V(t)=\hat{e}(t)^{T}P\hat{e}(t)+\sum_{j=1}^{n}\frac{\exp\{2\alpha\tau_j\}}{1-\mu_j}\int_{t-\tau_j(t)}^{t}\hat{\phi}_j^{2}(\hat{e}_j(s))\,ds.$$

Calculating the upper right Dini derivative of $V(t)$ along the trajectories of (10) or (10*), and using the fact that $X^{T}Y+Y^{T}X\le X^{T}X+Y^{T}Y$ for any matrices $X$ and $Y$ with appropriate dimensions, we obtain
$$\begin{aligned}
D^{+}V(t)&=\dot{\hat{e}}(t)^{T}P\hat{e}(t)+\hat{e}(t)^{T}P\dot{\hat{e}}(t)+\sum_{j=1}^{n}\frac{\exp\{2\alpha\tau_j\}}{1-\mu_j}\hat{\phi}_j^{2}(\hat{e}_j(t))-\sum_{j=1}^{n}\frac{\exp\{2\alpha\tau_j\}}{1-\mu_j}\big(1-\dot{\tau}_j(t)\big)\hat{\phi}_j^{2}(\hat{e}_j(t-\tau_j(t)))\\
&\le\dot{\hat{e}}(t)^{T}P\hat{e}(t)+\hat{e}(t)^{T}P\dot{\hat{e}}(t)+\sum_{j=1}^{n}\frac{\exp\{2\alpha\tau_j\}}{1-\mu_j}L_j^{2}\hat{e}_j^{2}(t)-\sum_{j=1}^{n}\exp\{2\alpha\tau_j\}\hat{\phi}_j^{2}(\hat{e}_j(t-\tau_j(t)))\\
&=\dot{\hat{e}}(t)^{T}P\hat{e}(t)+\hat{e}(t)^{T}P\dot{\hat{e}}(t)+\hat{e}(t)^{T}K_2\hat{e}(t)-\sum_{j=1}^{n}\exp\{2\alpha\tau_j\}\hat{\phi}_j^{2}(\hat{e}_j(t-\tau_j(t)))\\
&\le\hat{e}(t)^{T}\big(D^{T}P+PD\big)\hat{e}(t)+\big[\hat{e}(t)^{T}PAA^{T}P\hat{e}(t)+\hat{\phi}(\hat{e}(t))^{T}\hat{\phi}(\hat{e}(t))\big]+\big[\hat{e}(t)^{T}PBB^{T}P\hat{e}(t)+\tilde{\phi}(\hat{e}(t-\tau(t)))^{T}\tilde{\phi}(\hat{e}(t-\tau(t)))\big]\\
&\quad+\hat{e}(t)^{T}K_2\hat{e}(t)-\sum_{j=1}^{n}\exp\{2\alpha\tau_j\}\hat{\phi}_j^{2}(\hat{e}_j(t-\tau_j(t)))\\
&\le\hat{e}(t)^{T}\big(D^{T}P+PD\big)\hat{e}(t)+\hat{e}(t)^{T}PAA^{T}P\hat{e}(t)+\sum_{j=1}^{n}L_j^{2}\hat{e}_j^{2}(t)+\hat{e}(t)^{T}PBB^{T}P\hat{e}(t)+\sum_{j=1}^{n}\exp\{2\alpha t\}\phi_j^{2}(e_j(t-\tau_j(t)))\\
&\quad+\hat{e}(t)^{T}K_2\hat{e}(t)-\sum_{j=1}^{n}\exp\{2\alpha\tau_j\}\hat{\phi}_j^{2}(\hat{e}_j(t-\tau_j(t)))\\
&=\hat{e}(t)^{T}\big(D^{T}P+PD\big)\hat{e}(t)+\hat{e}(t)^{T}PAA^{T}P\hat{e}(t)+\hat{e}(t)^{T}K_1\hat{e}(t)+\hat{e}(t)^{T}PBB^{T}P\hat{e}(t)\\
&\quad+\sum_{j=1}^{n}\exp\{2\alpha\tau_j(t)\}\exp\{2\alpha(t-\tau_j(t))\}\phi_j^{2}(e_j(t-\tau_j(t)))+\hat{e}(t)^{T}K_2\hat{e}(t)-\sum_{j=1}^{n}\exp\{2\alpha\tau_j\}\hat{\phi}_j^{2}(\hat{e}_j(t-\tau_j(t)))\\
&\le\hat{e}(t)^{T}\big(D^{T}P+PD\big)\hat{e}(t)+\hat{e}(t)^{T}PAA^{T}P\hat{e}(t)+\hat{e}(t)^{T}K_1\hat{e}(t)+\hat{e}(t)^{T}PBB^{T}P\hat{e}(t)+\sum_{j=1}^{n}\exp\{2\alpha\tau_j\}\hat{\phi}_j^{2}(\hat{e}_j(t-\tau_j(t)))\\
&\quad+\hat{e}(t)^{T}K_2\hat{e}(t)-\sum_{j=1}^{n}\exp\{2\alpha\tau_j\}\hat{\phi}_j^{2}(\hat{e}_j(t-\tau_j(t)))\\
&=\hat{e}(t)^{T}\big[D^{T}P+PD+P(AA^{T}+BB^{T})P+(K_1+K_2)\big]\hat{e}(t)\\
&=\hat{e}(t)^{T}\big[D^{T}P+PD+PQQ^{T}P+(K_1+K_2)\big]\hat{e}(t)=-\varepsilon\hat{e}(t)^{T}\hat{e}(t)=-\varepsilon\|\hat{e}(t)\|^{2}.
\end{aligned}$$
Combining this with Definition 1, the last inequality $D^{+}V(t)\le-\varepsilon\|\hat{e}(t)\|^{2}$ indicates that $V(t)$ converges to zero asymptotically. By Lemma 2, we conclude that $e(t)$ converges to zero globally and exponentially with a rate $\alpha$. The proof is completed.

Remark 4. In Lemma 1, it is not apparent how one can choose the gain matrix $\Omega$ such that the Hamiltonian matrix $H$ has no eigenvalues on the imaginary axis. Therefore, it is not simple to find analytical solutions of the conditions of Theorem 1. Fortunately, they can be checked numerically by an eigenvalue solver (for example, the MATLAB toolbox) together with a trial-and-error procedure. Furthermore, the sufficient conditions in Theorem 1 are easily satisfied if $\mathrm{Re}\{\lambda_{\max}(D)\}$ (the maximum real part of the eigenvalues of $D$) is made more negative by suitably selecting the gain matrix $\Omega$. Of course, the gain matrix $\Omega$ in the synchronization control law is not unique. Here we give a tentative method to construct the gain matrix $\Omega$: given a positive constant $\alpha$ and an arbitrarily small positive constant $\varepsilon$, one can obtain a suitable gain matrix $\Omega$ by adjusting the stable matrix $D$. In fact, the proposed synchronization law can be implemented by computer programming, which has clear practical advantages. A specific computational procedure can be designed as follows (a numerical sketch of these steps is given below the list):

Step 1: Given a positive constant $\alpha$ and an arbitrarily small positive constant $\varepsilon$, choose a suitable gain matrix $\Omega$ such that $D$ is a stable matrix, using any eigenvalue assignment technique.
Step 2: Construct the Hamiltonian matrix $H$ in Lemma 1 and check whether $H$ has eigenvalues on the imaginary axis. If it has none, the procedure goes to Step 4; otherwise, it continues to Step 3.
Step 3: Make $\mathrm{Re}\{\lambda_{\max}(D)\}$ more negative by selecting a new gain matrix $\Omega$ and go back to Step 2.
Step 4: Obtain the state-feedback controller (7).
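The following Python sketch walks through Steps 1 and 2 for a small network. The bounds $\underline{d}_i$, $L_i$, $\tau_i$, $\mu_i$, $\bar{a}_{ij}$, $\bar{b}_{ij}$ and the candidate gain $\Omega$ are hypothetical values chosen only to illustrate the check; they are not the data of the example in Section 4.

```python
import numpy as np

# Numerical check of the conditions of Theorem 1 (Steps 1-2 of Remark 4); sketch only.
n = 2
alpha, eps = 0.2, 0.1
d_low = np.array([1.0, 1.0])                  # lower bounds of d_i
L = np.array([1.0, 1.0])                      # Lipschitz constants L_i
tau = np.array([1.0, 2.0])                    # delay bounds tau_i
mu = np.array([0.0, 0.0])                     # bounds on the derivatives of tau_i(t)
A_bar = np.array([[3.0, 0.0], [0.0, 2.0]])    # upper bounds of a_ij
B_bar = np.array([[3.0, 0.0], [0.0, 2.0]])    # upper bounds of b_ij
Omega = 10.0 * np.eye(n)                      # candidate gain matrix (Step 1)

D = -np.diag(d_low - alpha) - Omega           # D as in Lemma 1
Q = np.hstack([A_bar, B_bar])
K1 = np.diag(L ** 2)
K2 = np.diag(np.exp(2.0 * alpha * tau) * L ** 2 / (1.0 - mu))

# Step 1: D must be stable (all eigenvalues in the open left half-plane).
print("D stable:", bool(np.all(np.linalg.eigvals(D).real < 0.0)))

# Step 2: build H and check whether it has eigenvalues on the imaginary axis.
H = np.block([[D, Q @ Q.T],
              [-(K1 + K2) - eps * np.eye(n), -D.T]])
eig_H = np.linalg.eigvals(H)
print("eigenvalues of H on the imaginary axis:", bool(np.any(np.abs(eig_H.real) < 1e-9)))

# Step 3/4: if the check fails, enlarge Omega (making Re{lambda_max(D)} more negative)
# and repeat; otherwise use u(t) = Omega e(t) as the controller (7).
```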

Remark 5. In this paper, a piecewise-linear memristor-based recurrent neural network model is given to characterize the memristor's feature of pinched hysteresis. Such a model is basically a state-dependent nonlinear switching dynamical system. However, since the memristor-based system consists of a great many subsystems, it is very difficult to analyze its dynamic behaviors directly. To carry out the dynamic analysis of the memristive switching system, we use differential inclusions to avoid this difficulty: under the framework of Filippov solutions, we can turn to analyzing a relevant differential inclusion. We obtain some sufficient conditions for the synchronization control of memristor-based recurrent neural networks which are generalizations of those for conventional recurrent neural networks.

4. An illustrative example

Referring to Example 1 introduced in Zeng and Wang [41] for analyzing and designing associative memories based on recurrent neural networks, we consider a neural network with 12 neurons (n = 12) of the configuration given in Fig. 2; after introducing the memristors, we discuss the following numerical example.

Example 1. Consider the 12-neuron memristor-based recurrent neural network model with the configuration given in Fig. 2:

$$\dot{x}(t)=-D(x)x(t)+A(x)g(x(t))+B(x)g(x(t-\tau(t)))+I,\quad t\ge 0,\qquad(11)$$

where
$$x(t)=(x_1(t),x_2(t),\ldots,x_{12}(t))^{T},\qquad I=(I_1,I_2,\ldots,I_{12})^{T},$$
$$D(x)=\widehat{D}\ \ \text{if}\ |x(t)|<T,\qquad D(x)=\check{D}\ \ \text{if}\ |x(t)|>T,$$
$$A(x)=\widehat{A}\ \ \text{if}\ |x(t)|<T,\qquad A(x)=\check{A}\ \ \text{if}\ |x(t)|>T,$$
$$B(x)=\widehat{B}\ \ \text{if}\ |x(t)|<T,\qquad B(x)=\check{B}\ \ \text{if}\ |x(t)|>T,$$
$$g(x(t))=(g(x_1(t)),g(x_2(t)),\ldots,g(x_{12}(t)))^{T},\qquad g(x(t-\tau(t)))=\big(g(x_1(t-\tau_1(t))),g(x_2(t-\tau_2(t))),\ldots,g(x_{12}(t-\tau_{12}(t)))\big)^{T},$$
$$\tau_i(t)=i,\quad i=1,2,\ldots,12,\qquad g(\theta)=\frac{1}{2}\big(|\theta+1|-|\theta-1|\big).\qquad(12)$$

Fig. 2. Interconnecting structure of a 12-neuron neural network.
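The activation used here is the standard piecewise-linear saturation, which is bounded by 1 in absolute value and satisfies assumption (A1) with Lipschitz constant $L_i=1$. The following minimal Python snippet checks these two properties numerically on a grid of points (a sanity check, not a proof):

```python
import numpy as np

g = lambda v: 0.5 * (np.abs(v + 1.0) - np.abs(v - 1.0))   # activation of (12)

v = np.linspace(-5.0, 5.0, 801)
print("bounded by 1:", bool(np.max(np.abs(g(v))) <= 1.0))

# Lipschitz constant estimated from all pairwise difference quotients on the grid.
dv = v[:, None] - v[None, :]
dg = g(v)[:, None] - g(v)[None, :]
quotients = np.abs(dg[dv != 0.0] / dv[dv != 0.0])
print("estimated Lipschitz constant:", quotients.max())    # should be 1.0
```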

In fact, system (11) has multiple equilibria, which provide a good pattern characterization for analyzing and designing associative memories. In particular, when algorithms must be designed for huge numbers of desired memory patterns, memristive neural networks offer great convenience. Even when system (11) does not exhibit memristive pinched hysteresis, in order to store the desired memory patterns in Fig. 3 (for purposes of visualization, 1 represents white and -1 represents black), according to Corollary 5 in [41], if we set $I_i=1.825$ ($i=1,2,\ldots,12$) and $D(x)=E_{12}$, then we can choose the desired connection weight matrices

$$A(x)+B(x)=\big(c_{ij}\big)_{12\times 12},$$
a specific $12\times 12$ connection weight matrix whose numerical entries are fixed by the design procedure of Corollary 5 in [41].

Fig. 3. Four desired memory patterns (3 × 4).

Next, we study the synchronization control of system (11). In order to simplify the illustration, let $\widehat{D}=E_{12}$, $\check{D}=1.5E_{12}$, $I=(1.825,1.825,\ldots,1.825)^{T}$, $T=(0.8,0.8,\ldots,0.8)^{T}$, $\widehat{A}=\widehat{B}=\begin{pmatrix}3E_6 & 0\\ 0 & 2E_6\end{pmatrix}$, and

$$\check{A}=\check{B}=\big(\check{a}_{ij}\big)_{12\times 12},$$
a specific block-structured $12\times 12$ coefficient matrix.

Let the parameters be $\alpha=0.2$ and $\varepsilon=0.1$, and choose the controller gain matrix $\Omega=5E_{12}$; then the corresponding Hamiltonian matrix is
$$H=\begin{pmatrix}Q_1 & Q_2\\ Q_3 & -Q_1^{T}\end{pmatrix},$$
with $Q_1=-5.8E_{12}$, $Q_2=\begin{pmatrix}18E_6 & 0\\ 0 & 8E_6\end{pmatrix}$ and $Q_3=-\operatorname{diag}(2.3214,\ 2.5918,\ 2.9221,\ 3.3255,\ 3.8183,\ 4.4201,\ 5.1552,\ 6.053,\ 7.1496,\ 8.4891,\ 10.125,\ 12.1232)$. Obviously, the Hamiltonian matrix $H$ has no eigenvalues on the imaginary axis, and the conditions of Theorem 1 are satisfied, so the coupling of the variables can drive the two coupled neural networks to synchronize exponentially.
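To illustrate how the drive-response scheme can be exercised numerically, the following Python sketch simulates a small two-neuron instance of the coupled systems (4) and (5) under the state-feedback controller $u(t)=\Omega e(t)$ of (7), using a simple Euler scheme with a history buffer for the constant delays. All parameter values in the sketch are illustrative assumptions and are not those of the 12-neuron example above.

```python
import numpy as np

# Euler simulation of a 2-neuron drive system (4) and response system (5)
# coupled through u(t) = Omega @ e(t). All parameters are illustrative.
n, h, steps = 2, 0.005, 4000
T = np.array([0.8, 0.8])
d_hat, d_check = np.array([1.0, 1.0]), np.array([1.5, 1.5])
a_hat = np.array([[2.0, -0.1], [-0.5, 3.0]]); a_check = 0.9 * a_hat
b_hat = np.array([[-1.5, -0.1], [-0.2, -2.5]]); b_check = 0.9 * b_hat
I = np.array([0.1, -0.1])
tau = np.array([1.0, 2.0])                        # constant delays tau_j
delay_steps = np.round(tau / h).astype(int)
Omega = 8.0 * np.eye(n)                           # gain matrix of (7)
g = lambda v: 0.5 * (np.abs(v + 1.0) - np.abs(v - 1.0))

def coeffs(x):
    inner = np.abs(x) < T
    return (np.where(inner, d_hat, d_check),
            np.where(inner[:, None], a_hat, a_check),
            np.where(inner[:, None], b_hat, b_check))

def rhs(x, x_delayed, u):
    d, a, b = coeffs(x)
    g_delayed = np.array([g(x_delayed[j][j]) for j in range(n)])  # g_j(x_j(t - tau_j))
    return -d * x + a @ g(x) + b @ g_delayed + I + u

buf = delay_steps.max() + 1                       # history buffer length
x_hist = np.tile(np.array([0.4, -0.6]), (buf, 1))     # constant initial history of the drive
z_hist = np.tile(np.array([-0.3, 0.5]), (buf, 1))     # constant initial history of the response
err = []
for _ in range(steps):
    x, z = x_hist[-1], z_hist[-1]
    x_del = np.array([x_hist[-1 - delay_steps[j]] for j in range(n)])
    z_del = np.array([z_hist[-1 - delay_steps[j]] for j in range(n)])
    u = Omega @ (x - z)                           # controller (7), e(t) = x(t) - z(t)
    x_new = x + h * rhs(x, x_del, np.zeros(n))    # drive evolves freely
    z_new = z + h * rhs(z, z_del, u)              # response is controlled
    x_hist = np.vstack([x_hist, x_new])[-buf:]
    z_hist = np.vstack([z_hist, z_new])[-buf:]
    err.append(np.linalg.norm(x_new - z_new))

print("synchronization error after", steps * h, "time units:", err[-1])
```

With such a gain, the recorded error norm is expected to decay roughly exponentially, which is the behavior Theorem 1 guarantees when its conditions hold.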



5. Concluding remarks

Chua in 1971 proposed from symmetry arguments that there should be a fourth fundamental element, which he called a memristor, besides the fundamental passive circuit elements: the resistor, the capacitor and the inductor. Although he showed that such an element has many interesting and valuable circuit properties, until now it has still been very difficult to present a useful physical model or material example of a memristor. In this paper, we formulate and investigate a preliminary memristor-based recurrent neural network model to imitate the memristive pinched hysteresis. Some sufficient conditions are derived to guarantee the exponential synchronization of the memristor-based recurrent neural networks. Finally, a numerical example is discussed to illustrate the validity of the obtained result. In addition to memory applications, we believe the derived analysis can be further extended to provide valuable design insights and allow an in-depth understanding of key design implications of memristor-based memories.

Acknowledgements

The authors thank the Associate Editor and four anonymous reviewers for their insightful comments and valuable suggestions, which have helped us improve this paper greatly. The work is supported by the Natural Science Foundation of China under Grant 60974021, the 973 Program of China under Grant 2011CB710606, the Fund for Distinguished Young Scholars of Hubei Province under Grant 2010CDA081, and the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant 20100142110021.

References

[1] J.D. Cao, L.L. Li, Cluster synchronization in an array of hybrid coupled neural networks with delay, Neural Netw. 22 (2009) 335–342. [2] C.J. Cheng, T.L. Liao, J.J. Yan, C.C. Hwang, Exponential synchronization of a class of neural networks with time-varying delays, IEEE Trans. Syst. Man Cybern. B, Cybern. 36 (2006) 209–215. [3] L.O. Chua, Memristor-the missing circuit element, IEEE Trans. Circuit Theory CT-18 (1971) 507–519. [4] F.H. Clarke, Y.S. Ledyaev, R.J. Stern, R.R. Wolenski, Nonsmooth Analysis and Control Theory, Springer, New York, 1998. [5] F. Corinto, A. Ascoli, M. Gilli, Nonlinear dynamics of memristor oscillators, IEEE Trans. Circuits Syst. I, Reg. Pap. 58 (2011) 1323–1336. [6] B.T. Cui, X.Y. Lou, Synchronization of chaotic recurrent neural networks with time-varying delays using nonlinear feedback control, Chaos Solitons Fract. 39 (2009) 288–294. [7] J.C. Doyle, K. Glover, P.P. Khargonekar, B.A. Francis, State-space solutions to standard H2 and H-infinity control problems, IEEE Trans. Autom. Control 34 (1989) 831–847. [8] A.F. Filippov, Differential Equations with Discontinuous Righthand Sides, Kluwer Academic Publishers, Boston, MA, 1988. [9] Q.T. Gan, R. Xu, X.B. Kang, Synchronization of chaotic neural networks with mixed time delays, Comm. Nonlinear Sci. Numer. Simul. 16 (2011) 966–974. [10] X.Z. Gao, S.M. Zhong, F.Y. Gao, Exponential synchronization of neural networks with time-varying delays, Nonlinear Anal.: Theory Methods Appl. 71 (2009) 2003–2011. [11] J. Hu, J. Wang, Global uniform asymptotic stability of memristor-based recurrent neural networks with time delays, in: 2010 International Joint Conference on Neural Networks, IJCNN 2010, Barcelona, Spain, 2010, pp. 1–8. [12] H. Huang, G. Feng, Synchronization of nonidentical chaotic neural networks with time delays, Neural Netw. 22 (2009) 869–874. [13] C.X. Huang, Y.G. He, L.H. Huang, W.J.
Zhu, pth moment stability analysis of stochastic recurrent neural networks with time-varying delays, Inform. Sci. 178 (2008) 2194–2203. [14] M. Itoh, L.O. Chua, Memristor oscillators, Int. J. Bifur. Chaos 18 (2008) 3183–3206. [15] M. Itoh, L.O. Chua, Memristor cellular automata and memristor discrete-time cellular neural networks, Int. J. Bifur. Chaos 19 (2009) 3605–3656. [16] H.K. Khalil, Nonlinear Systems, Macmillan, New York, 1992. [17] X.D. Li, M. Bohner, Exponential synchronization of chaotic neural networks with mixed delays and impulsive effects via output coupling with delay feedback, Math. Comput. Model. 52 (2010) 643–653. [18] T. Li, S.M. Fei, K.J. Zhang, Synchronization control of recurrent neural networks with distributed delays, Phys. A: Stat. Mech. Appl. 387 (2008) 982–996. [19] T. Li, S.M. Fei, Q. Zhu, S. Cong, Exponential synchronization of chaotic neural networks with mixed delays, Neurocomputing 71 (2008) 3005–3019. [20] T. Li, A.G. Song, S.M. Fei, Y.Q. Guo, Synchronization control of chaotic neural networks with time-varying and distributed delays, Nonlinear Anal.: Theory Methods Appl. 71 (2009) 2372–2384. [21] M.Q. Liu, Optimal exponential synchronization of general chaotic delayed neural networks: An LMI approach, Neural Netw. 22 (2009) 949–957. [22] B. Liu, W.L. Lu, T.P. Chen, Global almost sure self-synchronization of Hopfield neural networks with randomly switching connections, Neural Netw. 24 (2011) 305–310. [23] X.Y. Lou, B.T. Cui, Synchronization of neural networks based on parameter identification and via output or state coupling, J. Appl. Math. Comput. 222 (2008) 40–457. [24] J.G. Lu, G.R. Chen, Global asymptotical synchronization of chaotic neural networks by output feedback impulsive control: An LMI approach, Chaos Solitons Fract. 41 (2009) 2293–2300. [25] Z. Lu, L.S. Shieh, G.R. Chen, N.P. Coleman, Adaptive feedback linearization control of chaotic systems via recurrent high-order neural networks, Inform. Sci. 176 (2006) 2337–2354. [26] F. Merrikh-Bayat, S.B. Shouraki, Memristor-based circuits for performing basic arithmetic operations, Procedia Comput. Sci. 3 (2011) 128–132. [27] A.N. Michel, L. Hou, D. Liu, Stability of Dynamical Systems: Continuous, Discontinuous and Discrete Systems, Birkhauser, Boston, MA, 2007. [28] I. Petras, Fractional-order memristor-based Chua’s circuit, IEEE Trans. Circuits Syst. II, Exp. Briefs 57 (2010) 975–979. [29] C. Posadas-Castillo, C. Cruz-Hernandez, R.M. Lopez-Gutierrez, Synchronization of chaotic neural networks with delay in irregular networks, Appl. Math. Comput. 205 (2008) 487–496.


[30] E.N. Sanchez, L.J. Ricalde, Chaos control and synchronization, with input saturation, via recurrent neural networks, Neural Netw. 16 (2003) 711–717. [31] L. Sheng, H.Z. Yan, Exponential synchronization of a class of neural networks with mixed time-varying delays and impulsive effects, Neurocomputing 71 (2008) 3666–3674. [32] Q.K. Song, Synchronization analysis of coupled connected neural networks with mixed time delays, Neurocomputing 72 (2009) 3907–3914. [33] Q.K. Song, Synchronization analysis in an array of asymmetric neural networks with time-varying delays and nonlinear coupling, Appl. Math. Comput. 216 (2010) 1605–1613. [34] D.B. Strukov, G.S. Snider, G.R. Stewart, R.S. Williams, The missing memristor found, Nature 453 (2008) 80–83. [35] J.M. Tour, T. He, The fourth element, Nature 453 (2008) 42–43. [36] A.L. Wu, C.J. Fu, Global exponential stability of non-autonomous FCNNs with Dirichlet boundary conditions and reaction-diffusion terms, Appl. Math. Model. 34 (2010) 3022–3029. [37] A.L. Wu, Z.G. Zeng, C.J. Fu, W.W. Shen, Global exponential stability in Lagrange sense for periodic neural networks with various activation functions, Neurocomputing 74 (2011) 831–837. [38] X.S. Yang, J.D. Cao, Stochastic synchronization of coupled neural networks with intermittent control, Phys. Lett. A 373 (2009) 3259–3272. [39] S.J. Yoo, J.B. Park, Y.H. Choi, Indirect adaptive control of nonlinear dynamic systems using self-recurrent wavelet neural networks via adaptive learning rates, Inform. Sci. 177 (2007) 3074–3098. [40] W.W. Yu, J.D. Cao, W.L. Lu, Synchronization control of switched linearly coupled neural networks with delay, Neurocomputing 73 (2010) 858–866. [41] Z.G. Zeng, J. Wang, Analysis and design of associative memories based on recurrent neural networks with linear saturation activation functions and time-varying delays, Neural Comput. 19 (2007) 2149–2182. [42] C.K. Zhang, Y. He, M. Wu, Exponential synchronization of neural networks with time-varying mixed delays and sampled-data, Neurocomputing 74 (2010) 265–273. [43] Y.P. Zhang, J.T. Sun, Robust synchronization of coupled delayed neural networks under general impulsive control, Chaos Solitons Fract. 41 (2009) 1476–1480.