FPGA-Implementation of an Adaptive Neural Network for RF Power Amplifier Modeling

Mohammed Bahoura
Department of Engineering, Université du Québec à Rimouski, 300, allée des Ursulines, Rimouski, Qc, Canada. Email: Mohammed [email protected]

Chan-Wang Park
Department of Engineering, Université du Québec à Rimouski, 300, allée des Ursulines, Rimouski, Qc, Canada. Email: Chan-Wang [email protected]

Abstract—In this paper, we propose an architecture for the FPGA implementation of an adaptive neural network for RF power amplifier behavioral modeling. The real-valued time-delay neural network (RVTDNN) and the backpropagation (BP) learning algorithm were implemented on an FPGA using Xilinx System Generator for DSP and the Virtex-6 FPGA ML605 Evaluation Kit. The performance obtained with a 16-QAM modulated test signal and the material resource requirements are presented for a network of six hidden-layer neurons.

I. INTRODUCTION

A first step in analyzing a power amplifier (PA) system and designing a linearizer for a PA is to model the PA nonlinearity accurately. Most previous models consider the memory effects in real PAs, which often arise from thermal effects and long time constants in the DC bias circuits. This means that the AM/AM and AM/PM functions are not static but change depending on the history of past signal levels. Since these phenomena come from memory effects, a model that accounts for them is needed: dynamic AM/AM and AM/PM characteristics are considered to correct the nonlinearity of the PA, so PA linearization must be adaptive to take memory effects into account.

Many kinds of PA models have been reported previously, such as the Volterra series model, the Wiener model, the polynomial model, the neural network model, and the neuro-fuzzy model. Among them, the neural network is used to characterize and model the PA in this paper. With neural networks based on tapped delay lines (TDNN), we can compensate even the short-term memory effects present in the PA characteristics. Moreover, an adaptive architecture allows us to address long-term memory effects such as the electro-thermal memory effect.

Most of the PA modeling algorithms proposed by previous authors [1], [2], [3], [4] were implemented only in a software environment, e.g., MATLAB with test equipment, without using a real Field-Programmable Gate Array (FPGA). The important issues related to practical FPGA hardware implementation were not discussed, even though different implementation structures lead to very different results in terms of complexity and cost on an FPGA. In this paper, we present a low-complexity, low-cost solution for the real-time implementation of the proposed neural network PA model on a real FPGA chip.

II. METHOD

A. Real-Valued Time-Delay Neural Network

The real-valued time-delay neural network (RVTDNN) is based on a multi-layer perceptron (MLP) with tapped delay lines (TDLs) added at its inputs [1] (Fig. 1). At any moment n, the past m values of the baseband inputs I_in(n) and Q_in(n) are also presented to the MLP network in order to account for the memory effects. The input data are defined by an input vector x as follows:

x = [I_in(n), I_in(n-1), ..., I_in(n-m), Q_in(n), Q_in(n-1), ..., Q_in(n-m)]   (1)

where m is the memory depth. As shown in Fig. 1, the MLP network is a collection of neurons (nodes) arranged in layers in a feed-forward manner. Signals pass into the input-layer nodes, progress forward through the hidden layers, and finally emerge from the output layer [5]. The figure shows an example of an MLP network characterized by 2m+2 inputs, one hidden layer of N nodes, and 2 outputs. Each node j in the hidden layer receives the output of each node i from the input layer through a connection of weight w_{j,i}^h and produces a corresponding response y_j^h, which is forwarded to the output layer. In fact, each node j performs a weighted sum that is passed through a nonlinear function ϕ_h according to

v_j^h = Σ_{i=1}^{2m+2} w_{j,i}^h x_i + w_{j,0}^h,   j = 1, ..., N   (2)

y_j^h = ϕ_h(v_j^h)   (3)

where w_{j,0}^h represents the bias of the jth neuron and m is the memory depth. In the same manner, the output of each neuron k in the output layer is given by

v_k^o = Σ_{j=1}^{N} w_{k,j}^o y_j^h + w_{k,0}^o,   k = 1, 2   (4)

y_k^o = ϕ_o(v_k^o)   (5)

where ϕ_o is the activation function, w_{k,j}^o is the synaptic weight connecting the output of the jth neuron in the hidden layer to the kth neuron of the output layer, and w_{k,0}^o is the bias of the kth neuron. The activation functions of the hidden neurons are typically the hyperbolic tangent or the logistic sigmoid. The output-neuron activation function can be linear or nonlinear, depending on whether the network performs function approximation or classification, respectively [6]. In this paper, we use the hyperbolic tangent function ϕ_h(x) = (1 - e^{-2x})/(1 + e^{-2x}) for the hidden layer and the linear function ϕ_o(x) = x for the output layer. The connection weights {w_{j,i}^h, w_{k,j}^o} are determined in the training phase using the backpropagation learning algorithm.

Fig. 1. Structure of the two-layer RVTDNN PA behavioral model.

B. Back-Propagation Algorithm

The backpropagation algorithm performs gradient descent on the error. It can be implemented in two different ways: sequential mode and batch mode [7]. In sequential mode, the weights are updated after each training example (pattern) is applied to the network. In batch mode, all the training examples that constitute an epoch are applied to the network before the weights are updated. Obviously, for real-time applications, sequential learning must be chosen. At the nth iteration (i.e., presentation of the nth training example), the error signal at the output of neuron k is defined by [7]

e_k(n) = d_k(n) - y_k(n)   (6)

where d_k(n) and y_k(n) are the desired and actual outputs of this neuron, respectively. The instantaneous value of the total error energy at the output layer is defined as

E(n) = (1/2) Σ_{k=1}^{2} e_k^2(n)   (7)

This error can be reduced by updating the weights using the gradient descent method:

Δw_{k,j}(n) = -η ∂E(n)/∂w_{k,j}(n)   (8)

where η (0 < η < 1) is the learning rate.

1) Output Layer: The connection weights in the output layer are updated using [7]

Δw_{k,j}^o(n) = η δ_k^o(n) y_j^h(n)   (9)

where the local gradient δ_k^o(n) is defined by

δ_k^o(n) = e_k(n) ϕ'_o(v_k^o(n))   (10)

As a linear output function is used, ϕ'_o(v_k^o(n)) = 1, so Eq. (10) simplifies to

δ_k^o(n) = e_k(n)   (11)

2) Hidden Layer: The connection weights in the hidden layer are updated using [7]

Δw_{j,i}^h(n) = η δ_j^h(n) x_i(n)   (12)

where the local gradient δ_j^h(n) is defined by

δ_j^h(n) = ϕ'_h(v_j^h(n)) Σ_{k=1}^{2} δ_k^o(n) w_{k,j}^o(n)   (13)

As the hyperbolic tangent function is used, ϕ'_h(v_j^h(n)) = 1 - ϕ_h^2(v_j^h(n)), so Eq. (13) simplifies to

δ_j^h(n) = (1 - ϕ_h^2(v_j^h(n))) Σ_{k=1}^{2} δ_k^o(n) w_{k,j}^o(n)   (14)
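The forward pass of Eqs. (2)-(5) and one sequential backpropagation update of Eqs. (6)-(14) can be sketched in NumPy as follows. This is a minimal illustration, not the paper's fixed-point FPGA implementation; the memory depth, weight initialization, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: memory depth m, N hidden neurons, 2 outputs (I and Q).
m, N = 2, 6
n_in = 2 * m + 2                     # inputs: [I(n)..I(n-m), Q(n)..Q(n-m)]

# Weight matrices with an extra leading column holding the biases w_{j,0}, w_{k,0}.
Wh = rng.normal(scale=0.1, size=(N, n_in + 1))   # hidden layer, w_{j,i}^h
Wo = rng.normal(scale=0.1, size=(2, N + 1))      # output layer, w_{k,j}^o
eta = 0.05                                       # learning rate, 0 < eta < 1

def forward(x):
    """Forward pass: tanh hidden layer, linear output (Eqs. (2)-(5))."""
    xb = np.concatenate(([1.0], x))  # prepend constant 1 for the bias term
    vh = Wh @ xb                     # Eq. (2)
    yh = np.tanh(vh)                 # Eq. (3)
    yhb = np.concatenate(([1.0], yh))
    yo = Wo @ yhb                    # Eqs. (4)-(5), phi_o(x) = x
    return xb, yh, yhb, yo

def train_step(x, d):
    """One sequential-mode backpropagation update (Eqs. (6)-(14))."""
    global Wh, Wo
    xb, yh, yhb, yo = forward(x)
    e = d - yo                                           # Eq. (6)
    delta_o = e                                          # Eq. (11), linear output
    delta_h = (1.0 - yh**2) * (Wo[:, 1:].T @ delta_o)    # Eq. (14)
    Wo += eta * np.outer(delta_o, yhb)                   # Eq. (9)
    Wh += eta * np.outer(delta_h, xb)                    # Eq. (12)
    return 0.5 * np.sum(e**2)                            # Eq. (7)
```

Repeatedly calling `train_step` on incoming samples reproduces the sequential (per-pattern) learning mode that the paper selects for real-time operation.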

C. Reference Model

To illustrate the performance of the RVTDNN model, we use a reference PA system based on the Wiener model. We used the most popular memoryless nonlinear PA model, proposed by Saleh [8], which is defined by the following AM/AM and AM/PM characteristics:

A(r(t)) = α_A r(t) / (1 + β_A r²(t))   (15)

Φ(r(t)) = α_Φ r²(t) / (1 + β_Φ r²(t))   (16)

where r(t) stands for the envelope of the applied input signal. Typical parameter values are α_A = 2.1587, β_A = 1.1517, α_Φ = 4.0033, and β_Φ = 9.1040. The memory effects are modeled by a 2nd-order finite impulse response (FIR) filter defined by

H(z) = 1 + 0.5 z^{-2}   (17)
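The reference model above can be sketched in a few lines of NumPy. Following the Wiener structure (a linear filter followed by a static nonlinearity), the FIR of Eq. (17) is applied before the Saleh nonlinearity of Eqs. (15)-(16); the function names are illustrative, not from the paper.

```python
import numpy as np

# Saleh model parameters quoted in the text.
ALPHA_A, BETA_A = 2.1587, 1.1517
ALPHA_P, BETA_P = 4.0033, 9.1040

def saleh(x):
    """Apply the Saleh AM/AM and AM/PM model to a complex baseband signal.

    Implements Eqs. (15)-(16): the output envelope is A(r) and the phase
    shift is Phi(r). Using the gain A(r)/r avoids dividing by r at r = 0.
    """
    r = np.abs(x)
    gain = ALPHA_A / (1.0 + BETA_A * r**2)           # A(r)/r
    phase = ALPHA_P * r**2 / (1.0 + BETA_P * r**2)   # Phi(r), in radians
    return x * gain * np.exp(1j * phase)

def wiener_pa(x):
    """Wiener reference PA: FIR H(z) = 1 + 0.5 z^-2 (Eq. (17)), then Saleh."""
    b = np.array([1.0, 0.0, 0.5])        # impulse response of H(z)
    xf = np.convolve(x, b)[: len(x)]     # linear memory effects
    return saleh(xf)
```

For small input envelopes the model behaves as a linear amplifier of gain α_A with negligible phase rotation, while larger envelopes exhibit the compression and phase distortion that the RVTDNN must learn.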

Fig. 2. Neural network architecture based on the Xilinx System Generator blockset for RF power amplifier modeling.

III. FPGA IMPLEMENTATION

The RVTDNN model is implemented on an FPGA using Xilinx System Generator for DSP [9] and the Virtex-6 FPGA ML605 Evaluation Kit [10]. Fig. 2 presents the architecture of the two-layer RVTDNN (Fig. 1) with six neurons in the hidden layer (N = 6). All blocks are detailed in this figure. The hyperbolic tangent function is implemented using a look-up table (LUT) scheme that takes advantage of the symmetry of this function [11]. A delay is added to compensate for the one introduced by the ROM block. Table I details the resource requirements and the maximum operating frequency as reported by the Xilinx ISE tool. These values are given for fixed-point data represented as 2's-complement signed 36-bit numbers with 32 fractional bits. It can be seen that increasing the number of hidden-layer neurons is mainly limited by the number of available RAMB36E1 and DSP48E1 blocks. After successful simulation, the hardware co-simulation compilation automatically creates a bitstream file and associates it with a JTAG co-simulation block.

IV. EXPERIMENT AND RESULTS

The proposed architecture was evaluated with a 16-QAM modulated test signal generated using the MATLAB Communications Toolbox. The hardware-in-the-loop co-simulation

TABLE I
RESOURCE UTILIZATION AND MAXIMUM OPERATING FREQUENCY OF THE VIRTEX-6 XC6VLX240T CHIP.

Flip-Flops: 2,240 out of 301,440
LUTs: 7,122 out of 150,720
Bonded IOBs: 217 out of 600
RAMB36E1s: 384 out of 416
DSP48E1s: 640 out of 768
Maximum operating frequency: 13.585 MHz

allows a design running on an FPGA to be incorporated directly into a Simulink simulation. When the design is simulated, the compiled portion (the JTAG co-simulation block) actually runs on the hardware. Fig. 3 shows the Cartesian components of the outputs of the reference and RVTDNN models. Fig. 4 presents the AM/AM and AM/PM characteristics of these models. It is clear that both the nonlinearity and the memory effects are well modeled simultaneously by the RVTDNN model. Finally, Fig. 5 presents the power spectral density (PSD) of the input and output signals of the reference and RVTDNN models.
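The symmetric tanh LUT scheme described in Section III can be sketched as follows. The idea is that tanh is an odd function, so only the non-negative half needs to be stored in ROM and negative inputs reuse the table with a sign flip. The LUT depth and input range here are illustrative assumptions (the paper does not state its ROM parameters); the quantization step mimics the paper's signed 36-bit, 32-fractional-bit fixed-point format.

```python
import numpy as np

# Fixed-point format reported in the paper: 36-bit signed, 32 fractional bits.
FRAC_BITS = 32
SCALE = 1 << FRAC_BITS

def quantize(x):
    """Round to the 32-fractional-bit fixed-point grid."""
    return np.round(x * SCALE) / SCALE

# ROM over [0, X_MAX) only; odd symmetry halves the required table size.
X_MAX, DEPTH = 4.0, 1024                  # illustrative LUT parameters
_grid = np.arange(DEPTH) * (X_MAX / DEPTH)
_table = quantize(np.tanh(_grid))         # quantized ROM contents

def tanh_lut(x):
    """Approximate tanh(x) via table look-up, exploiting tanh(-x) = -tanh(x)."""
    a = np.abs(x)
    idx = np.minimum((a * (DEPTH / X_MAX)).astype(int), DEPTH - 1)
    return np.sign(x) * _table[idx]
```

With 1024 entries over [0, 4) the worst-case approximation error is bounded by the grid step (about 0.004), which is sufficient for the hidden-layer activations while keeping the ROM small.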

Fig. 3. Output signal in the time domain for the reference and RVTDNN PA models.

V. CONCLUSION

A dynamic two-layer neural network has been successfully implemented on a Virtex-6 XC6VLX240T chip for adaptive PA behavioral modeling. The results obtained with the 16-QAM modulated signal show the high performance of this low-cost architecture.

REFERENCES

[1] T. Liu, S. Boumaiza, and F. M. Ghannouchi, "Dynamic behavioral modeling of 3G power amplifiers using real-valued time-delay neural networks," IEEE Transactions on Microwave Theory and Techniques, vol. 52, no. 3, pp. 1025–1033, March 2004.
[2] D. Luongvinh and Y. Kwon, "Behavioral modeling of power amplifiers using fully recurrent neural networks," in Microwave Symposium Digest, 2005 IEEE MTT-S International, June 2005, pp. 1979–1982.
[3] Y. Cao, X. Chen, and G. Wang, "Dynamic behavioral modeling of nonlinear microwave devices using real-time recurrent neural network," IEEE Transactions on Electron Devices, vol. 56, no. 5, pp. 1020–1026, May 2009.
[4] M. Rawat, K. Rawat, and F. M. Ghannouchi, "Adaptive digital predistortion of wireless power amplifiers/transmitters using dynamic real-valued focused time-delay line neural networks," IEEE Transactions on Microwave Theory and Techniques, vol. 58, no. 1, pp. 95–104, Jan. 2010.
[5] Q. Chen, Y. W. Chan, and K. Worden, "Structural fault diagnosis and isolation using neural networks based on response-only data," Computers & Structures, vol. 81, no. 22-23, p. 2165, 2003.
[6] D. Cherubini, A. Fanni, A. Montisci, and P. Testoni, "A fast algorithm for inversion of MLP networks in design problems," COMPEL - The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 24, no. 3, pp. 906–920, 2005.
[7] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 1999.
[8] A. A. M. Saleh, "Frequency-independent and frequency-dependent nonlinear models of TWT amplifiers," IEEE Transactions on Communications, vol. 29, no. 11, pp. 1715–1720, Nov. 1981.
[9] "Xilinx System Generator for DSP," www.xilinx.com/tools/sysgen.htm, Xilinx Inc.
[10] "Virtex-6 FPGA ML605 Evaluation Kit," www.xilinx.com/products/devkits/EK-V6-ML605-G.htm, Xilinx Inc.
[11] J. L. Bastos, H. P. Figueroa, and A. Monti, "FPGA implementation of neural network-based controllers for power electronics applications," in Applied Power Electronics Conference and Exposition, 2006. APEC '06. Twenty-First Annual IEEE, 2006, pp. 1–6.

Fig. 4. AM/AM and AM/PM characteristics for the reference and RVTDNN PA models.

Fig. 5. Power spectral density of input and output signals for the reference and RVTDNN PA models.