TRAINING A NEURAL NETWORK WITH A FINANCIAL CRITERION RATHER THAN A PREDICTION CRITERION

YOSHUA BENGIO
Dept. IRO, Université de Montréal, Montreal, Qc, Canada, H3C 3J7
and CIRANO, Montreal, Qc, Canada

A common approach to quantitative decision taking with financial time-series is to train a model using a prediction criterion (e.g., squared error). We find on a portfolio selection problem that better results can be obtained when the model is directly trained in order to optimize the financial criterion of interest, with a differentiable decision module.

1 Introduction

Most applications of learning algorithms to financial time-series are based on predicting the value (either discrete or continuous) of output (dependent) variables given input (independent) variables. For example, the parameters of a multi-layer neural network are tuned in order to minimize a squared error loss. However, in many of these applications, the ultimate goal is not to make good predictions, but rather to use these often noisy predictions in order to take some decisions. In fact, the performance of these systems is sometimes measured in terms of financial profitability and risk criteria, after some heuristic decision-taking rule has been applied to the trained model's outputs. Because of the limited amount of training data, and because financial time-series are often very noisy, we argue here that better results can be obtained by choosing the model parameters in order to directly maximize the financial criterion of interest.

What we mean by training criterion in this paper is a scalar function of the training data and the model parameters. This scalar function is minimized (or maximized) with an optimization algorithm (such as gradient descent) by varying the parameters. Previous theoretical and experimental work has already shown the advantages of optimizing a criterion that is closer to the real objective of learning (e.g., for classification tasks (Hampshire & Waibel, 1990; Driancourt et al., 1991), and for multi-module systems (Bottou & Gallinari, 1991; Bengio, 1996a)). In particular, it has been proven (Hampshire & Kumar, 1992) for classification tasks that a strategy that separates the learning of a model of the data from the decision-taking rule is less optimal than one based on training the model with respect to the decision surfaces, which may be determined by a discriminant function associated to each class (e.g., one output of a neural network for each class). In (Bengio, 1997), we show why two modules in sequence (such as M1, for prediction, and M2, for decisions, in Figure 1) can be more optimally trained with respect to a financial criterion C that depends directly on the output of M2 by training them jointly with respect to C, rather than by independently training M1 to minimize a prediction error over some horizon to be decided.

In the area of financial decisions, it has already been shown (Leitch & Tanner, 1991) in the case of interest rate forecasting that profit and forecast accuracy are not significantly correlated. In this paper, we push this idea further by estimating the parameters of a neural network predictor in such a way as to maximize the utility of the decisions taken on the basis of those predictions. With this procedure, we find that better financial utility is obtained than when the parameters are estimated in order to minimize forecasting errors.

In section 2, we present a particular cost function for optimizing the profits of a portfolio, while taking into account losses due to transaction costs. It should be noted that including transaction costs in the cost function makes it non-linear (and not quadratic) with respect to the trading decisions. When the decisions are taken in a way that depends on the current asset allocation (to minimize transactions), all the decisions during a trading sequence become dependent on each other. In section 3, we present a particular decision-taking, i.e., trading, strategy, and a differentiable version of it, which can be used in the direct optimization of the model parameters with respect to the financial criterion. In section 4, we describe a series of experiments which compare the direct optimization approach with the prediction approach.

2 A Training Criterion for Portfolio Management

In this paper, we consider the practical example of choosing a discrete sequence of portfolio weights in order to maximize profits, while taking into account losses due to transactions. We simplify the representation of time by assuming a discrete series of events, at time indices t = 1, 2, ..., T. We assume that some decision strategy yields, at each time step t, the portfolio weights w_t = (w_{t,0}, w_{t,1}, ..., w_{t,n}), for n + 1 weights. In the experiments, we will apply this model to managing n stocks as well as a cash asset (which may earn short-term interest). We will assume that each transaction (buy or sell) of an amount v of asset i costs c_i |v|. This may be used to take into account the effect of differences in liquidity of the different assets. In the experiments, in the case of cash the transaction cost is zero, whereas in the case of stocks it is 1%, i.e., the overall cost of buying and later selling back a stock is about 2% of its value. A more realistic cost function should take into account the non-linear effects of the amount that is sold or bought: transaction fees may be proportionally higher for small transactions, transactions may only be allowed with a certain granularity, and slippage losses due to low relative liquidity may be considerably worse for large transactions.

Figure 1: Task decomposition: a prediction module (M1) with input x and output y, and a decision module (M2) with output w. In the case of separate optimization, an intermediate criterion (e.g., mean squared error) is used to train M1 (with desired outputs d). In the case of joint optimization of the decision module and the prediction module, gradients with respect to the financial criterion C are back-propagated through both modules (dotted lines).

The training criterion is a function of the whole sequence of portfolio weights. At each time step t, we decompose the change in value of the assets into two categories: the return due to the changes in prices (and revenues from dividends), R_t, and the losses due to transactions, L_t. The overall return ratio is the product of R_t and L_t over all the time steps t = 1, 2, ..., T:

    \mathrm{overall\ return\ ratio} = \prod_t R_t L_t \qquad (1)

This is the ratio of the final wealth to the initial wealth. Instead of maximizing this quantity, in this paper we maximize its logarithm (noting that the logarithm is a monotonic function):

    C \stackrel{\mathrm{def}}{=} \sum_t (\log R_t + \log L_t) \qquad (2)

The yearly percent return is then given by (e^{CP/T} - 1) \times 100\%, where P is the number of time steps per year (12, in the experiments), and T is the number of time steps (number of months, in the experiments) over which the sum is taken. The return R_t due to price changes and dividends from time t to time t + 1 is defined in terms of the portfolio weights w_{t,i} and the multiplicative returns r_{t,i} of each stock i:

    r_{t,i} \stackrel{\mathrm{def}}{=} \mathrm{value}_{t+1,i} / \mathrm{value}_{t,i} \qquad (3)
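As a quick check of the annualization formula, here is a minimal sketch in Python (the numbers are illustrative, not taken from the paper's experiments):

    import math

    # Illustrative numbers only: a summed log-return criterion C over
    # T = 72 monthly time steps, with P = 12 time steps per year.
    C, T, P = 0.60, 72, 12
    yearly_pct = (math.exp(C * P / T) - 1) * 100   # (e^{CP/T} - 1) x 100%
    print(f"yearly return: {yearly_pct:.1f}%")     # about 10.5% per year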

where value_{t,i} represents the value of asset i at time t, assuming no transaction takes place: r_{t,i} represents the relative change in value of asset i in the period t to t + 1. Let a_{t,i} be the actual worth of the i-th asset at time t in the portfolio, and let a_t be the combined value of all the assets at time t. Since the portfolio is weighted with weights w_{t,i}, we have

    a_{t,i} \stackrel{\mathrm{def}}{=} a_t w_{t,i} \qquad (4)

and

    a_t = \sum_i a_{t,i}. \qquad (5)

Because of the change in value of each one of the assets, their value becomes

    a'_{t,i} \stackrel{\mathrm{def}}{=} r_{t,i} a_{t,i}. \qquad (6)

Therefore the total worth becomes

    a'_t = \sum_i a'_{t,i} = \sum_i r_{t,i} a_{t,i} = a_t \sum_i r_{t,i} w_{t,i}, \qquad (7)

so the combined worth has increased by the ratio

    R_t \stackrel{\mathrm{def}}{=} a'_t / a_t, \qquad (8)

i.e.,

    R_t = \sum_i w_{t,i} r_{t,i}. \qquad (9)

After this change in asset value, the portfolio weights have changed as follows (since the different assets have different returns):

    w'_{t,i} \stackrel{\mathrm{def}}{=} a'_{t,i} / a'_t = w_{t,i} r_{t,i} / R_t. \qquad (10)
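As a small illustration (the numbers are ours, not from the paper): with two assets held at weights w_t = (0.5, 0.5) and returns r_t = (1.10, 1.00), equation (9) gives R_t = 1.05 and equation (10) gives w'_t \approx (0.524, 0.476); the portfolio drifts towards the asset that appreciated, even though no transaction took place.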

At time t + 1, we want to change the proportions of the assets to the new portfolio weights w_{t+1}, i.e., the worth of asset i will go from a'_t w'_{t,i} to a'_t w_{t+1,i}. We then have to incur for each asset a transaction loss, which is assumed simply proportional to the amount of the transaction, with a proportional cost c_i for asset i. These losses include both transaction fees and slippage. This criterion could easily be generalized to take into account the fact that the slippage costs may vary with time (depending on the volume of offer and demand) and may also depend non-linearly on the actual amount of the transactions. After transaction losses, the worth at time t + 1 becomes

    a_{t+1} = a'_t - \sum_i c_i | a'_t w_{t+1,i} - a'_t w'_{t,i} | = a'_t \left( 1 - \sum_i c_i | w_{t+1,i} - w'_{t,i} | \right). \qquad (11)

The loss L_t due to transactions at time t is defined as the ratio

    L_t \stackrel{\mathrm{def}}{=} a_t / a'_{t-1}. \qquad (12)

Therefore

    L_t = 1 - \sum_i c_i | w_{t,i} - w'_{t-1,i} |. \qquad (13)
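For instance (illustrative numbers, not from the paper), with stock transaction costs c_i = 1% and a rebalancing that moves 20% of the portfolio's value (\sum_i |w_{t,i} - w'_{t-1,i}| = 0.2), the loss factor is L_t = 1 - 0.01 \times 0.2 = 0.998, i.e., 0.2% of the wealth is lost to transactions at that time step.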

To summarize, the overall profit criterion can be written as follows:

    C = \sum_t \left[ \log\left( \sum_i r_{t,i} w_{t,i} \right) + \log\left( 1 - \sum_i c_i \left| w_{t,i} - \frac{w_{t-1,i} r_{t-1,i}}{\sum_j w_{t-1,j} r_{t-1,j}} \right| \right) \right] \qquad (14)

At each time step, a trading module computes w_t from w'_{t-1} and from the predictor output y_t. To backpropagate gradients with respect to the cost function C through the trader from the above equation, one computes \partial C / \partial w'_{t-1,i} when given \partial C / \partial w_{t,i}. The trading module can then compute \partial C / \partial w_{t-1,i} from \partial C / \partial w'_{t-1,i}, and this process is iterated backwards in time. At each time step, the trading module also computes \partial C / \partial y_{t,i} from \partial C / \partial w_{t,i}.

To conclude this section, it should be noted that the introduction of transaction losses in the training criterion makes it non-linear in the decisions (whereas the profit term is linear in the decisions). Note that it is not even differentiable everywhere (but it is differentiable almost everywhere, which is enough for gradient descent optimization). Furthermore, when the decision at time t is taken in function of the previous decision (to avoid unnecessary transactions), all the decisions are coupled together, i.e., the cost function cannot be separated as a sum of independent terms associated to the network output at each time step. For this reason, an algorithm such as back-propagation through time has to be used to compute the gradients of the cost function.

3 The Trading Modules

We could directly train a module producing in output the portfolio weights w_{t,i}, but in this paper we use some financial a-priori knowledge in order to modularize this task in two subtasks:

1. with a "prediction" module (e.g., M1 in figure 1), compute a "desirability" value y_{t,i} for each asset i on the basis of the current inputs,

2. with a trading module, allocate capital among the given set of assets (i.e., compute the weights w_{t,i}), on the basis of y_t and w'_{t-1} (this is done with the decision module M2 in figure 1).

In this section, we will describe two such trading modules, both based on the same a-priori knowledge. The first one is not differentiable and has hand-tuned parameters, whereas the second one is differentiable and has parameters learned by gradient ascent on the financial criterion C. The a-priori knowledge we have used in designing this trader can be summarized as follows:

• We mostly want to have in our portfolio those assets that are desirable according to the predictor (high y_{t,i}).
• More risky assets (e.g., stocks) should have a higher expected return than less risky assets (e.g., cash) to be worth keeping in the portfolio.
• The outputs of the predictor are very noisy and unreliable.
• We want our portfolio to be as diversified as possible, i.e., it is better to have two assets of similar expected returns than to invest all our capital in one that is slightly better.
• We want to minimize the amount of the transactions.

In order to produce the portfolio weights w_t, at each time step, the trading module takes as input the vectors y_t (predictor output) and w'_{t-1} (previous weights, after changes in value due to the multiplicative returns r_{t-1}). Here we are assuming that assets 0 ... n-1 are stocks, and asset n represents cash (earning short-term interest). The portfolio weights w_{t,i} are non-negative and sum to 1.

Our first experiments were done with a neural network trained to minimize the squared error between the predicted and actual asset returns. Based on advice from financial specialists, we designed the trading algorithm described in figure 2. Statement 1 in figure 2 is to minimize transactions. Statement 2 assigns a discrete quality (good, neutral, or bad) to each stock in function of how the predicted return compares to the average predicted return and to the return of cash. Statement 3 computes the current total weight of bad stocks that are currently owned, and should therefore be sold. Statement 4 uses that money to buy the good stocks (if any), distributing the available money uniformly among the stocks (or, if no stock is good, increases the proportion of cash in the portfolio). In the experiments, the parameters c_0, c_1, c_2, b_0, b_1, b_2, \delta were chosen using basic judgment and a few trial-and-error experiments on the first training period. However, it is difficult to numerically optimize these parameters because of the discrete nature of the decisions taken. Furthermore, the predictor module might not give out numbers that are optimal for the trader module.

This has motivated the differentiable trading module described in figure 3. Statement 1 of figure 3 defines two quantities ("goodness" g_{t,i} and "badness" b_{t,i}) that compare each asset with the other assets, indicating respectively a willingness to buy and a willingness to sell. Statement 2 computes the amount to sell based on the weighted sum of "badness" indices. Statement 3a then computes a quantity a_t that compares the sums of the goodness and badness indices. Statement 3b uses that quantity to compute the change in cash (using a different formula depending on whether a_t is positive or negative). Statement 3c uses that change in cash to compute the amount available for buying more stocks (or the amount of stocks that should be sold). Statement 4 computes the new proportions for each stock, by allocating the amount available to buy new stocks according to the relative goodness of each stock. There are 9 parameters, \theta_2 = \{c_0, c_1, b_0, b_1, a_0, a_1, s_0, s_1, \delta\}, five of which have a similar interpretation as in the hard trader. However, since we can compute the gradient of the training criterion with respect to these parameters, their value can be learned from the data. From the algorithmic definition of the function w_t(w'_{t-1}, y_t, \theta_2) one can easily write down the equations for \partial C / \partial y_{t,i} and \partial C / \partial \theta_2, when given the gradients \partial C / \partial w_{t,j}, using the chain rule.

1. By default, initialize w_{t,i} \leftarrow w'_{t-1,i} for all i = 0 \ldots n.

2. Assign a quality_{t,i} (equal to good, neutral, or bad) to each stock i (i = 0 \ldots n-1):

   (a) Compute the average desirability \bar{y}_t = \frac{1}{n} \sum_{i=0}^{n-1} y_{t,i}.
   (b) Let rank_{t,i} be the rank of y_{t,i} in the set \{y_{t,0}, \ldots, y_{t,n-1}\}.
   (c) If y_{t,i} > c_0 \bar{y}_t and y_{t,i} > c_1 y_{t,n} and rank_{t,i} > c_2,
       Then quality_{t,i} \leftarrow good.
       Else, if y_{t,i} < b_0 \bar{y}_t or y_{t,i} < b_1 y_{t,n} or rank_{t,i} < b_2,
       Then quality_{t,i} \leftarrow bad,
       Else quality_{t,i} \leftarrow neutral.

3. Compute the total weight k_t of bad stocks that should be sold:

   (a) Initialize k_t \leftarrow 0.
   (b) For i = 0 \ldots n-1:
       If quality_{t,i} = bad and w'_{t-1,i} > 0 (i.e., already owned),
       Then (SELL a fraction \delta of the amount owned)
       k_t \leftarrow k_t + \delta w'_{t-1,i},  w_{t,i} \leftarrow w'_{t-1,i} - \delta w'_{t-1,i}.

4. If k_t > 0, Then (either distribute that money among the good stocks, or keep it in cash):

   (a) Let s_t \leftarrow number of good stocks not owned.
   (b) If s_t > 0,
       Then (also use some cash to buy good stocks)
       k_t \leftarrow k_t + \delta w'_{t-1,n},  w_{t,n} \leftarrow w'_{t-1,n} (1 - \delta);
       For all good stocks not owned, BUY: w_{t,i} \leftarrow k_t / s_t.
       Else (i.e., no good stocks were not already owned),
       Let s'_t \leftarrow number of good stocks;
       If s'_t > 0,
       Then For all the good stocks, BUY: w_{t,i} \leftarrow w'_{t-1,i} + k_t / s'_t,
       Else (put the money in cash) w_{t,n} \leftarrow w'_{t-1,n} + k_t.

Figure 2: Algorithm for the "hard" trading module.
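For concreteness, the following Python sketch restates one step of this hard trading rule (our code, not the paper's; the rank convention, with 0 for the lowest y_{t,i}, and the handling of ties are assumptions):

    import numpy as np

    def hard_trader(y, w_prev, c0, c1, c2, b0, b1, b2, delta):
        """One step of the "hard" trader of Figure 2 (a sketch).

        y      : array (n+1,), desirabilities y_{t,i} (last entry: cash)
        w_prev : array (n+1,), weights w'_{t-1,i} after the price changes
        """
        n = len(y) - 1
        w = w_prev.copy()                     # statement 1: default, no trade
        y_bar = y[:n].mean()
        rank = np.argsort(np.argsort(y[:n]))  # assumed: rank 0 = lowest y
        good = (y[:n] > c0 * y_bar) & (y[:n] > c1 * y[n]) & (rank > c2)
        bad = ~good & ((y[:n] < b0 * y_bar) | (y[:n] < b1 * y[n]) | (rank < b2))
        owned_bad = bad & (w_prev[:n] > 0)    # statement 3: sell a fraction
        k = delta * w_prev[:n][owned_bad].sum()   # delta of owned bad stocks
        w[:n][owned_bad] = (1 - delta) * w_prev[:n][owned_bad]
        if k > 0:                             # statement 4
            not_owned = good & (w_prev[:n] <= 0)
            if not_owned.sum() > 0:           # also draw on cash to buy
                k += delta * w[n]
                w[n] *= 1 - delta
                w[:n][not_owned] = k / not_owned.sum()
            elif good.sum() > 0:              # all good stocks already owned
                w[:n][good] += k / good.sum()
            else:                             # no good stock: park in cash
                w[n] += k
        return w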

1. (Assign a goodness value g_{t,i} and a badness value b_{t,i} between 0 and 1 for each stock)

   • (Compute the average desirability) \bar{y}_t = \frac{1}{n} \sum_{i=0}^{n-1} y_{t,i}.
   • (goodness) g_{t,i} \leftarrow \mathrm{sigmoid}(s_0 (y_{t,i} - \max(c_0 \bar{y}_t, c_1 y_{t,n})))
   • (badness) b_{t,i} \leftarrow \mathrm{sigmoid}(s_1 (\min(b_0 \bar{y}_t, b_1 y_{t,n}) - y_{t,i}))

2. (Compute the amount to "sell", to be offset later by an amount to "buy")

   k_t \leftarrow \mathrm{sigmoid}(\delta) \sum_{i=0}^{n-1} b_{t,i} w'_{t-1,i}

3. (Compute the change in cash)

   (a) a_t \leftarrow \tanh(a_0 + a_1 \sum_{i=0}^{n-1} (b_{t,i} - g_{t,i}))
   (b) If a_t > 0 (more bad than good, increase cash),
       Then w_{t,n} \leftarrow w'_{t-1,n} + a_t k_t,
       Else (more good than bad, reduce cash) w_{t,n} \leftarrow w'_{t-1,n} + a_t w'_{t-1,n}.
   (c) So the amount available to buy is: k_t - (w_{t,n} - w'_{t-1,n}).

4. (Compute the amount to "buy", offset by the previous "sell", and compute the new weights on the stocks)

   (a) (a normalization factor) s_t \leftarrow \sum_{i=0}^{n-1} g_{t,i}
   (b) w_{t,i} \leftarrow w'_{t-1,i} (1 - \mathrm{sigmoid}(\delta) b_{t,i}) + \frac{g_{t,i}}{s_t} (k_t - (w_{t,n} - w'_{t-1,n}))

Figure 3: Algorithm for the "soft" (differentiable) trading module.
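Under the reconstruction above, one step of the soft trader can be sketched as follows (our Python code, not the paper's; the parameter roles follow the \theta_2 list in the text, and the weights remain non-negative and sum to 1 by construction):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def soft_trader(y, w_prev, theta):
        """One step of the "soft" (differentiable) trader of Figure 3.

        theta = (c0, c1, b0, b1, a0, a1, s0, s1, delta), as in the text.
        """
        c0, c1, b0, b1, a0, a1, s0, s1, delta = theta
        n = len(y) - 1
        y_bar = y[:n].mean()
        g = sigmoid(s0 * (y[:n] - max(c0 * y_bar, c1 * y[n])))  # goodness
        b = sigmoid(s1 * (min(b0 * y_bar, b1 * y[n]) - y[:n]))  # badness
        k = sigmoid(delta) * np.dot(b, w_prev[:n])    # amount to "sell"
        a = np.tanh(a0 + a1 * (b - g).sum())          # bad-vs-good signal
        w = np.empty_like(w_prev)
        if a > 0:                                     # more bad than good:
            w[n] = w_prev[n] + a * k                  # increase cash
        else:                                         # more good than bad:
            w[n] = w_prev[n] + a * w_prev[n]          # reduce cash
        buy = k - (w[n] - w_prev[n])                  # left over for stocks
        s = g.sum()                                   # normalization factor
        w[:n] = w_prev[:n] * (1 - sigmoid(delta) * b) + g * buy / s
        return w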

4 Experiments

We have performed experiments in order to study the difference between training only a prediction module with the Mean Squared Error (MSE) criterion and training both the prediction and decision modules to maximize the financial criterion defined in section 2 (equation 14).

4.1 Experimental Setup

The task is one of managing a portfolio of 35 Canadian stocks, as well as allocating funds between those stocks and a cash asset (n = 35 in the above sections; the number of assets is n + 1 = 36). The 35 companies are major companies of the Toronto Stock Exchange (most of them in the TSE35 Index). The data is monthly and spans 10 years, from December 1984 to February 1995 (123 months). We have selected 5 input features (x_t is 5-dimensional), 2 of which represent macro-economic variables which are known to influence the business cycle, and 3 of which are micro-economic variables representing the profitability of the company and previous price changes of the stock.

We used ordinary fully connected multi-layered neural networks with a single hidden layer, trained by gradient descent. The same network was used for all 35 stocks, with a single output y_{t,i} at each month t for stock i. Preliminary experiments with the network architecture suggested that using approximately 3 hidden units yielded better results than using no hidden layer or many more hidden units. Better results might be obtained by considering different sectors of the market (different types of companies) separately, but for the experiments reported here, we used a single neural network for all the stocks. When using a different model for each stock and sharing some of the parameters, significantly better results were obtained (using the same training strategy) on that data (Ghosn & Bengio, 1997). Here, the parameters of the network are therefore shared across time and across the 35 stocks. The 36th output (for the desirability of cash) was obtained from the current short-term interest rates (which are also used for the multiplicative return of cash, r_{t,n}).

To take into account the non-stationarity of the financial and economic time-series, and to estimate performance over a variety of economic situations, multiple training experiments were performed on different training windows, each time testing on the following 18 months. For each of the 10 training trials (with different initial weights), 4 training experiments were performed on different training windows (of respectively 33, 51, 69 and 87 months), each time using the following 18 months for early-stopping validation and the remaining 18 months for testing and estimating generalization. The overall return was computed for the whole test period (72 months, 03/89-02/95). Training lasted between 10 and 200 iterations of the training set, with early stopping based on the performance on the validation set.

A buy-and-hold benchmark was used to compare the results with a conservative policy. For this benchmark, the initial portfolio is distributed equally among all the stocks (and no cash); thereafter there are no transactions. The returns for the benchmark are computed in the same way as for the neural network (except that there are no transactions). The excess return is defined as the difference between the overall return obtained by a network and that of the buy-and-hold benchmark.
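The sliding-window protocol just described can be made explicit with a short sketch (our code; month 0 is assumed to be December 1984, so month 51 is March 1989, and the windows are assumed to share a common start):

    train_lengths = [33, 51, 69, 87]
    for m in train_lengths:
        train = (0, m)            # training window, in months
        valid = (m, m + 18)       # next 18 months: early-stopping validation
        test = (m + 18, m + 36)   # following 18 months: test
        print(train, valid, test)
    # The four test windows (51-69, 69-87, 87-105, 105-123) together cover
    # the 72 months 03/89-02/95 used to compute the overall test return.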

4.2 Results

In the first series of experiments, the neural network was trained with a mean squared error criterion in order to predict the return of each stock over a horizon of three months. We used the "hard decision trader" described in figure 2 in order to measure the financial profitability of the system. We quickly realized that although the mean squared error was gradually improving during training, the profits made sometimes increased, sometimes decreased.

Table 1: Comparative results: network trained with mean squared error to predict future returns vs. network trained with the financial criterion (to directly maximize return). Averages and standard deviations are over 10 trials. The test set represents 6 years, 03/89-02/95.

                                            Avg. Excess Return   (Standard    Avg. Excess Return   (Standard
                                            on Training Sets     Deviation)   on Test Sets         Deviation)
    Net Trained with MSE Criterion                 8.9%            (2.4%)           2.9%             (1.2%)
    Net Trained with Financial Criterion          19.9%            (2.6%)           7.4%             (1.6%)

This actually suggested that we were not optimizing the "right" criterion. This problem can be visualized by looking at scatter plots of the profit versus the prediction mean squared error (Bengio, 1997; Bengio, 1996b). Although there is a tendency for returns to be larger for smaller MSE, many different values of return can be obtained for the same MSE. This constitutes an additional (and undesirable) source of variance in the generalization performance. Instead, when training the neural network with the financial criterion, the corresponding scatter plots of excess return against training criterion put all the points on a single exponential curve, since the excess return is maximized by the training procedure.

For the second series of experiments, we created the "soft" version of the trader described in figure 3, and trained the parameters of the trader as well as the neural network in order to maximize the financial criterion defined in section 2 (which is equivalent to maximizing the overall excess return). A series of 10 training experiments (with different initial parameters) were performed (each with four training, validation and test periods) to compare the two approaches. Table 1 summarizes the results. During the whole 6-year test period (March 89 - February 95), the benchmark yielded returns of 6.8%, whereas the network trained with the prediction criterion and the one trained with the financial criterion yielded on average returns of 9.7% and 14.2% respectively (i.e., 2.9% and 7.4% in excess of the benchmark, respectively). The direct optimization approach, which uses a specialized financial criterion, clearly yields better performance on this task, both on the training and test data.

The experiments were then replicated using artificially generated returns, and similar results were observed. The artificially generated returns were

obtained from an artificial neural network with 10 hidden units (i.e., more than the 3 units used in the prediction module), and with additive noise on the return. Again we observed that decreases in the mean squared error of the predictor were not significantly correlated with increases in excess return. When training with respect to the financial criterion instead, the average excess return on the test period increased from 4.6% to 6.9%. As in the experiments with real data, the financial performance on the training data was even more significantly superior when using the financial criterion (corroborating the hypothesis that, as far as the financial criterion is concerned, the direct optimization approach offers more capacity than the indirect optimization approach).

5 Conclusion

We consider decision-taking problems on financial time-series with learning algorithms. Theoretical arguments suggest that directly optimizing the financial criterion of interest should yield better performance, according to that same criterion, than optimizing an intermediate prediction criterion such as the often-used mean squared error. However, this requires defining a differentiable decision module, and we have introduced a "soft" trading module for this purpose. Another advantage of such a decision module is that its parameters may be optimized numerically from the training data. The inadequacy of the mean squared error criterion was suggested to us by the poor correlation between its value and the value of the financial criterion, both on training and test data, confirming previous observations (Leitch & Tanner, 1991). We have shown with a portfolio management experiment on 35 Canadian stocks with 10 years of data that the more direct approach of choosing the parameters of the model in order to optimize the financial criterion of interest yields better profits than the indirect prediction approach.

In general, for other applications, one should carefully look at the ultimate goals of the system. Sometimes, as in our example, one can design a differentiable cost and decision policy, and obtain better results by optimizing the parameters with respect to an objective that is closer to the ultimate goal of the trained system.

Acknowledgements

The author would like to thank S. Gauthier and F. Gingras for their prior work on data preprocessing, E. Couture and J. Ghosn for their useful comments, as well as the NSERC, FCAR and IRIS Canadian funding agencies for their support. We would also like to thank André Chabot from Boulton-Tremblay Inc. for the economic data series used in these experiments.

References

Bengio, Y. 1996a. Neural Networks for Speech and Sequence Recognition. International Thomson Computer Press, London, UK.

Bengio, Y. 1996b. Using a financial training criterion rather than a prediction criterion. Technical Report #1019, Dept. Informatique et Recherche Opérationnelle, Université de Montréal.

Bengio, Y. 1997. Using a financial training criterion rather than a prediction criterion. To appear in the International Journal of Neural Systems.

Bottou, L. and P. Gallinari. 1991. A framework for the cooperation of learning algorithms. In Advances in Neural Information Processing Systems 3, R. P. Lippmann, J. Moody and D. S. Touretzky (eds), pp. 781-788, Denver, CO.

Driancourt, X., L. Bottou and P. Gallinari. 1991. Learning vector quantization, multi-layer perceptron and dynamic programming: Comparison and cooperation. In International Joint Conference on Neural Networks, vol. 2, pp. 815-819.

Ghosn, J. and Y. Bengio. 1997. Multi-task learning for stock selection. In Advances in Neural Information Processing Systems 9, Cambridge, MA. MIT Press.

Hampshire, J. B. and B. V. K. V. Kumar. 1992. Shooting craps in search of an optimal strategy for training connectionist pattern classifiers. In Advances in Neural Information Processing Systems, J. Moody, S. Hanson and R. Lippmann (eds), vol. 4, pp. 1125-1132, Denver, CO. Morgan Kaufmann.

Hampshire, J. B. and A. H. Waibel. 1990. A novel objective function for improved phoneme recognition using time-delay neural networks. IEEE Transactions on Neural Networks 1(2), 216-228.

Leitch, G. and J. Tanner. 1991. Economic forecast evaluation: Profits versus the conventional error measures. The American Economic Review, 580-590.
