Fuzzy Neural Network Models for Supervised Classification: Multispectral Image Analysis

Arun D. Kulkarni
Computer Science Department, The University of Texas at Tyler, Tyler, TX 75799, USA

Kamlesh Lulla
Office of Earth Sciences, NASA / Johnson Space Center, Houston, TX 77058, USA

Abstract

It has been well established that neural networks provide a reasonable and powerful alternative to conventional classifiers. During the past few years there has been a large and energetic upswing in research efforts aimed at synthesizing fuzzy logic with neural networks. This combination of fuzzy logic and neural networks seems natural because the two approaches attack the design of “intelligent” systems from quite different angles. Neural networks provide algorithms for learning, classification, and optimization, whereas fuzzy logic deals with issues such as reasoning at a higher (semantic or linguistic) level. Consequently, the two technologies complement each other. In this paper we propose two novel fuzzy-neural network models for supervised learning. The first model consists of three layers, and the second model consists of four layers. In both models, the first two layers implement fuzzy membership functions and the remaining layers implement the inference engine. Both models use the gradient descent technique for learning. As an illustration, we have analyzed two Thematic Mapper images using these models. Results are presented in the paper.

Introduction

Neural networks provide algorithms for optimization, classification, and clustering, whereas fuzzy logic is a tool for representing and utilizing data and information that possess non-statistical uncertainty. Neural networks are suitable when the system learns from sample data points, whereas fuzzy logic techniques are more appropriate when classification rules are obtained from experts' knowledge. Neural networks with learning algorithms such as backpropagation can learn from examples and are suitable for supervised classification. Self-organizing networks with learning algorithms such as competitive learning can implement clustering and are suitable for unsupervised classification. Fuzzy sets, which allow partial membership, were introduced by Zadeh (1965). Fuzzy logic techniques in the form of approximate reasoning provide decision support and expert systems with powerful reasoning capabilities. The permissiveness of the human thought process suggests that much of the logic behind human reasoning is not traditional two-valued or even multivalued logic, but logic with fuzzy truths, fuzzy connectives, and fuzzy rules of inference (Zadeh, 1973). The exploitation of tolerance for imprecision and uncertainty underlies the remarkable human ability to understand distorted speech, decipher sloppy handwriting, comprehend nuances of natural language, summarize text, recognize and classify images, and, more generally, make rational decisions in an environment of uncertainty and imprecision (Zadeh, 1994). Fuzzy logic provides a tool to explore imprecision in decision making. During the past decade fuzzy logic has found a variety of applications in pattern recognition. With a fuzzy logic decision system, decision rules can be specified in terms of linguistic variables. Recently, many hybrid systems that combine neural networks and fuzzy logic techniques have been reported in the literature (Lee, 1990; Lin and Lee, 1991; Berenji and Khedkar, 1992).

There are many ways to synthesize fuzzy logic and neural networks. The first approach is to use input-output signals and/or weights in a neural network as fuzzy sets, along with fuzzy neurons (Buckley and Hayashi, 1994); several authors have proposed models for fuzzy neurons (Gupta, 1994). The second approach is to use fuzzy membership functions to pre-process or post-process data with neural network models (Lin and Lee, 1994; Horikawa et al., 1992; Takagi and Hayashi, 1991; Pal and Mitra, 1992). The third approach is to build a classifier with multiple stages and implement some stages with neural networks and some with knowledge-based fuzzy decision systems. Yet another way to combine neural networks with fuzzy logic is to use fuzzy associative memories (FAMs). In a simple implementation, a FAM is a fuzzy logic rule with an associated weight. A mathematical framework exists that can map a FAM to a neural network, with which a fuzzy logic decision system can be built using a backpropagation learning algorithm (Kosko, 1992).

Many fuzzy-neural network models that use fuzzy membership functions for preprocessing have been developed. Lin and Lee (1991) have suggested a general neural network model for a fuzzy logic control and decision system. Their model consists of five layers. Nodes at layer one are input nodes (linguistic nodes) which represent input linguistic variables. Nodes at layers two and four are term nodes which act as membership functions to represent terms of the respective linguistic variables. Nodes in layer three represent fuzzy rules. They have used a two-phase learning scheme: in phase one, a self-organized learning scheme locates initial membership functions and finds the presence of rules; in phase two, a supervised learning scheme optimally adjusts the membership functions for the desired output. Horikawa et al. (1992) have proposed three types of fuzzy neural network models. They have used sigmoid fuzzy membership functions. In their models the first five layers correspond to the premise part of a fuzzy rule, and the weights connecting these units represent the central position and gradient of the sigmoid function. The models are categorized into three types depending on the consequent part, which can be a constant, a first-order equation, or a fuzzy variable; the premise part (the first five layers) is the same for all three types. Takagi and Hayashi (1991) have proposed a model for neural-network-driven fuzzy reasoning. The first step in their model is the determination of fuzzy inference rules, for which they use a clustering technique to group data; the second step deals with the attribution of an arbitrary input to each rule. Pal and Mitra (1992) have developed a fuzzy neural network model that uses a backpropagation learning algorithm, and have used the model to classify Indian Telugu vowel sounds. They have also suggested a method for extracting rules with a fuzzy neural network model (Mitra and Pal, 1992). Jang (1993, 1995) presents an adaptive network model for a fuzzy inference system, the adaptive network-based fuzzy inference system (ANFIS). The ANFIS model is generic: neural networks and fuzzy inference systems can be considered special instances of an adaptive network when proper node functions are assigned.

In this paper, we propose two novel fuzzy-neural network models for supervised classification. The first model consists of three layers and the second model consists of four layers. The first two layers in both models implement fuzzy membership functions, whereas the remaining layers implement the inference engine. We have analyzed multispectral images using both models. In the subsequent sections, we discuss fuzzy inference systems, neural networks, and fuzzy-neural network models, and use these models for multispectral image analysis.

Fuzzy Inference Systems

Fuzzy inference systems are used in many decision-making and classification applications (Cox, 1994; Kulkarni et al., 1994). Fuzzy set theory is an extension of crisp set theory that handles the concept of partial truth or partial membership. Zadeh (1975) described the presence or absence of an element x in a crisp set A by its characteristic function µA(x), where µA(x) = 1 if x ∈ A and µA(x) = 0 if x ∉ A. In fuzzy logic, the basic idea is that membership is indicated by a value in the range [0, 1], with 0 representing total falseness and 1 representing total truth (Zadeh, 1975). A fuzzy set A in a universe of discourse U is characterized by a membership function µA: U → [0, 1]. Thus a fuzzy set A in U may be represented by a set of pairs, each consisting of a generic element x and its grade of membership:

A = { (x, µA(x)) | x ∈ U }

where x is called a support value if µA(x) > 0.

A general model of a fuzzy decision system is given in Figure 1. The input to the fuzzy decision system is a vector of crisp values. The fuzzifier in Figure 1 performs the mapping from an observed feature space to fuzzy membership values: a crisp value xi is mapped to the fuzzy set Txi1 with degree µxi1, to the fuzzy set Txi2 with degree µxi2, and so on. The rule base contains a set of fuzzy rules

R = {R1, R2, R3, ..., Rn}    (1)

where the ith fuzzy rule is

Ri = if (x1 is Tx1 and ... and xp is Txp), then (y1 is Ty1 and ... and yq is Tyq)    (2)

The if part of the rule forms a fuzzy set Tx1 × ... × Txp of preconditions. The then part of Ri is the union of q independent outputs. The inference engine matches the preconditions of the rules and performs implication. Consider two rules R1 and R2 with firing strengths α1 and α2:

R1 = if x1 is Tx11 and x2 is Tx21 then y is Ty1
R2 = if x1 is Tx12 and x2 is Tx22 then y is Ty2    (3)

The firing strengths are given as

αi = µx1i ∧ µx2i    (4)

where ∧ is the fuzzy AND operator, defined as

αi = min(µx1i, µx2i)    (5)

The two rules R1 and R2 lead to corresponding decisions with membership functions µyi′, i = 1, 2, defined as

µyi′ = αi ∧ µyi    (6)

The output decision can be obtained by combining the two decisions:

µy = µy1′ ∨ µy2′    (7)

where ∨ is the fuzzy OR operator, defined as

µy = max(µy1′, µy2′)    (8)

Equations (5) and (8) define the fuzzy operators AND and OR, respectively. The input to the defuzzification process is an aggregated fuzzy set, and the output is a single number; defuzzification maps a fuzzy set back to a crisp value. The most popular method used for defuzzification is the centroid method.

Figure 1: Fuzzy inference decision system (input → fuzzifier → inference engine → defuzzifier → output, with a rule base and a learning block).
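To make Equations (1) through (8) concrete, the following is a minimal Python/NumPy sketch (ours, not from the paper; the two rules, the Gaussian term sets, and all parameter values are hypothetical) that fires two rules using min as AND, max as OR, and a centroid defuzzifier:

```python
import numpy as np

def gaussian(x, m, s):
    """Gaussian membership grade of x (same form as Equation 19 below)."""
    return np.exp(-(x - m) ** 2 / (2.0 * s ** 2))

# Universe of discourse for the output variable y.
y = np.linspace(0.0, 100.0, 201)

# Hypothetical rules over two features:
#   R1: if x1 is LOW  and x2 is LOW  then y is LOW
#   R2: if x1 is HIGH and x2 is HIGH then y is HIGH
x1, x2 = 35.0, 60.0                                       # crisp inputs
alpha1 = min(gaussian(x1, 20, 15), gaussian(x2, 30, 15))  # Eqs. (4)-(5)
alpha2 = min(gaussian(x1, 80, 15), gaussian(x2, 70, 15))

# Clip each rule's output membership by its firing strength, Eq. (6).
mu_y1 = np.minimum(alpha1, gaussian(y, 25, 20))
mu_y2 = np.minimum(alpha2, gaussian(y, 75, 20))

# Aggregate with fuzzy OR, Eqs. (7)-(8), then defuzzify by centroid.
mu_y = np.maximum(mu_y1, mu_y2)
y_crisp = np.sum(y * mu_y) / np.sum(mu_y)
print(f"firing strengths: {alpha1:.3f}, {alpha2:.3f}; crisp output: {y_crisp:.2f}")
```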

Neural Network Models

Neural network models are often described in terms of the topology of the network, the activation function of the processing units, and the learning algorithm. Many research papers and textbooks deal with neural networks and their applications to pattern recognition (Kulkarni, 1994; Pao, 1989). Bischof et al. (1992) have used a multilayer perceptron for multispectral image analysis. It is well known that a simple three-layer feed-forward network with a backpropagation learning algorithm can be used as a supervised classifier. A simple three-layer feed-forward network is shown in Figure 2, and the learning algorithm is described below. Layer L1 serves as the input layer, layer L2 is the hidden layer, and layer L3 represents the output layer. The number of units in the output layer is equal to the number of output decisions. The net-input and output for units in layers L2 and L3 are given by

neti = Σj outj wij    (9)

outi = 1 / {1 + exp[-(neti + Ø)]}    (10)

where neti is the net-input, outj is the output of unit j in the preceding layer, wij represents the weight between units i and j, and Ø is a constant. The network works in two phases: the training phase and the decision-making phase. During the training phase, the weights between layers L1-L2 and L2-L3 are adjusted so as to minimize the error between the desired and the actual output. The backpropagation learning algorithm is described below.

Step 1: Present a continuous-valued input vector x = (x1, x2, ..., xn)T to layer L1 and obtain the output vector y = (y1, y2, ..., ym)T at layer L3. In order to obtain the output vector y, calculation is done layer by layer from L1 to L3.

Step 2: Calculate the change in weights. The output vector y is compared with the desired output vector, or target vector, d, and the error is propagated backward to obtain the change in weight ∆wij that is used to update the weights. For weights between layers L2-L3,

∆wij = -∂E/∂wij    (11)

Equation (11) can be reduced to

∆wij = α δi oj    (12)

where α is a training rate coefficient (typically 0.01 to 1.0), oj is the output of neuron j in layer L2, and δi is given by

δi = [∂F(neti)/∂neti] (di - oi) = oi (1 - oi) (di - oi)    (13)

In Equation (13), oi represents the actual output of neuron i in layer L3, and di represents the target or desired output at neuron i in layer L3. The hidden layer L2 has no target vector, so Equation (13) cannot be used for it. The backpropagation algorithm trains hidden layers by propagating the output error back layer by layer, adjusting the weights at each layer. The change in weights between layers L1-L2 is obtained as

∆wij = β oj δHi    (14)

where β is a training rate coefficient for layer L1 (typically 0.01 to 1.0), oj is the output of neuron j in layer L1, and

δHi = oi (1 - oi) Σk δk wki    (15)

In Equation (15), oi is the output of neuron i in layer L2, and the summation term represents the weighted sum of all the δ values corresponding to neurons in layer L3, which are obtained using Equation (13).

Step 3: Update the weights:

wij(n + 1) = wij(n) + ∆wij    (16)

where wij(n + 1) represents the value of the weight at iteration n + 1 (after adjustment), and wij(n) represents the value of the weight at iteration n.

Step 4: Obtain the error ε for neurons in layer L3:

ε = Σi (oi - di)²    (17)

If the error ε is greater than some minimum εmin, repeat Steps 2 through 4; otherwise terminate the training process.

Figure 2: A three-layer feed-forward network (inputs x1, ..., x4 at L1; outputs y1, y2, y3 at L3).
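The following Python/NumPy sketch implements Steps 1 through 4 and Equations (9)-(17). It is a minimal illustration, not the authors' code; the layer sizes, training rates, and the synthetic sample are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 5, 13, 3                # placeholder layer sizes
alpha = beta = 0.1                           # training rate coefficients
W1 = rng.uniform(-0.5, 0.5, (n_hid, n_in))   # L1 -> L2 weights
W2 = rng.uniform(-0.5, 0.5, (n_out, n_hid))  # L2 -> L3 weights

def sigmoid(net):                            # Eq. (10), with bias folded to zero
    return 1.0 / (1.0 + np.exp(-net))

def train_step(x, d):
    global W1, W2
    # Step 1: forward pass, Eqs. (9)-(10).
    o_hid = sigmoid(W1 @ x)
    o_out = sigmoid(W2 @ o_hid)
    # Step 2: output-layer deltas, Eq. (13); weight change, Eq. (12).
    delta_out = o_out * (1 - o_out) * (d - o_out)
    dW2 = alpha * np.outer(delta_out, o_hid)
    # Hidden-layer deltas, Eq. (15); weight change, Eq. (14).
    delta_hid = o_hid * (1 - o_hid) * (W2.T @ delta_out)
    dW1 = beta * np.outer(delta_hid, x)
    # Step 3: update the weights, Eq. (16).
    W2 += dW2
    W1 += dW1
    # Step 4: squared error at the output layer, Eq. (17).
    return np.sum((o_out - d) ** 2)

x = rng.random(n_in)                         # one synthetic training sample
d = np.array([1.0, 0.0, 0.0])                # target vector for class 1
for _ in range(1000):
    err = train_step(x, d)
print(f"error after training: {err:.6f}")
```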

Fuzzy-Neural Network Models

A model for a three-layer fuzzy-neural decision system is shown in Figure 3. Each layer consists of a number of simple processing units. Layer L1 is the input layer, and layer L2 performs the function of the fuzzifier block shown in Figure 1. Units in L2 may be compound units so as to implement a desired membership function. We have chosen Gaussian membership functions; however, membership functions of other shapes, such as triangular or π-shaped functions, can be used. Initially, the membership functions are determined using the mean and standard deviation values of the input variables; subsequently, during learning, these functions are updated. Layers L2 and L3 represent a two-layer feed-forward network. The connection strengths connecting these layers encode the fuzzy rules used in decision making. In order to encode decision rules, we have used a gradient descent search technique (Pao, 1989). The algorithm minimizes the mean squared error obtained by comparing the desired output with the actual output. The model learns in two phases: during the first phase the weights between layers L2 and L3 are updated so as to minimize the mean squared error, and during the second phase the fuzzy membership functions are updated. Once learning is completed, the model can be used to classify any unknown input sample. The layers in the model are described below.

Layer L1. The number of units in this layer is equal to the number of input features. Units in this layer correspond to input features, and they just transmit the input vector to the next layer. The net-input and activation function for this layer are given by

neti = xi, outi = neti    (18)

where neti indicates the net-input, and outi indicates the output of unit i.

Layer L2. This layer implements the membership functions. In the present case we have used five term variables {very-low, low, medium, high, very-high} for each input feature, so the number of units in layer L2 is five times the number of units in L1. The net-input and activation function for the units are chosen so as to implement Gaussian membership functions, given by

f(x; σ, m) = exp[-(x - m)² / 2σ²]    (19)

where m represents the mean value and σ represents the standard deviation of a given membership function. The net-input and output for units in L2 are given by

neti = xi, outi = f(xi; σ, m)    (20)

Layers L2 and L3. These layers implement the inference engine. Layers L2 and L3 represent a simple two-layer feed-forward network: layer L2 serves as the input layer and L3 represents the output layer. The number of units in the output layer is equal to the number of output classes. The net-input and output for units in L3 are given by

neti = Σj outj wij    (21)

outi = 1 / {1 + exp[-(neti + φ)]}    (22)

where outi is the output of unit i and φ is a constant. Initially the weights between layers L2 and L3 are chosen randomly, and subsequently they are updated. The fuzzy membership functions are initially determined based on the minimum and maximum values of the input features. The algorithm minimizes the mean squared error between the desired and actual outputs. The learning algorithm is described below.

Step 1: Present a continuous-valued input vector x = (x1, x2, ..., xn)T to layer L1, and obtain the output vector o = (o1, o2, ..., om)T at layer L3. In order to obtain the output vector o, calculations are done layer by layer from L1 to L3.

Step 2: Calculate the change in weights. The output vector o is compared with the desired output vector, or target vector, d, and the mean squared error is propagated backward. The change in weight ∆wij is given by

∆wij = -α ∂E/∂wij    (23)

where α is a training rate coefficient (typically 0.01 to 1.0) and E represents the mean squared error at L3. Using the chain rule to evaluate the derivative in Equation (23), we get

∆wij = α δi oj    (24)

where δi = oi (1 - oi) (di - oi).

In order to update the membership functions, we need to find the change in the parameters that define them, i.e., the mean values and standard deviations. Again using the chain rule, the change in the mean values is given by

∆mkj = -β ∂E/∂mkj    (25)

∆mkj = β exp[-(xk - mkj)² / 2σkj²] [(xk - mkj) / σkj²] Σi δi wij    (26)

where δi = oi (1 - oi) (di - oi). Similarly, for the changes in the standard deviation values we get

∆σkj = -γ ∂E/∂σkj    (27)

∆σkj = γ exp[-(xk - mkj)² / 2σkj²] [(xk - mkj)² / σkj³] Σi δi wij    (28)
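A small Python/NumPy sketch of the fuzzification layer and of the parameter updates in the form of Equations (19), (26), and (28) as reconstructed above; the function names, the width heuristic, and the array layout are our assumptions, not the paper's:

```python
import numpy as np

def init_memberships(X):
    """Five Gaussian term sets per feature ({very-low, ..., very-high});
    means are spread between each feature's min and max, and the widths
    use a simple heuristic (both choices are assumptions)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    m = np.linspace(lo, hi, 5).T                       # (n_features, 5) means
    s = np.repeat(((hi - lo) / 8.0)[:, None], 5, axis=1)
    return m, s

def fuzzify(x, m, s):
    """Layer L2 output: membership grade of each feature in each term set,
    Eq. (19)."""
    return np.exp(-(x[:, None] - m) ** 2 / (2.0 * s ** 2))

def membership_updates(x, m, s, delta, W, beta=0.05, gamma=0.05):
    """Parameter changes in the form of Eqs. (26) and (28); delta holds the
    output-layer deltas and W the L2-L3 weights (feature-major ordering)."""
    g = fuzzify(x, m, s)                               # current grades, (k, j)
    back = (W.T @ delta).reshape(m.shape)              # Σ_i δ_i w_ij per L2 unit
    dm = beta * g * ((x[:, None] - m) / s ** 2) * back        # Eq. (26)
    ds = gamma * g * ((x[:, None] - m) ** 2 / s ** 3) * back  # Eq. (28)
    return m + dm, s + ds
```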

Figure 3: A three-layer fuzzy-neural inference system (inputs x1, ..., xn at L1; five membership units per feature at L2; outputs O1, ..., Om at L3).

Step 3: Update the weights and membership functions:

wij(k + 1) = wij(k) + ∆wij
mkj(k + 1) = mkj(k) + ∆mkj
σkj(k + 1) = σkj(k) + ∆σkj    (29)

The update procedure can be implemented in two phases: during the first phase we update the weights and treat the membership functions as constants, whereas during the second phase we update the membership functions and keep the updated weights unchanged.

Step 4: Obtain the mean squared error at L3:

ε = Σi (oi - di)²    (30)

If the error is greater than some minimum value εmin, repeat Steps 2 through 4.

A model for a four-layer fuzzy-neural inference system is shown in Figure 4. Each layer consists of a number of simple processing units. Layer L1 is the input layer, and layer L2 performs the function of the fuzzifier block shown in Figure 1. Layers L2, L3, and L4 represent a backpropagation network which performs the functions of the inference engine and knowledge base. The connection strengths connecting these layers encode the decision rules used in decision making. The fuzzy inference engine in a fuzzy decision system maps input fuzzy memberships to output fuzzy memberships, and this mapping is determined by fuzzy rules. A feed-forward neural network also maps multiple inputs to multiple outputs; it is therefore always possible to implement an inference engine with a feed-forward neural network, whose mapping function is determined by the training samples. In order to encode the decision rules we have used the backpropagation learning algorithm, which essentially uses a gradient search technique. The algorithm minimizes the mean squared error obtained by comparing the desired output with the actual output. The model works in two phases: the training phase and the decision-making phase. During the training phase the model is trained using training set data. The layers in the model are described below.

Layers L1 and L2. These layers are the same as in the three-layer model: the number of units in L1 is equal to the number of input features, units in L1 just transmit the input vector to the next layer, and units in L2 implement the fuzzy membership functions.

Layers L2, L3, and L4. These layers implement the inference engine. They represent a simple three-layer feed-forward network with a backpropagation learning algorithm: layer L2 serves as the input layer, layer L3 is the hidden layer, and layer L4 represents the output layer. The number of units in the output layer is equal to the number of output decisions. The learning algorithm for this model is as follows.

Step 1: Present a continuous-valued input vector x = (x1, x2, ..., xn)T to layer L1, and obtain the output vector o = (o1, o2, ..., om)T at layer L4. In order to obtain the output vector o, calculations are done layer by layer from L1 to L4.

Step 2: Calculate the changes in the weights between layers L2-L3 and L3-L4 using Equations (12) and (14), respectively.

Step 3: Update the weights using Equation (16).

Step 4: Obtain the error ε for neurons in layer L4:

ε = Σi (oi - di)²    (31)

If the error ε is greater than some minimum εmin, repeat Steps 2 through 4; otherwise terminate the training process.
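Putting the pieces together, here is a hypothetical forward pass for the four-layer model, reusing `init_memberships` and `fuzzify` from the sketch above. The 5-25-32-3 layer sizes follow the simulation section below; the data and weights are placeholders, and training of W1 and W2 proceeds exactly as in Equations (11)-(17):

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.random((300, 5)) * 255       # synthetic 5-band "pixel" vectors
m, s = init_memberships(X_train)           # fixed Gaussian fuzzifier (L1 -> L2)
W1 = rng.uniform(-0.5, 0.5, (32, 25))      # L2 -> L3 weights
W2 = rng.uniform(-0.5, 0.5, (3, 32))       # L3 -> L4 weights

def forward(x, W1, W2):
    """Four-layer model: fuzzify the input, then a standard feed-forward pass."""
    z = fuzzify(x, m, s).ravel()                    # L2: 25 membership grades
    h = 1.0 / (1.0 + np.exp(-(W1 @ z)))             # L3: hidden layer
    o = 1.0 / (1.0 + np.exp(-(W2 @ h)))             # L4: class outputs
    return z, h, o

_, _, o = forward(X_train[0], W1, W2)
print("class outputs:", np.round(o, 3))
```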

Figure 4: A four-layer fuzzy-neural inference system (inputs at L1, fuzzification at L2, hidden layer L3, outputs at L4).

Computer Simulation

We have developed software to simulate the three models: a three-layer feed-forward network with backpropagation learning, a three-layer fuzzy inference system, and a four-layer fuzzy inference system. These models are used as supervised classifiers to classify pixels based on their spectral signatures. We have considered two Thematic Mapper (TM) scenes: the first scene represents the Mississippi river bottom-land, and the second represents the Yellowstone forest area. Both scenes are of size 512 pixels × 512 scans. Each pixel in these scenes was represented by a vector of five gray values. We used only five spectral bands (bands 2, 3, 4, 5, and 7), because these bands showed the maximum variance and contained the information needed to identify the various classes. During the training phase the networks were trained using training set data. We selected three training set areas, each of size 10 scans × 10 pixels, for each scene. These represent three different classes, each represented by a small homogeneous area. Only a small fraction (300 pixels) of the entire data set (262,144 pixels) was used for training samples. The target output vectors for the three classes were defined as (1,0,0), (0,1,0), and (0,0,1), representing classes 1 through 3, respectively. During the learning phase the networks were trained with the training set data, and during the decision-making phase the training set data were reclassified to check the accuracy.

In the case of the backpropagation network, layers L1, L2, and L3 contained five, thirteen, and three units, respectively. Units in layer L1 represent the input features, and units in L3 correspond to the output classes. The network was trained using 300 training samples in each case. Spectral signatures for the three classes of the Mississippi scene are shown in Figure 5. The original data for both scenes are shown in Figures 6a and 7a, and the corresponding classified output images obtained using the backpropagation network are shown in Figures 6b and 7b, respectively. In the case of the Mississippi scene the network was able to converge in 2000 iterations.

In the case of the three-layer fuzzy-neural network system, layers L1, L2, and L3 contained five, twenty-five, and three units, respectively. Units in layer L1 represent the input features; units in L2 correspond to five fuzzy sets for each feature, represented by the term set {very-low, low, medium, high, very-high}; and units in L3 represent the output categories. Learning for this network was achieved in two phases: during the first phase the weights between layers L2-L3 were updated, and during the second phase the membership functions were updated. For the Mississippi scene the model was able to converge in 2500 iterations. Both scenes were analyzed with this model, and the classified outputs are shown in Figures 6c and 7c, respectively.

In the case of the four-layer fuzzy inference system, the first two layers are the same as in the three-layer fuzzy inference system, while layers L2, L3, and L4 represent a three-layer backpropagation network. In this model layers L1, L2, L3, and L4 contained five, twenty-five, thirty-two, and three units, respectively. We used Gaussian fuzzy membership functions. During learning, the weights between layers L2-L3 and L3-L4 were updated. Learning in this model was achieved in a single phase: the fuzzy membership functions were predetermined and were not updated during learning. In the case of the Mississippi scene the model was able to converge in fifty iterations. The outputs obtained with this model are shown in Figures 6d and 7d, respectively.
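In the decision-making phase, each pixel vector is simply pushed through the trained network. A hypothetical sketch, reusing `forward` from the four-layer example above (`scene` is a placeholder (512, 512, 5) array holding the five TM bands):

```python
import numpy as np

def classify_scene(scene, W1, W2):
    """Assign each pixel to the class with the largest output activation."""
    rows, cols, _ = scene.shape
    out = np.zeros((rows, cols), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            _, _, o = forward(scene[r, c], W1, W2)
            out[r, c] = np.argmax(o) + 1      # class labels 1..3
    return out
```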

Figure 5: Spectral signatures of the three classes in the Mississippi scene (reflectance versus spectral bands 1-5).

Figure 6a: Raw data (band 5). Figure 6b: Classified output with a backpropagation network. Figure 6c: Classified output with a three-layer fuzzy inference system. Figure 6d: Classified output with a four-layer fuzzy inference system.

Figure 7a: Raw data (band 5). Figure 7b: Classified output with a backpropagation network. Figure 7c: Classified output with a three-layer fuzzy inference system. Figure 7d: Classified output with a four-layer fuzzy inference system.

Discussions and Conclusion

In this paper, we have developed and implemented two new fuzzy-neural network models for supervised classification. Conventional statistical classifiers are sequential in nature: with a classifier such as the maximum likelihood classifier, for each pixel we obtain the probability of that pixel belonging to each of the classes and assign the pixel to the class with the greatest probability. This process is time-consuming, as we have to evaluate each pixel for all the possible classes. Neural networks are preferred because they work in parallel: once the network is trained, we present the input sample and the network yields the output decision. The backpropagation network was able to learn in 2000 iterations, whereas the three-layer fuzzy inference system took 2500 iterations to converge. That model took a larger number of iterations because of the fine tuning of the fuzzy membership functions; updating the membership functions enables the model to reduce the mean squared error further. The four-layer fuzzy inference system was able to learn in fifty iterations. This is because the fuzzy membership functions are not updated in this model: the model has more weights, and consequently it could learn with predetermined fuzzy membership functions, so there was no need for fine tuning.

The fuzzification process is a nonlinear mapping which increases the dimensionality of the feature space; however, it also increases the separability of the classes in the feature space. In order to evaluate the effect of fuzzification we used the separability measure J = ||Sb|| / ||Sw||, where Sb is the between-class scatter matrix, Sw is the within-class scatter matrix, and ||.|| denotes the matrix norm (Duda and Hart, 1973). We evaluated the separability of the classes before and after fuzzification, and found a 31.31 percent increase in separability. Also, with a fuzzy-neural inference decision system we can interpret the decision rules in terms of linguistic variables (Mitra and Pal, 1994); this can be achieved by displaying the fuzzy membership values at layer L2 together with the output decision vector. In the present case we used predetermined Gaussian membership functions; however, other functions, such as triangular and π-shaped functions with different overlaps, can be used for fuzzification. Neural networks represent a powerful and reasonable alternative to conventional classification methods. Our experiments suggest that by combining fuzzy logic with neural networks we can develop more efficient decision systems.
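As a sketch of how the separability measure above can be computed (our own illustration; the choice of the Frobenius norm and the hypothetical data and labels are assumptions):

```python
import numpy as np

def separability(X, y):
    """J = ||Sb|| / ||Sw|| with between-class and within-class scatter
    matrices (Duda and Hart, 1973); ||.|| is the Frobenius norm here."""
    mean_all = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))
    Sw = np.zeros_like(Sb)
    for c in np.unique(y):
        Xc = X[y == c]
        diff = (Xc.mean(axis=0) - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)          # between-class scatter
        Xc_centered = Xc - Xc.mean(axis=0)
        Sw += Xc_centered.T @ Xc_centered        # within-class scatter
    return np.linalg.norm(Sb) / np.linalg.norm(Sw)

# Hypothetical comparison of raw versus fuzzified features:
# J_raw  = separability(X_train, labels)
# J_fuzz = separability(np.array([fuzzify(x, m, s).ravel() for x in X_train]),
#                       labels)
```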

Acknowledgments

The research work presented here was supported in part by the 1997 NASA / ASEE summer faculty fellowship and 1996-97 University Faculty Research grants.

References

Berenji, H. R., and Khedkar, P., 1992, Learning and tuning fuzzy logic controllers through reinforcements. IEEE Transactions on Neural Networks, 3:724-740.

Bezdek, J. C., 1993, Editorial: fuzzy models - what are they, and why? IEEE Transactions on Fuzzy Systems, 1:1-5.

Bischof, H., et al., 1992, Multispectral classification of Landsat images using neural networks. IEEE Transactions on Geoscience and Remote Sensing, 30:482-490.

Buckley, J. J., and Hayashi, Y., 1994, Fuzzy neural networks. In: Yager, R. R., and Zadeh, L. A. (Editors), Fuzzy Sets, Neural Networks, and Soft Computing. Van Nostrand Reinhold, New York, NY, 233-249.

Cox, E., 1994, The fuzzy systems handbook. Academic Press, Cambridge, MA.

Duda, R. O., and Hart, P. E., 1973, Pattern classification and scene analysis. John Wiley and Sons, New York, NY.

Gupta, M. M., 1994, Fuzzy neural networks: theory and applications. Proceedings of SPIE, 2353:303-325.

Horikawa, S., Furuhashi, T., and Uchikawa, Y., 1992, On fuzzy modeling using neural networks with the backpropagation algorithm. IEEE Transactions on Neural Networks, 3:801-806.

Jang, J. S. R., 1993, ANFIS: Adaptive-network-based fuzzy inference systems. IEEE Transactions on Systems, Man, and Cybernetics, 23:665-685.

Jang, J. S. R., and Sun, C. T., 1995, Neuro-fuzzy modeling and control. Proceedings of the IEEE, 83:378-406.

Jang, J. S. R., Sun, C. T., and Mizutani, E., 1997, Neuro-fuzzy and soft computing. Prentice Hall, Upper Saddle River, NJ.

Kosko, B., 1992, Neural networks and fuzzy systems. Prentice Hall, Englewood Cliffs, NJ.

Krishnapuram, R., and Lee, J., 1992, Fuzzy-set-based hierarchical networks for information fusion in computer vision. Neural Networks, 5:335-350.

Kulkarni, A. D., 1994, Artificial neural networks for image understanding. Van Nostrand Reinhold, New York, NY.

Kulkarni, A. D., Giridhar, G. B., and Coca, P., 1995, Neural network based fuzzy logic decision systems for multispectral image analysis. Neural, Parallel & Scientific Computations, 3:205-218.

Lee, C. C., 1990, Fuzzy logic in control systems: fuzzy logic controller, Part I. IEEE Transactions on Systems, Man, and Cybernetics, 20:404-418.

Lin, C. T., and Lee, G. C. S., 1991, Neural-network-based fuzzy logic control and decision system. IEEE Transactions on Computers, 40:1320-1336.

Lin, C. T., and Lee, G. C. S., 1994, Reinforcement structure/parameter learning for neural-network-based fuzzy logic control systems. IEEE Transactions on Fuzzy Systems, 2:46-63.

Mendel, J. M., 1995, Fuzzy logic systems for engineering: a tutorial. Proceedings of the IEEE, 83:345-377.

Pal, S. K., and Mitra, S., 1992, Multilayer perceptron, fuzzy sets, and classification. IEEE Transactions on Neural Networks, 3:683-697.

Pao, Y. H., 1989, Adaptive pattern recognition and neural networks. Addison-Wesley, Reading, MA.

Takagi, H., and Hayashi, I., 1991, NN-driven fuzzy reasoning. International Journal of Approximate Reasoning, 5:191-212.

Yager, R. R., and Zadeh, L. A. (Editors), 1994, Fuzzy sets, neural networks, and soft computing. Van Nostrand Reinhold, New York, NY.

Zadeh, L. A., 1965, Fuzzy sets. Information and Control, 8:338-352.

Zadeh, L. A., 1973, Outline of a new approach to the analysis of complex systems and decision processes. IEEE Transactions on Systems, Man, and Cybernetics, 3:28-44.

Zadeh, L. A., 1994, Fuzzy logic, neural networks, and soft computing. Communications of the ACM, 37:77-84.
