'Useful' Measures of Fuzzy Information, Total Ambiguity and Fuzzy Directed Divergence

D. S. Hooda (ds [email protected]), Jaypee Institute of Engineering and Technology, Guna, INDIA
Rakesh Kumar Bajaj ([email protected], Corresponding Author), Jaypee University of Information Technology, Waknaghat, INDIA

Abstract

In the present communication, a new concept of 'useful' fuzzy information is introduced by incorporating the uncertainties of fuzziness and the probabilities of randomness. A measure of 'useful' total fuzzy information is defined and computed. We also define a new 'useful' measure of fuzzy directed divergence of a fuzzy set from another fuzzy set and prove its validity. In the end, the constrained optimization of 'useful' fuzzy entropy and 'useful' fuzzy directed divergence is studied.

Keywords: Fuzzy entropy, 'useful' fuzzy information, total fuzzy information and constrained optimization.
2000 Mathematics Subject Classification: 94D05 and 94A15.

1 Introduction

Zadeh (1965) introduced the concept of fuzzy sets and developed the theory to measure the ambiguity of a fuzzy set. Fuzzy set theory makes use of entropy to measure the degree of fuzziness in a fuzzy set, which is also called fuzzy entropy (Ebanks, 1983; Pal and Bezdek, 1994). Fuzzy entropy is the measurement of fuzziness in a fuzzy set, and thus occupies an especially important position in fuzzy systems, such as fuzzy pattern recognition systems, fuzzy neural network systems, fuzzy knowledge base systems, fuzzy decision making systems, fuzzy control systems and fuzzy management information systems.

Let U be any set. A fuzzy subset A in U is characterized by a membership function \mu_A : U \to [0, 1]. The value \mu_A(x) represents the grade of membership of x \in U in A. The membership function may be described as follows:

    \mu_A(x) = \begin{cases} 0, & \text{if } x \text{ does not belong to } A \text{ and there is no ambiguity,} \\ 1, & \text{if } x \text{ belongs to } A \text{ and there is no ambiguity,} \\ 0.5, & \text{if there is maximum ambiguity whether } x \text{ belongs to } A \text{ or not.} \end{cases}

Two fuzzy sets A and B are said to be fuzzy-equivalent if \mu_B(x_i) = \mu_A(x_i) or 1 - \mu_A(x_i) for each value of i. It is clear that fuzzy-equivalent sets have the same entropy, but two sets may have the same fuzzy entropy without being fuzzy-equivalent. From the fuzziness point of view there is no essential difference between fuzzy-equivalent sets. A standard fuzzy set is that member of a class of fuzzy-equivalent sets all of whose membership values are less than or equal to 0.5.

Let X be a discrete random variable with probabilities P = (p_1, p_2, \ldots, p_n) in an experiment. We call (X, P) a discrete probabilistic framework. The information contained in this experiment is given by the well-known Shannon entropy (1948):

    H(P) = -\sum_{i=1}^{n} p_i \log p_i.    (1.1)

It should be noted that the meaning of fuzzy entropy is quite different from that of the classical Shannon entropy, because no probabilistic concept is needed to define it. Fuzzy entropy deals with vagueness and ambiguity, while Shannon entropy deals with probabilistic uncertainty. Deluca and Termini (1972) characterized fuzzy entropy and introduced the following set of properties (P1)-(P4) that a fuzzy entropy should satisfy:

• Sharpness (P1): H(A) is minimum iff A is a crisp set, i.e., \mu_A(x) = 0 or 1 for all x.

• Maximality (P2): H(A) is maximum iff A is the most fuzzy set, i.e., \mu_A(x) = 0.5 for all x.

• Resolution (P3): H(A) \ge H(A^*), where A^* is a sharpened version of A.

• Symmetry (P4): H(A) = H(\bar{A}), where \bar{A} is the complement of A, i.e., \mu_{\bar{A}}(x_i) = 1 - \mu_A(x_i).

These properties of fuzzy entropy are widely accepted and have become a criterion for defining any new fuzzy entropy. Corresponding to the entropy due to Shannon (1948), Deluca and Termini (1972) suggested the following measure of fuzzy entropy:

    H(A) = -\sum_{i=1}^{n} \left[\mu_A(x_i) \log \mu_A(x_i) + (1 - \mu_A(x_i)) \log(1 - \mu_A(x_i))\right].    (1.2)
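As a quick numerical illustration of (1.2), the following minimal Python sketch (the function and variable names are our own, not part of the original text) computes the Deluca-Termini fuzzy entropy of a membership vector:

```python
import math

def fuzzy_entropy(mu):
    """Deluca-Termini fuzzy entropy (1.2) of a membership vector mu.

    Terms with mu = 0 or mu = 1 contribute nothing, matching the
    convention 0 log 0 = 0.
    """
    h = 0.0
    for m in mu:
        if 0.0 < m < 1.0:
            h -= m * math.log(m) + (1.0 - m) * math.log(1.0 - m)
    return h

# A crisp set has zero entropy; the most fuzzy set attains the maximum n log 2.
print(fuzzy_entropy([0.0, 1.0, 1.0]))   # 0.0
print(fuzzy_entropy([0.5, 0.5, 0.5]))   # 3 log 2 ~ 2.0794
```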

As (1.2) satisfies all four properties (P1) to (P4), it is a valid measure of fuzzy entropy. Later on, Bhandari and Pal (1993) made a survey of information measures on fuzzy sets and gave some measures of fuzzy entropy. Corresponding to Renyi's (1961) entropy they suggested the measure

    H_\alpha(A) = \frac{1}{1-\alpha} \sum_{i=1}^{n} \log\left[\mu_A^\alpha(x_i) + (1 - \mu_A(x_i))^\alpha\right]; \qquad \alpha \neq 1, \ \alpha > 0,    (1.3)

and corresponding to Pal and Pal's (1989) exponential entropy they introduced

    H_e(A) = \frac{1}{n(\sqrt{e} - 1)} \sum_{i=1}^{n} \log\left[\mu_A(x_i) e^{1-\mu_A(x_i)} + (1 - \mu_A(x_i)) e^{\mu_A(x_i)} - 1\right].    (1.4)
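For illustration, (1.3) can be computed directly; the snippet below is our own sketch (names hypothetical), valid for \alpha > 0, \alpha \neq 1:

```python
import math

def renyi_fuzzy_entropy(mu, alpha):
    """Bhandari-Pal fuzzy entropy (1.3) of order alpha (alpha > 0, alpha != 1)."""
    return sum(math.log(m**alpha + (1.0 - m)**alpha) for m in mu) / (1.0 - alpha)

print(renyi_fuzzy_entropy([0.5, 0.5], alpha=2.0))   # 2 log 2 at maximum fuzziness
```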

Note that the Shannon entropy does not take into account the effectiveness or importance of the events, while in some practical situations of a probabilistic nature subjective considerations also play their own role. Belis and Guiasu (1968) considered the qualitative aspect of information and attached a utility distribution U = (u_1, u_2, \ldots, u_n), where u_i > 0 is the utility or importance of an event x_i whose probability of occurrence is p_i. In general u_i is independent of p_i. They suggested that the occurrence of an event removes two types of uncertainty: the quantitative type, related to its probability of occurrence, and the qualitative type, related to its utility (importance) for the fulfillment of some goal set by the experimenter. Bhaker and Hooda (1993) gave the generalized mean value characterization of the useful information measures for incomplete probability distributions:

    H(P; U) = -\frac{\sum_{i=1}^{n} u_i p_i \log p_i}{\sum_{i=1}^{n} u_i p_i},    (1.5)

and

    H_\alpha(P; U) = \frac{1}{1-\alpha} \log \frac{\sum_{i=1}^{n} u_i p_i^\alpha}{\sum_{i=1}^{n} u_i p_i}; \qquad \alpha \neq 1, \ \alpha > 0.    (1.6)
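For concreteness, here is a small Python sketch of the useful information measures (1.5) and (1.6); it is our own illustration, with hypothetical variable names:

```python
import math

def useful_entropy(p, u):
    """Bhaker-Hooda 'useful' entropy (1.5): utility-weighted Shannon entropy."""
    num = -sum(ui * pi * math.log(pi) for pi, ui in zip(p, u) if pi > 0)
    den = sum(ui * pi for pi, ui in zip(p, u))
    return num / den

def useful_renyi_entropy(p, u, alpha):
    """Generalized 'useful' entropy (1.6) of order alpha (alpha > 0, alpha != 1)."""
    num = sum(ui * pi**alpha for pi, ui in zip(p, u))
    den = sum(ui * pi for pi, ui in zip(p, u))
    return math.log(num / den) / (1.0 - alpha)

p = [0.5, 0.3, 0.2]   # probabilities of the events
u = [3.0, 1.0, 1.0]   # utilities attached to the events
print(useful_entropy(p, u))                # utility-weighted average surprisal
print(useful_renyi_entropy(p, u, 0.999))   # approaches (1.5) as alpha -> 1
```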

The first attempt to quantify the uncertainty associated with a fuzzy event in the context of a discrete probabilistic framework appears to have been made by Zadeh (1968), who defined the (weighted) entropy of A with respect to (X, P) as

    H(A, P) = -\sum_{i=1}^{n} \mu_A(x_i) p_i \log p_i,    (1.7)

where \mu_A is the membership function of A and p_i is the probability of occurrence of x_i. One can notice that this situation contains different types of uncertainty (randomness, ambiguity and vagueness), i.e., both randomness and fuzziness. This measure does not satisfy properties (P1) to (P4), nor was it meant to. H(A, P) of a fuzzy event with respect to P is less than Shannon's entropy of P alone.

In Section 2 of the present paper, we introduce a new concept of a 'useful' fuzzy information measure by incorporating the uncertainties of fuzziness and the probabilities of randomness having utilities. In Section 3, a new measure of total 'useful' fuzzy information is computed by considering the usefulness of an event along with the fuzzy and random uncertainties. In Section 4, we define a 'useful' measure of fuzzy directed divergence of fuzzy set A from fuzzy set B and prove its validity.

2 'Useful' Fuzzy Information Measures

Let x_1, x_2, \ldots, x_n be members of the universe of discourse having probabilities of occurrence p_1, p_2, \ldots, p_n and utilities (importances) u_1, u_2, \ldots, u_n respectively. Let A be a fuzzy set. Then \mu_A(x_1), \mu_A(x_2), \ldots, \mu_A(x_n) are ambiguities or uncertainties which lie between 0 and 1, but they are not probabilities because their sum is not unity. However,

    \Phi_A(x_i) = \frac{\mu_A(x_i)}{\sum_{i=1}^{n} \mu_A(x_i)}, \qquad i = 1, 2, \ldots, n,    (2.1)

is a probability distribution. Since \mu_A(x) and 1 - \mu_A(x) give the same degree of fuzziness, incorporating altogether the uncertainties of fuzziness and the probabilities of randomness having utilities, we suggest the following measure of fuzziness of a fuzzy set, analogous to Deluca and Termini's fuzzy entropy (1.2):

    H(A; P; U) = -\frac{\sum_{i=1}^{n} u_i p_i \left[\mu_A(x_i) \log \mu_A(x_i) + (1 - \mu_A(x_i)) \log(1 - \mu_A(x_i))\right]}{\sum_{i=1}^{n} u_i p_i},    (2.2)

which may be called the 'useful' fuzzy entropy of the fuzzy set A.
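A minimal Python sketch of (2.2) (an illustration of ours, not code from the original) weights each element's fuzziness term by u_i p_i and normalizes:

```python
import math

def useful_fuzzy_entropy(mu, p, u):
    """'Useful' fuzzy entropy (2.2): utility- and probability-weighted
    Deluca-Termini fuzziness, normalized by sum(u_i * p_i)."""
    def term(m):
        # 0 log 0 is taken as 0, so crisp memberships contribute nothing.
        if m <= 0.0 or m >= 1.0:
            return 0.0
        return m * math.log(m) + (1.0 - m) * math.log(1.0 - m)

    num = -sum(ui * pi * term(mi) for mi, pi, ui in zip(mu, p, u))
    den = sum(ui * pi for pi, ui in zip(p, u))
    return num / den

# Crisp memberships give 0; all memberships at 0.5 give the maximum log 2.
print(useful_fuzzy_entropy([0.0, 1.0], [0.6, 0.4], [2.0, 1.0]))   # 0.0
print(useful_fuzzy_entropy([0.5, 0.5], [0.6, 0.4], [2.0, 1.0]))   # log 2 ~ 0.6931
```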

Theorem 1: The measure (2.2) satisfies the four properties (P1) to (P4) which are necessary for the validity of a measure of fuzziness of a fuzzy set A.

Proof: (P1) Sharpness: If H(A; P; U) = 0, then

    \sum_{i=1}^{n} u_i p_i \left[\mu_A(x_i) \log \mu_A(x_i) + (1 - \mu_A(x_i)) \log(1 - \mu_A(x_i))\right] = 0
    ⇒ \mu_A(x_i) \log \mu_A(x_i) + (1 - \mu_A(x_i)) \log(1 - \mu_A(x_i)) = 0,    [since u_i, p_i > 0 ∀i]
    ⇒ either \mu_A(x_i) = 0 or 1, ∀i = 1, 2, \ldots, n,
    ⇒ A is a crisp set.

Conversely, let A be a crisp set; then either \mu_A(x_i) = 0 or \mu_A(x_i) = 1 ∀i
    ⇒ \mu_A(x_i) \log \mu_A(x_i) + (1 - \mu_A(x_i)) \log(1 - \mu_A(x_i)) = 0, ∀i
    ⇒ \sum_{i=1}^{n} u_i p_i \left[\mu_A(x_i) \log \mu_A(x_i) + (1 - \mu_A(x_i)) \log(1 - \mu_A(x_i))\right] = 0,    [u_i, p_i > 0 ∀i]
    ⇒ H(A; P; U) = 0.

Hence H(A; P; U) = 0 if and only if A is a non-fuzzy (crisp) set.

(P2) Maximality: Differentiating H(A; P; U) with respect to \mu_A(x_i), we have

    \frac{\partial H(A; P; U)}{\partial \mu_A(x_i)} = \frac{u_i p_i}{\sum_{j=1}^{n} u_j p_j} \log \frac{1 - \mu_A(x_i)}{\mu_A(x_i)}.    (2.3)

Case 1: 0 < \mu_A(x_i) < 0.5. In this case \log \frac{1 - \mu_A(x_i)}{\mu_A(x_i)} > 0, so \frac{\partial H(A; P; U)}{\partial \mu_A(x_i)} > 0. Thus H(A; P; U) is an increasing function of \mu_A(x_i) for 0 < \mu_A(x_i) < 0.5.

Case 2: 0.5 < \mu_A(x_i) < 1. In this case \log \frac{1 - \mu_A(x_i)}{\mu_A(x_i)} < 0, so \frac{\partial H(A; P; U)}{\partial \mu_A(x_i)} < 0, and H(A; P; U) is a decreasing function of \mu_A(x_i) for 0.5 < \mu_A(x_i) < 1.

Hence H(A; P; U) is maximum if and only if \mu_A(x_i) = 0.5 ∀i.

(P3) Resolution: Since H(A; P; U) increases on (0, 0.5) and decreases on (0.5, 1) in each \mu_A(x_i), sharpening A (moving each membership value away from 0.5) cannot increase the measure, so H(A; P; U) \ge H(A^*; P; U).

(P4) Symmetry: The expression (2.2) is unchanged when \mu_A(x_i) is replaced by 1 - \mu_A(x_i); hence H(A; P; U) = H(\bar{A}; P; U). This completes the proof.

Similarly, corresponding to (1.3) and (1.4), we define

    H_\alpha(A; P; U) = \frac{1}{1-\alpha} \frac{\sum_{i=1}^{n} u_i p_i \log\left[\mu_A^\alpha(x_i) + (1 - \mu_A(x_i))^\alpha\right]}{\sum_{i=1}^{n} u_i p_i}; \qquad \alpha \neq 1, \ \alpha > 0,    (2.7)

and

    H_e(A; P; U) = \frac{1}{n(\sqrt{e} - 1)} \frac{\sum_{i=1}^{n} u_i p_i \log\left[\mu_A(x_i) e^{1-\mu_A(x_i)} + (1 - \mu_A(x_i)) e^{\mu_A(x_i)} - 1\right]}{\sum_{i=1}^{n} u_i p_i}    (2.8)

as 'useful' measures of fuzzy information, respectively. It may be noted that when \alpha \to 1, (2.7) reduces to (2.2). Thus we can define a 'useful' fuzzy information measure corresponding to each of the generalized measures of fuzzy entropy.
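The limiting claim can be checked numerically; the sketch below is ours and assumes the normalized form of (2.7) given above, with memberships strictly between 0 and 1:

```python
import math

def useful_renyi_fuzzy_entropy(mu, p, u, alpha):
    """Generalized 'useful' fuzzy entropy (2.7) of order alpha."""
    num = sum(ui * pi * math.log(mi**alpha + (1.0 - mi)**alpha)
              for mi, pi, ui in zip(mu, p, u))
    den = sum(ui * pi for pi, ui in zip(p, u))
    return num / ((1.0 - alpha) * den)

mu, p, u = [0.2, 0.5, 0.7], [0.5, 0.3, 0.2], [1.0, 2.0, 3.0]
# As alpha -> 1 the value approaches the 'useful' fuzzy entropy (2.2).
for alpha in (0.5, 0.9, 0.999):
    print(alpha, useful_renyi_fuzzy_entropy(mu, p, u, alpha))
```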

3 'Useful' Measure of Total Fuzzy Information

The total 'useful' information measure is concerned with the integration of fuzzy and probabilistic uncertainties together with utilities. There have been several attempts to combine probabilistic and fuzzy uncertainties when (X, P) is a discrete probabilistic framework. The entropy given by (1.7) is a measure of the uncertainty associated with a fuzzy event, and was the first composite measure of probabilistic and fuzzy uncertainty. Next, the total information measure of a fuzzy system was introduced and studied by Deluca and Termini (1972); it has been used by some authors as a measure of the uncertainties of fuzziness and randomness of the events in an experiment, as follows:

(a) The uncertainty deduced from the "random" nature of the experiment. The average of this uncertainty is computed by Shannon's entropy H(P), given by

    H(P) = -\sum_{i=1}^{n} p_i \log p_i.    (3.1)

(b) The uncertainty that arises from the fuzziness of the fuzzy set relative to the ordinary set. This amount is given by the fuzzy entropy (1.2).

(c) The statistical average, m, of the ambiguity of the whole set, given by

    m(\mu_A, p_1, p_2, \ldots, p_n) = -\sum_{i=1}^{n} p_i \left[\mu_A(x_i) \log \mu_A(x_i) + (1 - \mu_A(x_i)) \log(1 - \mu_A(x_i))\right].    (3.2)

(d) The total information measure is obtained by adding the two kinds of uncertainties (3.1) and (3.2).

Further, we combine probabilistic and fuzzy uncertainties with utilities when (A; P; U) is a discrete probabilistic framework with utilities and A is a fuzzy set characterized by the membership function \mu_A. If we also consider the importance or usefulness of an event, then the total 'useful' information measure is a measure of the fuzzy uncertainties, the random uncertainties and the utilities of the events, and it is computed as follows:

(i) If we consider the uncertainties with 'usefulness' arising from the "random" nature of an experiment, the average measure of these uncertainties was introduced and characterized by Bhaker and Hooda (1993):

    H(P; U) = -\frac{\sum_{i=1}^{n} u_i p_i \log p_i}{\sum_{i=1}^{n} u_i p_i}.    (3.3)

(ii) If we consider the uncertainty that arises from the fuzziness of the fuzzy set, then we can compute the amount of ambiguity by taking the above proposed 'useful' fuzzy entropy, which averages as suggested by Deluca and Termini (1972):

    H(A; P; U) = -\frac{\sum_{i=1}^{n} u_i p_i H_i(A)}{\sum_{i=1}^{n} u_i p_i},    (3.4)

where H_i(A) = \mu_A(x_i) \log \mu_A(x_i) + (1 - \mu_A(x_i)) \log(1 - \mu_A(x_i)).

(iii) The 'useful' measure of total fuzzy information of a fuzzy set A in a random experiment is then given by the sum of (3.3) and (3.4):

    H_{Total}(A; P; U) = H(P; U) + H(A; P; U).    (3.5)
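A self-contained Python sketch of (3.5) (our own illustration; names are hypothetical) simply adds the randomness part (3.3) and the fuzziness part (3.4), which share the weight sum(u_i p_i):

```python
import math

def useful_total_information(mu, p, u):
    """Total 'useful' fuzzy information (3.5): the 'useful' entropy (3.3)
    plus the 'useful' fuzzy entropy (3.4)."""
    den = sum(ui * pi for pi, ui in zip(p, u))
    random_part = -sum(ui * pi * math.log(pi) for pi, ui in zip(p, u) if pi > 0)

    def h(m):  # Deluca-Termini term, with 0 log 0 = 0
        return m * math.log(m) + (1.0 - m) * math.log(1.0 - m) if 0.0 < m < 1.0 else 0.0

    fuzzy_part = -sum(ui * pi * h(mi) for mi, pi, ui in zip(mu, p, u))
    return (random_part + fuzzy_part) / den

mu, p, u = [0.2, 0.5, 0.7], [0.5, 0.3, 0.2], [1.0, 2.0, 3.0]
print(useful_total_information(mu, p, u))
```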

4 'Useful' Fuzzy Directed Divergence Measures

Let A and B be two standard fuzzy sets with the same supporting points x_1, x_2, \ldots, x_n and with fuzzy vectors (\mu_A(x_1), \mu_A(x_2), \ldots, \mu_A(x_n)) and (\mu_B(x_1), \mu_B(x_2), \ldots, \mu_B(x_n)). The simplest measure of fuzzy directed divergence, as suggested by Bhandari and Pal (1993), is

    I(A, B) = \sum_{i=1}^{n} \left[\mu_A(x_i) \log \frac{\mu_A(x_i)}{\mu_B(x_i)} + (1 - \mu_A(x_i)) \log \frac{1 - \mu_A(x_i)}{1 - \mu_B(x_i)}\right].    (4.1)

Next, incorporating altogether the uncertainties of fuzziness of fuzzy set A from fuzzy set B and the probabilities of randomness having utilities, corresponding to (4.1) we define the following 'useful' measure of fuzzy directed divergence of fuzzy set A from fuzzy set B:

    I(A, B; P; U) = \frac{\sum_{i=1}^{n} u_i p_i \left[\mu_A(x_i) \log \frac{\mu_A(x_i)}{\mu_B(x_i)} + (1 - \mu_A(x_i)) \log \frac{1 - \mu_A(x_i)}{1 - \mu_B(x_i)}\right]}{\sum_{i=1}^{n} u_i p_i},    (4.2)

and the 'useful' measure of fuzzy symmetric divergence as

    J(A, B; P; U) = I(A, B; P; U) + I(B, A; P; U).    (4.3)
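A short Python sketch of (4.2) and (4.3) follows (illustrative only; the names are ours), assuming standard fuzzy sets so that all memberships lie strictly in (0, 1):

```python
import math

def useful_directed_divergence(mu_a, mu_b, p, u):
    """'Useful' fuzzy directed divergence (4.2) of A from B."""
    num = sum(ui * pi * (ma * math.log(ma / mb)
                         + (1.0 - ma) * math.log((1.0 - ma) / (1.0 - mb)))
              for ma, mb, pi, ui in zip(mu_a, mu_b, p, u))
    den = sum(ui * pi for pi, ui in zip(p, u))
    return num / den

def useful_symmetric_divergence(mu_a, mu_b, p, u):
    """'Useful' fuzzy symmetric divergence (4.3)."""
    return (useful_directed_divergence(mu_a, mu_b, p, u)
            + useful_directed_divergence(mu_b, mu_a, p, u))

mu_a, mu_b = [0.2, 0.4, 0.1], [0.3, 0.3, 0.2]
p, u = [0.5, 0.3, 0.2], [1.0, 2.0, 3.0]
print(useful_directed_divergence(mu_a, mu_a, p, u))   # 0.0 when A = B
print(useful_symmetric_divergence(mu_a, mu_b, p, u))  # positive when A != B
```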

Further, we show that I(A, B; P; U) is a valid measure by proving that I(A, B; P; U) \ge 0, with equality if \mu_A(x_i) = \mu_B(x_i) for each i = 1, 2, \ldots, n. Let

    \sum_{i=1}^{n} \mu_A(x_i) = s, \qquad \sum_{i=1}^{n} \mu_B(x_i) = t \qquad \text{and} \qquad \sum_{i=1}^{n} u_i p_i = u.

Then

    \sum_{i=1}^{n} u_i p_i \, \mu_A(x_i) \log \frac{\mu_A(x_i)}{\mu_B(x_i)} \ge u s \log \frac{s}{t}.    (4.4)

Similarly, we can prove that

    \sum_{i=1}^{n} u_i p_i (1 - \mu_A(x_i)) \log \frac{1 - \mu_A(x_i)}{1 - \mu_B(x_i)} \ge u (n - s) \log \frac{n - s}{n - t}.    (4.5)

Adding (4.4) and (4.5), we get

    \sum_{i=1}^{n} u_i p_i \left[\mu_A(x_i) \log \frac{\mu_A(x_i)}{\mu_B(x_i)} + (1 - \mu_A(x_i)) \log \frac{1 - \mu_A(x_i)}{1 - \mu_B(x_i)}\right] \ge u \left[s \log \frac{s}{t} + (n - s) \log \frac{n - s}{n - t}\right].    (4.6)

Let f(s) = s \log \frac{s}{t} + (n - s) \log \frac{n - s}{n - t}. Then

    f'(s) = \log \frac{s}{t} - \log \frac{n - s}{n - t} \qquad \text{and} \qquad f''(s) = \frac{1}{s} + \frac{1}{n - s} = \frac{n}{s(n - s)} > 0.

Thus f''(s) > 0, which shows that f(s) is a convex function of s; it attains its minimum value when \frac{s}{t} = \frac{n - s}{n - t} = \frac{n}{n} = 1, i.e., when s = t, and then f(s) = 0. Hence f(s) \ge 0 and vanishes only when s = t. Since \sum_{i=1}^{n} u_i p_i = u > 0, therefore I(A, B; P; U) \ge 0 and vanishes only when A = B. Thus I(A, B; P; U) is a valid measure of 'useful' fuzzy directed divergence of fuzzy sets A and B; consequently, J(A, B; P; U) is a valid 'useful' measure of symmetric divergence.

It may be noted that if B = A_F, the most fuzzy set, i.e., \mu_B(x_i) = 0.5 ∀x_i, then I(A, A_F; P; U) = \log 2 - H(A; P; U).

The measure (4.2) can further be generalized parametrically. Corresponding to the following measure of fuzzy directed divergence defined by Bajaj and Hooda (2008),

    I_\alpha(A, B) = \frac{1}{\alpha - 1} \sum_{i=1}^{n} \log\left[\mu_A^\alpha(x_i) \mu_B^{1-\alpha}(x_i) + (1 - \mu_A(x_i))^\alpha (1 - \mu_B(x_i))^{1-\alpha}\right],    (4.7)

we suggest the following 'useful' measure of fuzzy directed divergence of fuzzy set A from fuzzy set B:

    I_\alpha(A, B; P; U) = \frac{1}{\alpha - 1} \frac{\sum_{i=1}^{n} u_i p_i \log\left[\mu_A^\alpha(x_i) \mu_B^{1-\alpha}(x_i) + (1 - \mu_A(x_i))^\alpha (1 - \mu_B(x_i))^{1-\alpha}\right]}{\sum_{i=1}^{n} u_i p_i},    (4.8)

and the 'useful' measure of fuzzy symmetric divergence as

    J_\alpha(A, B; P; U) = I_\alpha(A, B; P; U) + I_\alpha(B, A; P; U).    (4.9)

Further, it can be proved that I_\alpha(A, B; P; U) is a valid measure by showing that I_\alpha(A, B; P; U) \ge 0, with equality if \mu_A(x_i) = \mu_B(x_i) for each i = 1, 2, \ldots, n. Note that if B = A_F, the most fuzzy set, i.e., \mu_B(x_i) = 0.5 ∀x_i, then it can be shown that I_\alpha(A, A_F; P; U) = \log 2 - H_\alpha(A; P; U).
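A sketch of (4.8) in the same style as before (ours; it assumes memberships strictly between 0 and 1):

```python
import math

def useful_renyi_divergence(mu_a, mu_b, p, u, alpha):
    """Generalized 'useful' fuzzy directed divergence (4.8) of order alpha."""
    num = sum(ui * pi * math.log(ma**alpha * mb**(1.0 - alpha)
                                 + (1.0 - ma)**alpha * (1.0 - mb)**(1.0 - alpha))
              for ma, mb, pi, ui in zip(mu_a, mu_b, p, u))
    den = sum(ui * pi for pi, ui in zip(p, u))
    return num / ((alpha - 1.0) * den)

mu_a, p, u = [0.2, 0.4, 0.1], [0.5, 0.3, 0.2], [1.0, 2.0, 3.0]
a_f = [0.5, 0.5, 0.5]   # the most fuzzy set A_F
# Against A_F this equals log 2 - H_alpha(A; P; U), cf. the remark above.
print(useful_renyi_divergence(mu_a, a_f, p, u, alpha=2.0))
```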

5 Constrained Optimization of 'useful' Fuzzy Entropy

Consider the 'useful' fuzzy entropy given by (2.2) subject to the constraint \sum_{i=1}^{n} \mu_A(x_i) = \alpha. Using Lagrange's multiplier method, we have

    u_i p_i \log \frac{\mu_A(x_i)}{1 - \mu_A(x_i)} = m    (5.1)

    ⇒ \mu_A(x_i) = \frac{e^{m/u_i p_i}}{1 + e^{m/u_i p_i}} = \frac{1}{1 + e^{-m/u_i p_i}},    (5.2)

where m is determined by

    \varphi(m) = \sum_{i=1}^{n} \frac{e^{m/u_i p_i}}{1 + e^{m/u_i p_i}} - \alpha = 0    (5.3)

and

    \varphi'(m) = \sum_{i=1}^{n} \frac{1}{u_i p_i} \frac{e^{m/u_i p_i}}{\left(1 + e^{m/u_i p_i}\right)^2} \ge 0.    (5.4)

Note that

    \varphi(-\infty) = -\alpha, \qquad \varphi(0) = \frac{n}{2} - \alpha, \qquad \varphi(\infty) = n - \alpha.    (5.5)

Using (5.5), we see that equation (5.3) has a unique root, which is negative if \alpha < \frac{n}{2} and positive if \alpha > \frac{n}{2}. Observe that

• if m is negative, i.e., if \alpha < \frac{n}{2}, then \mu_A(x_i) \le \frac{1}{2} ∀i;

• if m is positive, i.e., if \alpha > \frac{n}{2}, then \mu_A(x_i) \ge \frac{1}{2} ∀i;

• if m = 0, i.e., if \alpha = \frac{n}{2}, then \mu_A(x_i) = \frac{1}{2} ∀i.

Thus for maximization, the \mu_A(x_i)'s are either all less than or equal to \frac{1}{2} or all greater than or equal to \frac{1}{2}.

Now if \mu_A(x_i) is replaced by 1 - \mu_A(x_i), the fuzziness of the i-th element does not change, but its contribution to the sum \sum_{i=1}^{n} \mu_A(x_i) changes. As such it is better to take the fuzzy set in standard form, i.e., \mu_A(x_i) \le \frac{1}{2} ∀i.

Now, consider (2.2) where \mu_A(x_i) is given by (5.2). Differentiating (2.2) with respect to \mu_A(x_i), we get

    \frac{\partial H}{\partial \mu_A(x_i)} = -\frac{u_i p_i}{\sum_{j=1}^{n} u_j p_j} \log \frac{\mu_A(x_i)}{1 - \mu_A(x_i)} > 0, \qquad \text{since } \mu_A(x_i) \le \frac{1}{2}.    (5.6)

Differentiating (5.2) with respect to m, we get

    \frac{d\mu_A(x_i)}{dm} = \frac{1}{u_i p_i} \frac{e^{m/u_i p_i}}{\left(1 + e^{m/u_i p_i}\right)^2} > 0.    (5.7)

Differentiating (5.3) with respect to m, we get

    \frac{d\alpha}{dm} = \sum_{i=1}^{n} \frac{1}{u_i p_i} \frac{e^{m/u_i p_i}}{\left(1 + e^{m/u_i p_i}\right)^2} > 0.    (5.8)

Combining (5.6), (5.7) and (5.8), we conclude that H(A; P; U) is an increasing function of \alpha. Hence H(A; P; U) is maximum when \alpha = \frac{n}{2}, and then m = 0 and \mu_A(x_i) = \frac{1}{2} ∀i. Thus H_{max}(A; P; U) = \log 2, and H_{max} increases from 0 to \log 2 as \alpha increases from 0 to \frac{n}{2}.

The minimum value of the entropy will occur when as many of the \mu_A(x_i)'s as possible are 0 or 1, subject of course to the constraint being satisfied. Without loss of generality, we assume that u_1 p_1 \le u_2 p_2 \le \ldots \le u_n p_n.

• When \alpha = 0, all the \mu_A(x_i)'s have to be zero, which gives H_{min}(A; P; U) = 0.

• When \alpha = \frac{1}{2}, one of the \mu_A(x_i)'s can be \frac{1}{2} and the rest have to be 0, so that H_{min}(A; P; U) = \frac{u_1 p_1 \log 2}{\sum_{i=1}^{n} u_i p_i}.

• Similarly, when \alpha = 1, two of the \mu_A(x_i)'s can be \frac{1}{2} and the rest have to be 0, so that H_{min}(A; P; U) = \frac{(u_1 p_1 + u_2 p_2) \log 2}{\sum_{i=1}^{n} u_i p_i}.

• When \alpha = \frac{n}{2}, H_{min}(A; P; U) = \log 2.

• When \alpha lies between \frac{m-1}{2} and \frac{m}{2}, m - 1 of the \mu_A(x_i)'s can be \frac{1}{2}, one can be a fraction \eta, and the rest can be 0, so that H_{min}(A; P; U) = \frac{(u_1 p_1 + u_2 p_2 + \ldots + u_{m-1} p_{m-1}) \log 2 - u_m p_m (\eta \log \eta + (1 - \eta) \log(1 - \eta))}{\sum_{i=1}^{n} u_i p_i}.

Thus as \alpha varies from \frac{m-1}{2} to \frac{m}{2}, H_{min} varies from \frac{(u_1 p_1 + u_2 p_2 + \ldots + u_{m-1} p_{m-1}) \log 2}{\sum_{i=1}^{n} u_i p_i} to \frac{(u_1 p_1 + u_2 p_2 + \ldots + u_m p_m) \log 2}{\sum_{i=1}^{n} u_i p_i}. Therefore, H_{max} increases from 0 to \log 2 continuously, while H_{min} also increases from 0 to the same value, but in a piecewise continuous manner.
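In practice, since \varphi in (5.3) is nondecreasing by (5.4), the multiplier m can be found numerically by simple bisection. The sketch below is our own illustration of this, not part of the original derivation:

```python
import math

def phi(m, p, u, alpha):
    """phi(m) from (5.3): sum of the logistic memberships (5.2) minus alpha."""
    return sum(1.0 / (1.0 + math.exp(-m / (ui * pi)))
               for pi, ui in zip(p, u)) - alpha

def solve_multiplier(p, u, alpha, lo=-50.0, hi=50.0, iters=100):
    """Bisection for the unique root of phi(m) = 0 (phi is nondecreasing)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if phi(mid, p, u, alpha) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p, u, alpha = [0.5, 0.3, 0.2], [1.0, 2.0, 3.0], 1.2
m = solve_multiplier(p, u, alpha)
mu = [1.0 / (1.0 + math.exp(-m / (ui * pi))) for pi, ui in zip(p, u)]
print(m, mu, sum(mu))   # sum(mu) ~ alpha; here m < 0 since alpha < n/2
```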

6 Constrained Optimization of 'useful' Fuzzy Directed Divergence

The maximum value of the 'useful' fuzzy directed divergence is obtained when as many as possible of the \mu_A(x_i)'s are 0 or 1, provided u_1 p_1 \le u_2 p_2 \le \ldots \le u_n p_n.

• When \alpha = 0, the maximum value is -\sum_{i=1}^{n} u_i p_i \log(1 - \mu_B(x_i)).

• When \alpha = \frac{1}{2}, the maximum value is u_n p_n \log \frac{1 - \mu_B(x_n)}{\mu_B(x_n)} - \sum_{i=1}^{n} u_i p_i \log(1 - \mu_B(x_i)).

• Similarly, when \alpha = 1, the maximum value is given by u_n p_n \log \frac{1 - \mu_B(x_n)}{\mu_B(x_n)} + u_{n-1} p_{n-1} \log \frac{1 - \mu_B(x_{n-1})}{\mu_B(x_{n-1})} - \sum_{i=1}^{n} u_i p_i \log(1 - \mu_B(x_i)).

• Finally, when \alpha = \frac{n}{2}, the maximum value is -\sum_{i=1}^{n} u_i p_i \log \mu_B(x_i).

Note that the maximum value of the 'useful' fuzzy directed divergence measure is a piecewise continuous function which increases from -\sum_{i=1}^{n} u_i p_i \log(1 - \mu_B(x_i)) to -\sum_{i=1}^{n} u_i p_i \log \mu_B(x_i).

Next, for the minimization of the 'useful' fuzzy directed divergence measure of a standard fuzzy set A from a standard fuzzy set B, consider (4.2) subject to \sum_{i=1}^{n} \mu_A(x_i) = \alpha \le \frac{n}{2}. Again using the method of Lagrange's multiplier, we get

    u_i p_i \log \frac{\mu_A(x_i)(1 - \mu_B(x_i))}{(1 - \mu_A(x_i)) \mu_B(x_i)} = m    (6.1)

    ⇒ \frac{\mu_A(x_i)}{1 - \mu_A(x_i)} = e^{m/u_i p_i} \frac{\mu_B(x_i)}{1 - \mu_B(x_i)} = e^{m/u_i p_i} \beta_i,    (6.2)

where m is determined by

    \psi(m) = \sum_{i=1}^{n} \frac{e^{m/u_i p_i} \beta_i}{1 + e^{m/u_i p_i} \beta_i} - \alpha = 0    (6.3)

and

    \psi'(m) = \sum_{i=1}^{n} \frac{\beta_i}{u_i p_i} \frac{e^{m/u_i p_i}}{\left(1 + e^{m/u_i p_i} \beta_i\right)^2} \ge 0.    (6.4)

Note that

    \psi(-\infty) = -\alpha, \qquad \psi(0) = \sum_{i=1}^{n} \mu_B(x_i) - \sum_{i=1}^{n} \mu_A(x_i), \qquad \psi(\infty) = n - \alpha.    (6.5)

Now using (6.5), we see that equation (6.3) has a unique root, which is

• positive if \sum_{i=1}^{n} \mu_B(x_i) < \sum_{i=1}^{n} \mu_A(x_i),

• zero if \sum_{i=1}^{n} \mu_B(x_i) = \sum_{i=1}^{n} \mu_A(x_i),

• negative if \sum_{i=1}^{n} \mu_B(x_i) > \sum_{i=1}^{n} \mu_A(x_i).

If m \ge 0 then \mu_A(x_i) \ge \mu_B(x_i) for every i, and if m \le 0 then \mu_A(x_i) \le \mu_B(x_i) for every i, so the minimizing values never straddle the given ones. Thus either in every case \mu_A(x_i) < \mu_B(x_i), or in every case \mu_A(x_i) > \mu_B(x_i), or in every case \mu_A(x_i) = \mu_B(x_i).

References

Zadeh, L.A., 1965. Fuzzy sets. Information and Control, 8, 338-353.

Ebanks, B.R., 1983. On measures of fuzziness and their representations. Journal of Mathematical Analysis and Applications, 94, 24-37.

Pal, N.R. and Bezdek, J.C., 1994. Measuring fuzzy uncertainty. IEEE Transactions on Fuzzy Systems, 2, 107-118.

Shannon, C.E., 1948. The mathematical theory of communication. Bell System Technical Journal, 27, 423-467.

De Luca, A. and Termini, S., 1972. A definition of non-probabilistic entropy in the setting of fuzzy set theory. Information and Control, 20, 301-312.

Bhandari, D. and Pal, N.R., 1993. Some new information measures for fuzzy sets. Information Sciences, 67, 204-228.

Renyi, A., 1961. On measures of entropy and information. In: Proc. 4th Berkeley Symp. Math. Stat. Probab., 1, 547-561.

Pal, N.R. and Pal, S.K., 1989. Object background segmentation using new definition of entropy. Proc. IEEE, 136, 284-295.

Bhaker, U.S. and Hooda, D.S., 1993. Mean value characterization of useful information measures. Tamkang Journal of Mathematics, 24, 383-394.

Zadeh, L.A., 1968. Probability measures of fuzzy events. Journal of Mathematical Analysis and Applications, 23, 421-427.

Kapur, J.N., 1997. Measures of Fuzzy Information. Math. Sci. Trust Soc., New Delhi.

Bajaj, R.K. and Hooda, D.S., 2008. Generalized measures of fuzzy directed divergence, total ambiguity and information improvement. Austrian Journal of Statistics (communicated).

Belis, M. and Guiasu, S., 1968. A quantitative-qualitative measure of information in cybernetic systems. IEEE Transactions on Information Theory, IT-14, 593-594.