White Noise Solution for Nonlinear Stochastic Systems

In memory of A.V. Balakrishnan

Filippo Cacace *, Francesco Conte **, Alfredo Germani ***, Giovanni Palombo ***

* Università Campus Bio-Medico di Roma, Via Álvaro del Portillo 21, 00128 Roma, Italy (e-mail: [email protected]).
** Dipartimento di Ingegneria Navale, Elettrica, Elettronica e delle Telecomunicazioni, Università degli Studi di Genova, Via all'Opera Pia 11A, 16145 Genova, Italy (e-mail: [email protected]).
*** Dipartimento di Ingegneria e Scienze dell'Informazione e Matematica, Università degli Studi dell'Aquila, Via Vetoio (Coppito 1), 67100 Coppito (AQ), Italy (e-mail: {alfredo.germani, giovanni.palombo}@univaq.it).

Abstract: This paper proposes an alternative theory to the Ito calculus due to Balakrishnan: the white noise theory in Hilbert spaces. The proposed approach extends Balakrishnan's theory to a new class of nonlinear systems. The method uses tools of differential geometry to devise a suitable map which transforms the original system into an equivalent one; the techniques of white noise theory are then applied to this equivalent system. Finally, by means of the inverse map, the existence of a white noise solution for the original system is proved.

Keywords: White Noise Theory, Nonlinear Systems, Nonlinear Stochastic Modeling

1. INTRODUCTION

In the stochastic modeling of real physical systems a fundamental problem is the definition of a mathematical model of the noise. In the standard Ito calculus the Wiener process is employed to this aim. A Wiener process is, roughly speaking, the integral of a white Gaussian noise process, whose precise definition is generally acknowledged to be problematic. The basic motivation of the white noise theory developed by Balakrishnan [1]-[8] is to provide a consistent alternative to the usual Ito calculus that leads to solutions potentially more suited to engineering applications. A theoretical justification of this attempt lies in the following property of Wiener spaces: integrals of paths of bounded variation and finite energy belong to a set of Wiener measure zero. Unfortunately, physical signals are exactly functions of bounded variation and finite energy. Thus results obtained via the Ito theory, which hold on a set of Wiener measure 1, are not necessarily the most adequate when applied to real data. This is due to the fact that countably additive measures introduce many "non-physical" events while others are dismissed as "trivial" (zero-probability), and this is why white noise theory makes use of finitely additive measures (see [4], [9], [7], [13]). Another advantage of this theory is a practical one: with this alternative theory one can solve stochastic differential equations using the standard calculus for ordinary differential equations, so there is no need to define a new calculus and to use the correction term. White noise theory has been successfully applied to the study of linear systems, where the solution obtained with

the "Ito approach" and the one obtained with the "white noise approach" coincide (see [1]), meaning that the two approaches yield the same random variable (that is, the same distribution). However, differences appear as soon as nonlinear functions are considered, and here the theory is far from complete. Some results are available in [1], where white noise solutions for a class of distributed bilinear systems are studied. In [11] similar results are available for a class of perturbed linear distributed systems. In [12] a polynomial approximation for a class of physical random variables is studied.

In this paper we consider the problem of the existence of the white noise solution for a wide class of nonlinear systems. That is, we want to demonstrate that the solutions of a certain class of nonlinear stochastic differential equations are random variables (in particular, physical random variables) that can be handled through the usual calculus. The novelty of our approach lies in the employment of white noise theory jointly with differential geometry. The central contribution of the paper is the introduction of a map that transforms the system into a more tractable form. This makes it feasible to prove the existence of the white noise solution for the transformed system. Since the transformation is a global diffeomorphism, the existence of a white noise solution for the original system follows.

The paper is organized as follows. In Section 2 we recall some basic concepts of white noise theory in Hilbert spaces. In Section 3 the problem is presented together with some preliminary results. In Section 4 we demonstrate the existence of a white noise solution for nonlinear systems, under suitable hypotheses. In Section 5 we show an application to systems with scalar noise, which leads to a relaxation of the hypotheses. Conclusions follow in Section 6.

2. WHITE NOISE THEORY: PRELIMINARY NOTIONS

In the following, a brief introduction to white noise theory is given, together with some practical motivation. For a complete description see [1], [2], [7], [8], [10] and [13].

First of all, we define a cylindrical probability triple, composed of a sample space, an algebra and a measure. The sample space is a Hilbert space H. Not every Hilbert space is suitable as a model of physical signals. Indeed, any physical signal taking values in a separable Hilbert space H' and observed on a finite time interval [0, T] is expected to have bounded variation and finite energy. Thus a natural space to model this is H = L²([0, T]; H'), the space of square-integrable functions from [0, T] into H'.

Now we have to find an algebra and a measure. Let us introduce the concept of cylinder set.

Definition 1. Let H be a real separable Hilbert space and let P be the class of all finite-dimensional projections on H. For any P ∈ P we call the set C = {x | x ∈ P⁻¹(B)}, with B a Borel set in the range of P, a cylinder set with base B. For a fixed P we denote the collection of all such sets by C_P, which is a σ-algebra. The collection C = {C ∈ C_P | P ∈ P} of all cylinder sets is an algebra.

Definition 2. A finitely additive measure µ on (H, C) is called a cylindrical measure if it is countably additive on each σ-algebra C_P. If µ(H) = 1 we call (H, C, µ) a cylindrical probability space.

For any finite number n of independent elements {h_i} of H, consider the map φ : H → R^n defined by φ(h) = ([h, h₁], ..., [h, h_n]), i.e. the projection of h onto R^n by means of the inner products with h₁, ..., h_n. The mapping induces a countably additive probability measure ν on R^n defined by ν(B') = µ({h ∈ H | φ(h) ∈ B'}), B' a Borel set in R^n. Obviously ν is countably additive by the definition of cylindrical measure. Moreover, for the case n = 1, we can define the characteristic functional corresponding to a cylinder measure as

  C_\mu(h_0) = \int_H e^{i[h,h_0]}\, d\mu(h) = \int_{-\infty}^{+\infty} e^{iy}\, d\nu_{h_0}(y),

where ν_{h_0} is the measure on R defined by ν_{h_0}((−∞, y]) = µ({h ∈ H | [h, h_0] ≤ y}).

Let B denote the Borel σ-algebra of H; B is also the smallest σ-algebra containing C, the collection of all cylinder sets ([2]). The Gaussian cylindrical measure, defined below, is of particular interest.

Definition 3. The cylindrical measure µ_G is called the Gaussian measure if the corresponding characteristic functional has the form

  C_{\mu_G}(h) = e^{-\frac{1}{2}\|h\|^2}.
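As a worked special case (an illustration consistent with Definition 3, not stated in the original), let h_1, ..., h_n be orthonormal in H. The joint characteristic function of the coordinates [h, h_1], ..., [h, h_n] under µ_G is obtained by evaluating the characteristic functional at x = \sum_k a_k h_k:

  C_{\mu_G}\Big(\sum_{k=1}^{n} a_k h_k\Big) = \exp\Big(-\tfrac{1}{2}\Big\|\sum_{k=1}^{n} a_k h_k\Big\|^2\Big) = \exp\Big(-\tfrac{1}{2}\sum_{k=1}^{n} a_k^2\Big),

so the induced measure ν on R^n is the standard Gaussian; equivalently, the coordinates along any orthonormal system are i.i.d. N(0, 1) under the cylinder Gauss measure. This coordinate description is the one used later for the white noise model.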

Remark 4. One of the greatest limitations of this approach is that, in general, a cylindrical measure cannot be extended to a countably additive measure on B. In particular, µ_G suffers from this limitation in most cases.

Having defined the probability space (H, C, µ), we can introduce the concept of random variable. Let f : H → H' be a Borel measurable map. This is not enough to qualify f as a random variable, since inverse images of Borel sets are not necessarily in C, i.e. possibly f⁻¹(B') ∉ C for B' a Borel set in H'. Let us introduce a first class of functions which are random variables.

Definition 5. Let f : H → H' be a Borel measurable map and let P ∈ P denote a finite-dimensional projection operator on H. A function of the form f ∘ P is called a tame function.

A tame function is always a random variable; indeed, inverse images of tame functions can always be assigned probabilities. Let us now introduce the space L(H, C, µ; H') as the class of Borel measurable functions f : H → H' such that for all ε > 0, δ > 0 there exists P₀ ∈ P such that, for any P₁, P₂ ∈ P with P₀ ≤ P_i, i = 1, 2 (P₀ ≤ P_i if the range of P₀ is contained in the range of P_i),

  \mu(\{h \in H : |f \circ P_1(h) - f \circ P_2(h)| > \delta\}) < \varepsilon.

We call an element of L(H, C, µ; H') an H'-valued µ-random variable.

Definition 6. A mapping f : (H, C, µ) → (H', C') is called a µ-weak random variable if, for all h' ∈ H', [h', f] ∈ L(H, C, µ; R).

Remark 7. Note that the definitions of random variable and weak random variable depend on the measure defined on H, contrary to the classical definition of random variable, which is independent of the measure.

The class of random variables introduced so far is rather poor. We extend this class by considering Cauchy sequences of functions in this class.

Definition 8. Let f : H → H' be a Borel measurable map and let {P_n} ⊂ P be any sequence of finite-dimensional projections converging strongly to the identity. f(·) is a physical random variable (PRV) if
(1) the sequence of tame functions {f(P_n h)} is Cauchy in probability for every sequence {P_n} ⊂ P;
(2) the sequence {ν_n} of probability measures induced by f ∘ P_n and defined as ν_n = µ ∘ (f ∘ P_n)⁻¹ converges strongly to the same probability measure ν on H' for every sequence {P_n} ⊂ P.

Condition (2) is equivalent to saying that the limit

  C(h') = \lim_{n\to\infty} \int_H e^{i[f \circ P_n h,\, h']}\, d\mu(h)

is independent of the sequence {P_n} chosen. If f is a PRV, µ can always be extended to events of the form f⁻¹(B'), B' a Borel set in H', by

  \mu(f^{-1}(B')) = \lim_{n\to\infty} \mu(\{h \in H \mid f \circ P_n(h) \in B'\}),

where the limit always exists by definition. The class of H'-valued physical random variables is denoted by L⁰(H, C, µ; H').
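To make Definition 8 concrete, the following numerical sketch (not part of the original paper; the basis, grid and sample sizes are arbitrary illustrative choices) approximates the functional f(ω) = ∫₀ᵀ (∫₀ᵗ ω(s) ds)² dt by tame functions f ∘ P_n, where P_n projects onto the first n elements of an orthonormal cosine basis of L²([0, T]; R). Under the cylinder Gauss measure the coordinates [ω, e_k] are i.i.d. N(0, 1), so the induced measures ν_n can be sampled directly and observed to stabilise as n grows.

```python
# Illustrative sketch (not from the paper): tame approximations f(P_n w) of
# f(w) = int_0^T ( int_0^t w(s) ds )^2 dt under the cylinder Gauss measure.
# Assumptions: orthonormal cosine basis of L2([0,T]), simple grid quadrature.
import numpy as np

T, grid, samples = 1.0, 500, 4000
t = np.linspace(0.0, T, grid)
dt = t[1] - t[0]
rng = np.random.default_rng(0)

def basis(k):
    # Orthonormal cosine basis: e_0 = 1/sqrt(T), e_k = sqrt(2/T) cos(k pi t / T).
    return (np.full_like(t, 1.0 / np.sqrt(T)) if k == 0
            else np.sqrt(2.0 / T) * np.cos(k * np.pi * t / T))

def f_tame(n):
    # Sample f(P_n w): the coordinates [w, e_k] are i.i.d. N(0,1) under mu_G.
    c = rng.standard_normal((samples, n))
    E = np.stack([basis(k) for k in range(n)])      # n x grid
    w = c @ E                                       # realisations of P_n w
    W = np.cumsum(w, axis=1) * dt                   # (L w)(t) = int_0^t w(s) ds
    return (W ** 2).sum(axis=1) * dt                # g(y) = ||y||^2 in L2

for n in (4, 16, 64, 256):
    v = f_tame(n)
    print(f"n = {n:3d}   empirical mean = {v.mean():.3f}   std = {v.std():.3f}")
# The empirical law of f(P_n w) stabilises as n grows (the mean tends to T^2/2),
# which is the convergence of the induced measures nu_n required in Definition 8.
```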

To conclude our description we introduce a model of the Gaussian noise. It is natural to suppose that the Gaussian noise has a bandwidth much larger than that of the signal. It follows that a suitable representation is the identity map on H endowed with the Gaussian measure µ_G with identity covariance operator. This is called Gaussian white noise or, simply, white noise.

We can state this equivalently using a "function space" definition of the white noise process. Consider a new separable Hilbert space H'' and define W = L²([0, T]; H''). Any element of W will be a white noise sample function, or sample point; we shall use the generic notation ω to denote a sample point. Each ω is an element of W, with corresponding function ω(t), 0 < t < T, defined a.e. in t as an element of H''; for each element h ∈ H'', [ω(t), h] is a Lebesgue measurable function of t, square integrable on [0, T]. As with any Lebesgue measurable function, we cannot talk about its value at a fixed t for arbitrary ω. Using this definition, and recalling the definition of the cylinder Gauss measure, it is clear that for any h ∈ W, [ω, h] is Gaussian with zero mean and variance [h, h]; furthermore, for any two elements g, h ∈ W, the two random variables [ω, h], [ω, g] are jointly Gaussian with covariance [h, g].

It is now necessary to characterize a PRV. To achieve this target the following continuity concept is needed.

Definition 9. Let H, H' be real separable Hilbert spaces. A map F : H → H' is continuous at x ∈ H with respect to the S-topology if, for any ε > 0, there exists a Hilbert-Schmidt operator L(x) : H → H such that

  \|L(x)(x - x')\| < 1     (1)

implies

  \|F(x) - F(x')\| < \varepsilon.     (2)

F is uniformly S-continuous on U ⊂ H when the Hilbert-Schmidt operator in (1) does not depend on x ∈ U.

A weaker notion is given in the following definition.

Definition 10. A map F : H → H' is said to be uniformly S-continuous around the origin (USCAO) if F is uniformly S-continuous on the sets

  U_n = \{x \in H : \|L_n x\| \le 1\},

where {L_n}_{n=1}^∞ is a sequence of Hilbert-Schmidt operators such that

  \|L_n\|_{H.S.} \to 0 \quad \text{and} \quad \bigcup_{n=1}^{\infty} U_n = H.

Obviously a uniformly S-continuous map is also USCAO (take L_n = (1/n)L). Now we have all the tools to give a criterion to characterize a PRV.

Theorem 1. A sufficient condition for a map F : H → H' to be a PRV is that it is USCAO.

One useful characterization of USCAO maps has been given in [12].

Theorem 2. A map F : H → H' is USCAO if and only if there exist a Hilbert-Schmidt operator L : H → H and a continuous map g : H → H' such that

  F = g \circ L.     (3)
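As an illustration of Theorem 2 (a standard example, not taken from the paper), consider the Volterra integration operator on H = L²([0, T]; R),

  (Lh)(t) = \int_0^t h(s)\, ds = \int_0^T k(t,s)\, h(s)\, ds, \qquad k(t,s) = \mathbf{1}_{\{s \le t\}},

which is Hilbert-Schmidt since \int_0^T \int_0^T k(t,s)^2\, ds\, dt = T^2/2 < \infty. Hence any map of the form F(h) = g(Lh), with g continuous, is USCAO and therefore, by Theorem 1, a PRV. This is the structure exploited in Section 4, where the solution map is factored through an integration operator of this type.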

3. PROBLEM FORMULATION AND PRELIMINARIES

Consider a nonlinear system of the form

  \dot x(t) = f(x(t)) + g(x(t))\,\omega(t),     (4)

where x(t) ∈ R^n, ω(t) ∈ R^p, 0 ≤ t ≤ T, and f and g are bounded, uniformly Lipschitz functions. We will use the notation W^t_x = L²([0, t]; R^n), W^t_ω = L²([0, t]; R^p). The problem is to show that eq. (4) admits a solution x(ω) ∈ W^T_x for every ω ∈ W^T_ω such that the map x : (W^T_ω, C, µ_G) → W^T_x is a PRV, that is, that there exists a white noise solution to eq. (4).

The novelty of our approach lies in the employment of the concepts of white noise theory described above, jointly with some tools of differential geometry. The aim is to find a map that transforms (4) into a form that allows us to apply the theorems of white noise theory.

3.1 Notions of differential geometry

If y ∈ R^m and z ∈ R^q, then dy/dz ∈ R^{m×q}. Consider the representation of a smooth vector field f as a smooth mapping assigning the n-dimensional vector f(x) to each point x. Hence a matrix-valued function h(x) : R^n → R^{n×p} can be represented as the collection of its p columns, which are n-dimensional vector fields: h(x) = (h₁(x), ..., h_p(x)). Some basic definitions follow. Given d smooth vector fields f₁, ..., f_d, we can define the following space.

Definition 11. ∆(x) = span{f₁(x), ..., f_d(x)} is called a smooth distribution.

We shall omit the x dependence in the following.

Definition 12. A distribution ∆ is said to be nonsingular if dim(∆(x)) = d for all x.

Definition 13. A distribution ∆ is said to be involutive if the Lie bracket of any pair of vector fields belonging to ∆ belongs to ∆:

  t_1, t_2 \in \Delta \;\Rightarrow\; [t_1, t_2] \in \Delta.     (5)

Checking whether or not a nonsingular distribution is involutive amounts to checking that rank(f₁, ..., f_d) = rank(f₁, ..., f_d, [f_i, f_j]) for all x and all 1 ≤ i, j ≤ d.

Definition 14. A nonsingular d-dimensional distribution ∆ = span{f₁, ..., f_d} is said to be completely integrable if there exist n − d real-valued smooth functions λ₁, ..., λ_{n−d} such that (∂λ_i/∂x)[f₁(x), ..., f_d(x)] = 0 and the vectors ∂λ₁/∂x, ..., ∂λ_{n−d}/∂x are independent exact differentials.

For the sake of completeness we recall the well-known Frobenius Theorem.

Theorem 3. (Frobenius Theorem). A nonsingular distribution is completely integrable if and only if it is involutive.
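The rank test below Definition 13, and the complete integrability it guarantees through Theorem 3, can be checked symbolically. The following sketch (not part of the paper; the two vector fields and the function λ are an arbitrary toy example, assuming SymPy is available) computes a Lie bracket, compares the two ranks, and exhibits a function λ annihilated by both fields, as used in the hypotheses of Theorem 4 below.

```python
# Illustrative check (not from the paper) of the involutivity test
# rank(g1, g2) = rank(g1, g2, [g1, g2]) for a toy distribution in R^3.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x  = sp.Matrix([x1, x2, x3])
g1 = sp.Matrix([1, 0, x1])                 # example vector fields spanning G
g2 = sp.Matrix([0, 1, 0])

def lie_bracket(f, g):
    # [f, g](x) = (dg/dx) f(x) - (df/dx) g(x)
    return g.jacobian(x) * f - f.jacobian(x) * g

br = sp.simplify(lie_bracket(g1, g2))
print(br.T)                                # -> zero bracket: G is involutive
print(sp.Matrix.hstack(g1, g2).rank(),     # rank of (g1, g2) ...
      sp.Matrix.hstack(g1, g2, br).rank()) # ... equals rank of (g1, g2, [g1, g2])

lam = x3 - x1**2 / 2                       # a function with (dlam/dx) g_i = 0
print(sp.simplify(sp.Matrix([lam]).jacobian(x) * g1),
      sp.simplify(sp.Matrix([lam]).jacobian(x) * g2))
```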

3.2 Preliminary Results

Theorem 4. Consider f(x) and g(x) as in (4). If the distribution generated by g(x), G = span{g₁, ..., g_d}, is involutive and nonsingular of dimension dim(G) = d̄, and if there exist d̄ scalar functions γ₁, ..., γ_{d̄} such that the (d̄ × d) matrix

  \begin{pmatrix}
  \frac{d\gamma_1}{dx} g_1 & \frac{d\gamma_1}{dx} g_2 & \cdots & \frac{d\gamma_1}{dx} g_d \\
  \frac{d\gamma_2}{dx} g_1 & \frac{d\gamma_2}{dx} g_2 & \cdots & \frac{d\gamma_2}{dx} g_d \\
  \vdots & \vdots & \ddots & \vdots \\
  \frac{d\gamma_{\bar d}}{dx} g_1 & \frac{d\gamma_{\bar d}}{dx} g_2 & \cdots & \frac{d\gamma_{\bar d}}{dx} g_d
  \end{pmatrix}     (6)

is constant and has full rank, then there exists

  \Phi(x) = \begin{pmatrix} \lambda_1 \\ \vdots \\ \lambda_{n-\bar d} \\ \gamma_1 \\ \vdots \\ \gamma_{\bar d} \end{pmatrix}     (7)

which is a diffeomorphism. Moreover, Φ(x) has a nonsingular Jacobian matrix J_Φ(x) and the product J_Φ(x) g(x) is constant.

Proof. By Theorem 3, G is a completely integrable distribution, i.e. there exist n − d̄ scalar functions λ₁, ..., λ_{n−d̄} such that (dλ_i/dx)[g₁(x), ..., g_d(x)] = 0. Moreover, since there exist d̄ scalar functions γ₁, ..., γ_{d̄} such that (6) is constant and has full rank, it is clear that, defining Φ(x) as in (7), the Jacobian matrix J_Φ(x) is nonsingular. Indeed, the first n − d̄ rows dλ₁/dx, dλ₂/dx, ..., dλ_{n−d̄}/dx are independent by the Frobenius Theorem. The last d̄ rows dγ₁/dx, dγ₂/dx, ..., dγ_{d̄}/dx are independent too, since (6) has full rank. Moreover, dλ_i/dx is orthogonal to dγ_j/dx for each i = 1, ..., n − d̄, j = 1, ..., d̄, by construction. In addition, J_Φ(x) g(x) is constant since (dλ_i/dx)[g₁(x), ..., g_d(x)] = 0, which is the "upper part" of the product J_Φ(x) g(x), and (6) is constant, which is the "lower part" of the product. □

Germani and Sen established several powerful results on PRVs in [11]. We report two such results here in our notation.

Theorem 5. Let F be a uniformly Lipschitz operator from H into H, that is, suppose there exists a constant K < ∞ such that ‖F(z) − F(y)‖ ≤ K‖z − y‖ for all z, y ∈ H. Then, denoting by I the identity operator, the operator (I − F)⁻¹ is continuous at every x ∈ H.

We will use this theorem for a particular kind of operator, defined as follows. Given Φ(x) as in Theorem 4 and f(x) uniformly Lipschitz, define the operator F : R^n → R^n by F(x) = J_Φ(x) f(x) and, consequently, the operator F̃ : L²([0, T]; R^n) → L²([0, T]; R^n) defined by

  \tilde F h = l, \qquad l(t) = \int_0^t F(h(\tau))\, d\tau.     (8)

Now we can establish when a function of the form F̃ is uniformly Lipschitz.

Lemma 15. Given F̃ as in (8), F̃ is uniformly Lipschitz if there exists a constant K such that

  \left\| \frac{dF(h)}{dh} \right\| \le K     (9)

for all h ∈ L²([0, T]; R^n).

Proof.

  \begin{aligned}
  \|\tilde F(h_1) - \tilde F(h_2)\|^2
  &= \int_0^T \|\tilde F(h_1)(t) - \tilde F(h_2)(t)\|^2\, dt \\
  &= \int_0^T \Big\| \int_0^t F(h_1(\tau)) - F(h_2(\tau))\, d\tau \Big\|^2 dt \\
  &= \int_0^T \Big\| \int_0^t \Big( \int_0^1 \frac{dF(h)}{dh}\Big|_{h=\alpha h_1+(1-\alpha)h_2} \frac{dh}{d\alpha}\, d\alpha \Big) d\tau \Big\|^2 dt \\
  &\le \int_0^T \Big( \int_0^t Q(\tau, h_1, h_2)\, \|h_1(\tau) - h_2(\tau)\|\, d\tau \Big)^2 dt \\
  &\le T \int_0^T [Q(\tau, h_1, h_2)]^2\, d\tau\; \|h_1 - h_2\|^2,
  \end{aligned}

where Q(\tau, h_1, h_2) = \int_0^1 \big\| \frac{dF(h)}{dh}\big|_{h=\alpha h_1+(1-\alpha)h_2} \big\|\, d\alpha. Then

  \|\tilde F(h_1) - \tilde F(h_2)\| \le \Big( T \int_0^T [Q(\tau, h_1, h_2)]^2\, d\tau \Big)^{1/2} \|h_1 - h_2\|,     (10)

so that, if there exists a constant K with ‖dF(h)/dh‖ ≤ K for all h ∈ L²([0, T]; R^n), then Q ≤ K and the bound in (10) is at most T K ‖h₁ − h₂‖, i.e. F̃ is uniformly Lipschitz. □
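As a quick numerical sanity check of the resulting bound ‖F̃(h₁) − F̃(h₂)‖ ≤ T K ‖h₁ − h₂‖ (an illustration only, not part of the paper; the choice F = sin, with K = 1, and the grid are arbitrary), one can discretise [0, T] and compare the two sides for random h₁, h₂:

```python
# Illustrative numerical check (not from the paper) of the Lipschitz bound
# ||F_tilde(h1) - F_tilde(h2)|| <= T*K*||h1 - h2|| with F = sin, so K = 1.
import numpy as np

T, N = 1.0, 1000
dt = T / N
rng = np.random.default_rng(0)

def F_tilde(h):
    # (F_tilde h)(t) = int_0^t F(h(tau)) dtau, here with F = sin (|dF/dh| <= 1)
    return np.cumsum(np.sin(h)) * dt

def norm_L2(h):
    return np.sqrt(np.sum(h ** 2) * dt)

h1, h2 = rng.standard_normal(N), rng.standard_normal(N)
lhs = norm_L2(F_tilde(h1) - F_tilde(h2))
rhs = T * 1.0 * norm_L2(h1 - h2)
print(lhs <= rhs, round(lhs, 4), round(rhs, 4))
```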

Theorem 6. Let (H, C, µ) be a cylinder probability triple on the real separable Hilbert space H. Moreover, let H₁ and H₂ be two separable Hilbert spaces endowed with the measures µ₁ and µ₂, respectively. Let f₁ : (H, C, µ) → (H₁, µ₁) be a PRV and let f₂ : (H₁, µ₁) → (H₂, µ₂) be a continuous function. Then the map f₂ ∘ f₁ : (H, C, µ) → (H₂, µ₂) is a PRV.

Another important result, due to Balakrishnan and stated in [2], is the following.

Theorem 7. Let f(ω) = Lω, where L is any linear bounded transformation mapping H into H'. Then f(·) is a PRV if and only if L is Hilbert-Schmidt.

4. WHITE NOISE SOLUTION

In the following we will use W^m to indicate the space L²([0, T]; R^m). If the hypotheses of Theorem 4 hold, there exists a map Φ(x) as in (7) which is a diffeomorphism. We can apply Φ(x) to (4) and obtain the following equivalent system in the new variable z(t) = Φ(x(t)):

  \dot z(t) = \frac{d\Phi(x(t))}{dt} = \frac{d\Phi(x(t))}{dx(t)} \frac{dx(t)}{dt}
  = J_\Phi(x(t)) f(x(t)) + J_\Phi(x(t)) g(x(t))\,\omega(t)
  = F(z(t)) + B\,\omega(t),     (11)

where the relation x(t) = Φ⁻¹(z(t)) has been used, F(·) is a nonlinear mapping and B is a constant matrix. Integrating both sides of (11) from 0 to t and defining

  \tilde F : W^n \to W^n, \qquad \tilde F h = l, \qquad l(t) = \int_0^t F(h(\tau))\, d\tau;

  \tilde B : W^m \to W^n, \qquad \tilde B h = l, \qquad l(t) = \int_0^t B h(\tau)\, d\tau;

and choosing, without loss of generality, zero as the initial condition, we obtain

  z = \tilde F z + \tilde B \omega, \qquad \text{i.e.} \qquad z = (I - \tilde F)^{-1} \tilde B \omega.

If F̃ is uniformly Lipschitz then, by Theorem 5, (I − F̃)⁻¹ is a continuous mapping. On the other hand, since B̃ is Hilbert-Schmidt, B̃ω is a PRV by Theorem 7. Then, by Theorem 6, z is a PRV. Under the hypothesis that Φ(·) is a global diffeomorphism, we can define the operator

  \bar\Phi^{-1} : W^n \to W^n, \qquad \bar\Phi^{-1}(h) = \Phi^{-1}(h(\cdot)).     (12)

Then x = Φ̄⁻¹(z) is a PRV too by Theorem 6, Φ̄⁻¹(·) being a continuous mapping. Therefore the white noise solution of (4) exists.

5. APPLICATION

In this section we prove that every equation of the form (4) with p = 1 (g : R^n → R^n, ω(t) ∈ R) has a solution which is a PRV. In this case only the following hypotheses are needed:

H1. g(x) ≠ 0 for all x ∈ R^n.
H2. Φ(x), defined as in Theorem 4, is a global diffeomorphism.
H3. dF(h)/dh of Lemma 15 is bounded.

Lemma 16. Given a system of the form (4) with p = 1, if H1-H3 are verified then x is a PRV.

Proof. Given g(x) ∈ R^n such that g(x) ≠ 0 for all x ∈ R^n, the distribution generated by g(x), G = span{g(x)}, is clearly nonsingular of dimension 1. To show that G is involutive it is enough to show that rank(g) = rank(g, [g, g]), which is obviously true since [g, g] = 0. By the Frobenius Theorem, G is completely integrable, so there exist n − 1 scalar functions λ₁, ..., λ_{n−1} whose differentials are such that

  \frac{d\lambda_i}{dx}\, g = 0, \qquad 1 \le i \le n-1.

Moreover, since g(x) is uniformly Lipschitz, there exists a scalar function λ̄ such that (dλ̄/dx) g = 1. Then the relevant matrix constructed as in (6) is constant and has full rank, so by Theorem 4 there exists a Φ(x) which is a diffeomorphism. Since dF(h)/dh is bounded, F̃ is uniformly Lipschitz by Lemma 15. Then the same method proposed in Section 4 can be applied to prove that x is a PRV. □

Remark 17. To show that x is a PRV it is enough to prove that the previous Lemma holds for all x ∈ X, where X is the manifold of all possible realizations of the state process x. That is, hypotheses H1, H2, H3 need to hold only on X instead of on all of R^n. For the sake of clarity we proved the result only for the general case, since the proof for x ∈ X follows easily. A numerical sketch of the scalar-noise construction is given below.
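The following sketch (not part of the original paper; f, g, the grids, the noise sample and all discretisation choices are illustrative assumptions) carries out the scalar-noise construction of Lemma 16 numerically for ẋ = sin(x) + (2 + cos(x)) ω, so that g ≥ 1 and H1 holds: it tabulates Φ(x) = ∫₀ˣ ds/g(s), solves z = F̃z + B̃ω by Picard iteration as in Section 4, and maps back through x = Φ⁻¹(z).

```python
# Numerical sketch (not from the paper) of the scalar-noise construction:
# dx = f(x) dt + g(x) w dt with f = sin, g = 2 + cos, so g(x) >= 1 (H1 holds).
# Phi(x) = int_0^x ds/g(s) gives dz = F(z) dt + w dt with F(z) = f(x)/g(x),
# x = Phi^{-1}(z); z is obtained from z = F_tilde z + B_tilde w by Picard iteration.
import numpy as np

f = lambda x: np.sin(x)
g = lambda x: 2.0 + np.cos(x)

# Tabulate Phi (strictly increasing, hence globally invertible) and its inverse.
xs = np.linspace(-30.0, 30.0, 60001)
h  = 1.0 / g(xs)
Phi_tab = np.concatenate(([0.0], np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(xs))))
Phi_tab -= np.interp(0.0, xs, Phi_tab)            # normalise so that Phi(0) = 0
Phi_inv = lambda z: np.interp(z, Phi_tab, xs)

T, N = 1.0, 1000
t  = np.linspace(0.0, T, N)
dt = t[1] - t[0]
rng = np.random.default_rng(1)
w  = rng.standard_normal(N) / np.sqrt(dt)         # band-limited white-noise sample

F  = lambda z: f(Phi_inv(z)) / g(Phi_inv(z))
Bw = np.cumsum(w) * dt                            # (B_tilde w)(t) = int_0^t w(s) ds

z = np.zeros(N)
for _ in range(30):                               # Picard iteration for z = F_tilde z + B_tilde w
    z = np.cumsum(F(z)) * dt + Bw

x = Phi_inv(z)                                    # white-noise solution of the original system
print(f"x(0) = {x[0]:.3f},  x(T) = {x[-1]:.3f}")
```

The iteration converges here because F̃ inherits a finite Lipschitz constant from the bound on dF/dh (H3 and Lemma 15), the standard Picard situation; the number of iterations and the grids are illustrative choices only.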

6. CONCLUSIONS

In this work we proved the existence of white noise solutions, under suitable hypotheses, for a class of nonlinear systems. In particular, we have shown that nonlinear stochastic equations with multiplicative scalar noise admit a white noise solution under mild assumptions. A natural continuation of this work is the generalization to other types of equations. The study of the Radon-Nikodym derivative and of the measure induced by the white noise solution are other important topics to investigate in view of potential applications, for example the identification of unknown equation parameters through the maximum likelihood approach.

REFERENCES

[1] A.V. Balakrishnan. Stochastic bilinear partial differential equations. Proc. of the Second USA-Italy Seminar on Variable Structure Systems, 1975.
[2] A.V. Balakrishnan. Applied Functional Analysis. Springer-Verlag, Applications of Mathematics, 1976.
[3] A.V. Balakrishnan. A white noise version of the Girsanov formula. Proc. Symp. Stochastic Differential Equations, 1976.
[4] A.V. Balakrishnan. Parameter estimation in stochastic differential systems: theory and applications. Advances in Statistics, Academic Press, 1976.
[5] A.V. Balakrishnan. Radon-Nikodym derivatives of a class of weak distributions on Hilbert spaces. Appl. Math. Opt., 3, 209-225, 1977.
[6] A.V. Balakrishnan. Likelihood ratios for signals in additive white noise. Appl. Math. Opt., 3, 341-356, 1977.
[7] A.V. Balakrishnan. Parameter estimation in stochastic differential systems: theory and application. Developments in Statistics, Academic Press, New York, 1, 1978.
[8] A.V. Balakrishnan. Nonlinear white noise theory, basic concepts. Appl. Math. Opt., 3, 341-356, 1977.
[9] R.E. Mortensen. Balakrishnan's white noise model versus the Wiener process model. IEEE Conference on Decision and Control, 1977.
[10] G. Kallianpur, R. Karandikar. White noise calculus and non-linear filtering theory. Annals of Probability, 13, 1033, 1985.
[11] A. Germani, P. Sen. White noise solutions for a class of distributed feedback systems with multiplicative noise. Ricerche di Automatica, Oderisi Gubbio, 10, 1, 1979.
[12] A. De Santis, A. Gandolfi, A. Germani, P. Tardelli. Polynomial approximation for a class of physical random variables. Proceedings of the American Mathematical Society, 120, 1, 1994.
[13] A. Bagchi, R. Mazumdar. Direct modelling of white noise in stochastic systems. Modelling, Identification and Adaptive Control, Lecture Notes in Control, Springer, 1991.