SELF-ORGANIZING SENSOR NETWORKS DESIGNED AS A POPULATION OF MUTUALLY COUPLED OSCILLATORS

Sergio Barbarossa, Francesco Celano
INFOCOM Dpt., University of Rome "La Sapienza", Via Eudossiana 18, 00184 Rome, Italy
E-mail: @infocom.uniroma1.it

ABSTRACT

The mathematical models of populations of biological oscillators are a powerful tool for designing sensor networks with high energy efficiency, fault tolerance and scalability. Recently, Hong and Scaglione have proposed a novel design paradigm, based on pulse coupled oscillators, where the decision of each sensor is encoded as the time position of the emitted pulses. In this work, we propose an alternative approach, based on linear (not necessarily pulse) oscillators with nonlinear coupling, that provides a novel framework to design sensor networks that, in addition to the properties of distributed decision, self-synchronization, fault tolerance, scalability, and small complexity, allows for local information storage or information propagation, in analog form, through mutual coupling among nearby oscillators.

1. INTRODUCTION

Sensor networks are gaining more and more importance as a tool to acquire information about environments difficult to reach. Examples of applications include the detection of intruders or the monitoring of temperature, vibrations, radiation or pollution [1]. The basic trend in the current research on sensor networks is to move from centralized and highly reliable single-node systems to a multitude of cheap, lightweight components that are possibly individually unreliable but, as a whole, are capable of solving complex tasks through spontaneous self-organization.
The major problems in the design of sensor networks are the following: i) each sensor should have minimum energy consumption and should thus perform only simple tasks; ii) each sensor should be allowed to "fall asleep", at random times, for periodic recharge of its battery, without compromising the network functionality; iii) the overall network should have detection and estimation capabilities superior to those of each individual sensor, possibly without the need for a centralized fusion center; iv) the information gathered from the sensor nodes should spread through the network without the need for complicated multiple access or routing techniques; v) the network should be scalable, i.e., capable of operating correctly irrespective of the number of sensors. This set of requirements poses an extremely challenging, and apparently impossible, problem to solve. Nevertheless, the analysis of many biological organisms reveals that in nature there are many systems meeting the previous requirements. One example is the heartbeat. In our bodies, there is no master clock operating with great precision. Nevertheless, our heart beats in a very regular fashion, it is capable of adapting to external solicitations, with very limited energy consumption, and it lasts

(hopefully) for a long time. Furthermore, each natural pacemaker cell, responsible for the cardiac rhythm, has a life cycle much shorter than our average lifetime. The stability of the overall system is indeed the result of the collective behavior of the population of mutually coupled pacemaker cells, whose individual reliability and precision are limited but which, as a whole, give rise to an incredibly reliable system. The mathematics of populations of mutually coupled oscillators thus provides an important tool to synthesize sensor networks capable of satisfying the previous requirements. This idea was initially proposed by Hong and Scaglione [2], [3], where the sensors operate as pulse coupled integrate-and-fire oscillators, according to Peskin's model proposed to describe the heart physiology [6]. In [2], [3], the decision of each sensor is encoded into the time position of the pulses emitted periodically from each sensor. It was proved in [7] that, under very mild conditions, a population of globally coupled pulsed oscillators converges to a unique stable equilibrium where all oscillators emit a pulse at the same time. It is this self-synchronization capability that was exploited in [2], [3] to achieve distributed decision and information propagation in a very simple way. In this work we propose an alternative mathematical model for the population of oscillators. Our underlying motivation is twofold. First, we remove the potential limitations of [2], [3] resulting from encoding the information into temporal shifts: this could in fact cause an ambiguity problem for distant observers, unable to discriminate between the information-bearing time shift and the propagation delay. Second, we design a system capable of switching between two alternative behaviors, as a local information storage or as an information gatherer and spreader, simply by acting on a few control parameters.
The overall network may operate as a distributed detector or as a distributed estimator of a continuous field.

2. MUTUALLY COUPLED OSCILLATORS

In our proposed scheme, the network is composed of N nodes, each consisting of a sensor and a dynamical system. The sensor may work either as a detector or as an estimator. In the first case, the i-th sensor takes a decision about the event of interest (like, e.g., the intrusion of a person or the level of radiation) and sets a parameter, let us say ω_i, of the associated dynamical system accordingly. For example, in case of detection it sets ω_i = Ω_1, whereas in case of no detection it sets ω_i = Ω_0. Alternatively, if the sensor works as an estimator, it sets ω_i proportional to the estimated variable. In this work we concentrate on the behavior of the network as a distributed detector.

After sensing the environment, the dynamical system (oscillator) present in each node (say, the i-th one) is let evolve from an initial condition given by the parameter ω_i, according to the following equation

θ̇_i(t) = ω_i + (K/c_i) Σ_{j=1}^{N} a_ij F[θ_j(t) − θ_i(t)],   i = 1, ..., N,   (1)

where θ_i(t) is the state function of the i-th sensor (θ_i(0) may be initialized as a random number); F(·) is a nonlinear function, assumed to be an odd function of its argument; the coefficients a_ij are real variables that describe the coupling between sensors i and j; K is a control loop gain; c_i is a coefficient that quantifies the attitude of the i-th sensor to adapt its values as a function of the signals received from the other nodes. The function F(x) takes into account the mutual coupling between the sensors. By reciprocity, the coefficients a_ij satisfy the condition a_ij = a_ji. In this paper, we choose F(x) = sin(x). In such a case, the model (1), when a_ij = 1 for all i and j, is known as Kuramoto's model [5] and represents a population of linear dynamical systems (oscillators) which are nonlinearly coupled. In our work, the coefficients a_ij take into account the local coupling between oscillators, so that two oscillators are coupled (i.e., a_ij ≠ 0) only if their distance is smaller than the coverage radius of each sensor¹. In the rest of the paper, we will denote the parameters ω_i and the functions θ_i(t) as the natural pulsations and the instantaneous phases of the i-th oscillator, in accordance with Kuramoto's terminology. However, it is important to emphasize that neither ω_i nor θ_i(t) is necessarily the pulsation or instantaneous phase of a sinusoidal carrier. They are, in general, physical parameters whose choice is dictated by implementation constraints. For example, the oscillators may be pulsed oscillators, as in ultra-wideband systems, where θ_i(t) is the time at which the i-th node emits a pulse. In this case, the information is carried by the rate at which the pulse emission time varies, somehow mimicking the functioning of the neurons in the brain. We say that the overall population synchronizes if all sensors oscillate with the same pulsation, i.e., θ̇_i(t) = θ̇*(t), ∀i.
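As a concrete illustration, model (1) with F(x) = sin(x) can be integrated numerically. The following is a minimal sketch using forward-Euler integration; the helper name `simulate_network`, the step size, and the fully connected example topology are illustrative assumptions, not part of the design above.

```python
import numpy as np

def simulate_network(omega, A, K=100.0, c=None, dt=1e-4, steps=20000, seed=0):
    """Forward-Euler sketch of model (1) with F(x) = sin(x).

    omega -- natural pulsations omega_i (set by each sensor's local decision)
    A     -- symmetric coupling matrix; A[i, j] != 0 only for nodes within range
    c     -- adaptation coefficients c_i (all ones by default)
    Returns the history of the instantaneous pulsations theta_dot_i(t).
    """
    rng = np.random.default_rng(seed)
    N = len(omega)
    c = np.ones(N) if c is None else np.asarray(c, dtype=float)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)   # random initial phases theta_i(0)
    history = np.empty((steps, N))
    for t in range(steps):
        # coupling term of (1): (K / c_i) * sum_j a_ij * sin(theta_j - theta_i)
        coupling = (K / c) * np.sum(A * np.sin(theta[None, :] - theta[:, None]), axis=1)
        history[t] = omega + coupling
        theta = theta + dt * history[t]
    return history

# Fully connected 21-node network; binary decisions Omega0 = 0, Omega1 = 100.
N = 21
A = np.ones((N, N)) - np.eye(N)
omega = np.where(np.random.default_rng(1).random(N) < 0.2, 0.0, 100.0)
history = simulate_network(omega, A)
# After the transient, every theta_dot_i settles near the common pulsation,
# which reduces to the plain average of the omega_i when c_i = 1.
print(np.allclose(history[-1], omega.mean(), atol=0.1))
```

Because the coupling matrix is symmetric and sin is odd, the weighted sum of the θ̇_i is conserved throughout the integration, which is what pins the locked pulsation to the average.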
It is easy to verify that, thanks to the reciprocity a_ij = a_ji and to the oddness of F(x), if we multiply both sides of (1) by c_i and sum the resulting equations over the index i, we get

Σ_{i=1}^{N} c_i θ̇_i(t) = Σ_{i=1}^{N} c_i ω_i.   (2)

Hence, if the system synchronizes, the common pulsation must necessarily be constant and equal to

θ̇*(t) := ω* = (Σ_{i=1}^{N} c_i ω_i) / (Σ_{i=1}^{N} c_i).   (3)

If the coefficients c_i are all equal, ω* is simply the average of the initial pulsations ω_i. However, if each sensor knows the SNR with which it has taken its initial decision, it can set c_i = SNR_i, so that the final common pulsation becomes

ω* = (Σ_{i=1}^{N} SNR_i ω_i) / (Σ_{i=1}^{N} SNR_i).   (4)

¹The coverage radius is assumed to be the same for all sensors, even though this could be changed to accommodate different network topological models, like small worlds or scale-free networks.
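A minimal numeric example of the consensus pulsations (3) and (4); the five decision and SNR values below are hypothetical, chosen only to show the effect of the weighting:

```python
import numpy as np

# Hypothetical example: five sensors' binary decisions (Omega0 = 0,
# Omega1 = 100) and their linear-scale SNRs, used as weights c_i = SNR_i.
omega = np.array([0.0, 100.0, 100.0, 0.0, 100.0])
snr = np.array([0.5, 10.0, 8.0, 0.3, 12.0])

omega_star_plain = omega.mean()                      # eq. (3) with equal c_i
omega_star_snr = np.sum(snr * omega) / np.sum(snr)   # eq. (4)

# The SNR weighting pulls the consensus toward the reliable sensors' decision.
print(omega_star_plain, round(omega_star_snr, 2))
```

Here the two low-SNR sensors voted Ω_0, so the unweighted consensus sits at 60 while the SNR-weighted consensus moves close to Ω_1 = 100.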

This is an interesting behavior, as it shows that the sensors with the highest SNR weight more in the distributed decision. Ideally, a noiseless sensor (i.e., one with infinite SNR) forces all other sensors to take its same decision and thus prevents the other sensors from making errors, even if they are noisy. To better understand the behavior of the proposed system, we start with the simple case of two coupled oscillators and then illustrate the general case.

2.1. Two-oscillator system

A system with only two oscillators is relatively easy to analyze, as described in [8] through an insightful and elegant geometric interpretation. Introducing the function ψ(t) := θ_2(t) − θ_1(t), and setting a_12 = 1/2, we can rewrite (1) as (we set, for simplicity, c_i = 1, ∀i)

ψ̇(t) = ω_2 − ω_1 − K sin[ψ(t)].   (5)

Fig. 1. Variation of ψ̇(t) as a function of ψ(t).

Representing ψ̇(t) as a function of ψ(t), as in Fig. 1 (where we suppose, with no loss of generality, that ω_2 > ω_1), it is easy to realize that there exists only one stable equilibrium point in each period. The equilibria correspond to the points where ψ̇(t) = 0. Within each period of 2π, there are two points where ψ̇(t) = 0, but only one of them is stable, namely the one represented by the circle drawn in Fig. 1. In fact, looking at the sign of ψ̇(t), the system reacts to any slight shift from the circle by moving back towards it (the arrows in Fig. 1 show the flow direction, given by the sign of ψ̇(t)). Conversely, the equilibrium represented by the star is unstable, because any shift from that point leads to an indefinite departure from the equilibrium. In general, there exists one stable equilibrium point, in each period, if

K > |ω_2 − ω_1|.   (6)

At the equilibrium, ψ̇(t) = 0, so that θ̇_1(t) and θ̇_2(t) coincide and are equal to ω* = (ω_1 + ω_2)/2. The instantaneous phases differ by a constant term, equal to

θ_2(t) − θ_1(t) = arcsin((ω_2 − ω_1)/K).   (7)
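The lock condition (6) and the residual offset (7) can be checked by integrating (5) directly; a short sketch (the helper name, initial phase difference, and step size are illustrative choices):

```python
import numpy as np

def two_oscillator_offset(omega1, omega2, K, dt=1e-4, steps=50000):
    """Integrate eq. (5) for psi(t) = theta2(t) - theta1(t) by forward Euler."""
    if K <= abs(omega2 - omega1):
        return None                  # condition (6) violated: no phase lock
    psi = 0.5                        # arbitrary initial phase difference
    for _ in range(steps):
        psi += dt * (omega2 - omega1 - K * np.sin(psi))
    return psi

# With K = 100 >> |omega2 - omega1| = 20, the oscillators lock and the
# residual offset matches the closed form arcsin((omega2 - omega1)/K) of (7).
offset = two_oscillator_offset(omega1=10.0, omega2=30.0, K=100.0)
print(np.isclose(offset, np.arcsin(0.2), atol=1e-6))

# Below the threshold (6) there is no stable equilibrium: no lock.
print(two_oscillator_offset(omega1=10.0, omega2=30.0, K=15.0))
```

Note that the fixed point of the Euler map coincides with the true equilibrium of (5), so the offset converges to arcsin((ω_2 − ω_1)/K) up to floating-point precision.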

This equation shows that the only way to reduce this phase difference consists in choosing a value of K sufficiently greater than the difference |ω_2 − ω_1|.

2.2. N-oscillator system

Whereas the case with only two sensors is easy to analyze, it is much more difficult to study the general case of N sensors. Nevertheless, for large N, we may exploit the mean field approximation [5], typically used in the study of phase transitions in thermodynamics, to derive an approximate solution. In the following derivations, for simplicity of notation, we set c_i = 1, but the extension to the general case is straightforward. If we introduce the complex function

r_i(t) e^{jα_i(t)} := Σ_{j=1}^{N} a_ij e^{jθ_j(t)},   (8)

the mean field approximation consists in assuming that, for large values of N, after a transient, if the system converges, the function r_i(t) tends to a constant r, independent of the index i, and θ_j(t) = ω* t + θ_j0, so that, for large N, we have

Σ_{j=1}^{N} a_ij e^{jθ_j(t)} ≈ r e^{j(ω* t + α)}.   (9)

Multiplying both sides of (9) by e^{−jθ_i(t)} and taking the imaginary part, the mean field approximation (9) allows us to rewrite (1) as

θ̇_i(t) = ω_i − K r sin[θ_i(t) − ω* t − α],   i = 1, ..., N.   (10)

The interesting aspect of this approximation is that the state equation of each sensor has the same behavior as in the two-sensor case, irrespective of the number of coupled oscillators. Proceeding then as in the two-sensor case, there exists one stable equilibrium if

K r > |ω_i − ω*|.   (11)

What is important to emphasize about (11) is that the existence of an equilibrium depends on the value of r, which, in its turn, depends on the collective behavior of the oscillators. Looking at the definition (8), at the beginning of the state evolution the value of r is typically small, and there might be just a few oscillators satisfying (11). However, as the number of synchronized oscillators increases, the value of r increases. As a consequence, it is more likely that other oscillators will satisfy condition (11). In turn, the value of r increases further, and so on. There is then a sort of positive feedback, such that more and more oscillators become locked to each other. Conversely, since r ≤ d, the network cannot synchronize if there are some oscillators for which

K d < |ω_i − ω*|.   (12)

The maximum value of r is equal to the network degree. Hence, the larger the coverage radius of each oscillator, the higher the probability for the network to synchronize. But increasing the coverage radius requires more transmission power. Alternatively, given d, we may increase K to prevent (12) from holding.

We will now show that, in practice, the mean field approximation (9) is very good even for values of N that are not excessively large, provided that K is sufficiently large. As an example, in Fig. 2 we report the values of r_i(t) obtained over 20 independent realizations of a network composed of N = 21 sensors. In each realization, each sensor starts with a random phase θ_i(0), uniformly distributed between 0 and 2π. The natural pulsations ω_i, at the beginning of each experiment, are generated as binary random variables equal to Ω_0 = 0, with probability p_0 = 0.2, or to Ω_1 = 100, with probability 1 − p_0 = 0.8, as they are supposed to be the result of a binary decision. The network is generated as a regular graph, i.e., a graph where the number of sensors coupled to any given node is the same for all nodes. Using graph terminology, we call this number d the degree of the network. In particular, in Fig. 2 we show the behavior of r_i(t), as a function of time, for different degrees, i.e., d equal to 10, 16, and 20 (full connectivity).

Fig. 2. Variation of r_i(t) as a function of t, for different degrees.

Fig. 2 shows that, after a transient, all functions r_i(t) tend to values slightly less than the network degree d. Ideally, the maximum possible value of r is exactly d, and this value is achievable if all oscillators are perfectly synchronous with each other, so that all exponentials e^{jθ_j(t)} in (8) sum up coherently. However, in general this is not the case, and we will provide, later on, an approximate procedure to evaluate r analytically.
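The order parameter r_i(t) of (8) is straightforward to compute from the phases. The sketch below (where the ring-lattice construction of a degree-d regular graph is an illustrative assumption) confirms the remark above: r_i equals the degree d exactly when all phases coincide, and is much smaller for incoherent phases:

```python
import numpy as np

def order_params(theta, A):
    """r_i per eq. (8): modulus of sum_j a_ij * exp(j * theta_j)."""
    return np.abs(A @ np.exp(1j * theta))

# Degree-d regular graph built as a ring lattice (illustrative topology).
N, d = 21, 10
A = np.zeros((N, N))
for i in range(N):
    for k in range(1, d // 2 + 1):       # d nearest neighbours on the ring
        A[i, (i + k) % N] = A[i, (i - k) % N] = 1.0

theta_sync = np.full(N, 1.3)             # perfectly synchronized phases
theta_rand = np.random.default_rng(0).uniform(0, 2 * np.pi, N)

# Coherent sum: r_i = d exactly; incoherent phases give strictly smaller r_i.
print(np.allclose(order_params(theta_sync, A), d))
print(order_params(theta_rand, A).max() < d)
```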

We now derive the value of r analytically in the case where the network operates as a collective detector. In such a context, each sensor has two alternative hypotheses and decides by comparing the likelihood ratio with a threshold: if the threshold is exceeded, the oscillator sets ω_i = Ω_1; otherwise it sets ω_i = Ω_0. Denoting by 1 − p_0 the detection probability, in case of synchronization, if the number of nodes is sufficiently high, each sensor oscillates with a final pulsation approximately equal to θ̇(t) = ω* = p_0 Ω_0 + (1 − p_0) Ω_1. The instantaneous phase of each oscillator is θ_i(t) = ω* t + θ_i0, where θ_i0 is a binary random variable that assumes the values

θ_i0 = Θ_0 := arcsin((ω* − Ω_0)/(K r)),  with prob. p_0;
θ_i0 = Θ_1 := arcsin((ω* − Ω_1)/(K r)),  with prob. 1 − p_0.   (13)

Setting, for simplicity, a_ij = 1 whenever nodes i and j are at a distance less than a given coverage radius R_0, the variable r introduced in (9) is a discrete random variable, with pdf

p_R(r) = Σ_{k=0}^{d} C(d, k) p_0^k (1 − p_0)^{d−k} δ(r − r_k),   (14)

where d is the network degree (equal for all nodes) and

r_k := sqrt(k² + (d − k)² + 2 k (d − k) cos(Θ_1 − Θ_0)).   (15)

The expected value of r is then

m_R = Σ_{k=0}^{d} C(d, k) p_0^k (1 − p_0)^{d−k} r_k := f(r).   (16)

This is a function of r. Imposing that the expected value m_R coincide with the value of r used in the mean field approximation yields an implicit equation f(r) = r that can be solved to find r as the fixed point of f(r). An example is shown in Fig. 3, relative to the same experimental setup as in Fig. 2, for different network degrees.

Fig. 3. Graphical solution of the implicit equation yielding the value of r.

From Fig. 3 we notice that the fixed points of f(r), shown as the intersection points, are very close to the asymptotic values of r_i(t) provided by the simulations shown in Fig. 2, for the different network degrees.

3. DISTRIBUTED DETECTION AND ESTIMATION

We now show the performance of the proposed network used first as a distributed detector and then as a local information storage device. The setup is the same described in the previous section. Each oscillator starts with a value of ω_i depending on its initial decision and then evolves as a member of a population of mutually coupled oscillators. The SNR is supposed to be the same for all sensors, so that we can set c_i = 1, ∀i. The motivation for the mutual coupling is precisely to make the sensors arrive at a common decision, without requiring complicated exchanges of data among the sensors themselves. We quantify here the gain in detection performance achievable through mutual coupling. After running the synchronization algorithm, each sensor compares its own final θ̇_i(t) with a threshold equal to the average value (Ω_0 + Ω_1)/2. We assume homogeneity and stationarity, so that all sensors have the same detection and false alarm probabilities. We choose K, Ω_0 and Ω_1 so that K d > |Ω_1 − Ω_0|. An example is reported in Fig. 4, showing 100 independent realizations of θ̇_i(t) for a network of 21 sensors.
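The fixed point of f(r) in (16) can also be found numerically by simple iteration r ← f(r), starting from the full-coherence value r = d. The sketch below uses the same parameters as Figs. 2-3 (Ω_0 = 0, Ω_1 = 100, p_0 = 0.2, K = 100); the starting point and iteration count are illustrative choices:

```python
import math

def f(r, d, p0=0.2, Omega0=0.0, Omega1=100.0, K=100.0):
    """m_R = f(r): expected order parameter from eqs. (13)-(16)."""
    omega_star = p0 * Omega0 + (1 - p0) * Omega1
    Th0 = math.asin((omega_star - Omega0) / (K * r))
    Th1 = math.asin((omega_star - Omega1) / (K * r))
    m = 0.0
    for k in range(d + 1):
        r_k = math.sqrt(k**2 + (d - k)**2
                        + 2 * k * (d - k) * math.cos(Th1 - Th0))
        m += math.comb(d, k) * p0**k * (1 - p0)**(d - k) * r_k
    return m

def fixed_point(d, iters=100):
    r = float(d)                 # start from the full-coherence value r = d
    for _ in range(iters):
        r = f(r, d)
    return r

# As in Fig. 3, the fixed point sits slightly below the network degree d.
for d in (10, 16, 20):
    print(d, round(fixed_point(d), 3))
```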

Fig. 4. Variation of θ̇_i(t) as a function of t.

In all trials, all sensors synchronize to the same value; equivalently, the sensors reach a common decision, as desired. In case of perfect synchronization, θ̇_i(t) coincides with the value ω* given by (3). In the given detection context, ω* is a random variable, with mean m_ω = p_0 Ω_0 + (1 − p_0) Ω_1 and variance σ_ω² = p_0 (1 − p_0)(Ω_1 − Ω_0)²/d, where d is the degree of each sensor. Approximating its distribution with a Gaussian pdf (valid for d p_0 (1 − p_0) ≫ 1), we may approximate the final detection probability as

P_d = Q( sqrt( d (p_0 − 0.5)² / (p_0 (1 − p_0)) ) ),   (17)

where Q(x) := (1/sqrt(2π)) ∫_x^∞ e^{−u²/2} du. In Fig. 5, we report the probability that a sensor takes a wrong decision, obtained by simulation (stars or plus signs) or using the theoretical approximation (17) (solid lines), as a function of the SNR at each sensor, assuming additive white Gaussian noise. The number of nodes is reported beside each set of curves. The sensors are located over a square grid, with inter-element distance equal to 1. The stars refer to the situation where the coverage radius of each node is r_0 = 4.1 and K = 100. The plus signs refer instead to the case where the coverage radius is 2.1 and K = 500. The curve relative to N = 1 is the reference curve where each sensor takes its own decision, without coupling with the other sensors. We can notice a very good agreement between theory and simulation. What is more interesting to notice is how the performance improves as the number of sensors increases. This means that, to guarantee a given final overall error probability, or a given detection probability for a given false alarm probability, if the network is composed of many coupled oscillators one can strongly relax the requirement on the SNR of each single sensor. In other words, the network of coupled oscillators is much more reliable than the single oscillators. Finally, we consider the application of our sensor network as a distributed device to store information locally.
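Approximation (17) can be evaluated directly; the sketch below expresses Q(x) through the complementary error function and shows how quickly the argument of Q, which grows like sqrt(d), drives the probability down as the degree increases (the sample value of p_0 is illustrative):

```python
import math

def Q(x):
    """Gaussian tail Q(x) = (1/sqrt(2*pi)) * integral from x to inf of e^{-u^2/2} du."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def prob_approx(d, p0):
    """Evaluate approximation (17) for a network of degree d."""
    return Q(math.sqrt(d * (p0 - 0.5) ** 2 / (p0 * (1 - p0))))

# With p0 = 0.2, the Q argument is 0.75 * sqrt(d): coupling many sensors
# shrinks the tail probability very rapidly, as in the curves of Fig. 5.
for d in (1, 16, 64, 100):
    print(d, prob_approx(d, p0=0.2))
```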

Fig. 5. Average error probability: theoretical values (solid lines) and simulations (star and plus signs).

Fig. 6. Noisy field.

In this case, each node of the network senses a physical parameter, which typically represents a continuous function of space. The node samples this function locally and sets its own initial pulsation ω_i proportional to the estimated parameter (ω_i is then typically a continuous variable). In this kind of application, we do not want the network to reach a global common estimate, as that would destroy the local information. To make this possible, the network parameters K and r_0 are chosen so that the network cannot reach global synchronization. More precisely, for a given range of the function to be estimated, K and d are chosen so that (12) is satisfied. The idea is, in principle, similar to what was suggested in [4]. An example is reported in Fig. 6, where each node samples a two-dimensional function f(x, y) superimposed on additive Gaussian noise. The network is composed, as in the previous example, of a set of sensors placed over a rectangular grid of 40 × 40 nodes. The intensity in the figure is proportional to the initial pulsation of each sensor, which is in turn proportional to the observed value. After letting the network evolve according to (1), the final pulsations are shown in Fig. 7.

Fig. 7. Reconstructed field after synchronization.

This example shows that the network is capable of smoothing the individual local decisions, thus improving their reliability, while still retaining the information locally. In conclusion, in this paper we have proposed a novel way of designing a distributed detection or estimation system with minimal complexity in the exchange of information among the sensors and no need for any centralized detector. In this work, we have only mentioned the possibility of adapting the parameters of each dynamical system as a function of the information about the SNR available at each node. In parallel works, we have quantified the advantages achievable by incorporating this possibility, and we have extended the idea to the case where the overall system evolves towards the global maximum likelihood estimator, in the general case of vector observations.

4. REFERENCES

[1] Iyengar, S. S., Brooks, R. R. (Eds.), Distributed Sensor Networks, Chapman & Hall/CRC, Boca Raton, 2005.
[2] Hong, Y.-W., Scaglione, A., "Distributed change detection in large scale sensor networks through the synchronization of pulse-coupled oscillators", Proc. of ICASSP 2004, pp. III-869–872, Lisbon, Portugal, July 2004.
[3] Hong, Y.-W., Cheow, L. F., Scaglione, A., "A simple method to reach detection consensus in massively distributed sensor networks", Proc. of ISIT 2004, Chicago, July 2004.
[4] Campbell, S., Wang, D., Jayaprakash, C., "Synchrony and desynchrony in integrate-and-fire oscillators", Neural Computation, vol. 11, pp. 1595–1619, 1999.
[5] Kuramoto, Y., Chemical Oscillations, Waves, and Turbulence, Dover Publications, August 2003.
[6] Peskin, C. S., Mathematical Aspects of Heart Physiology, Courant Institute of Mathematical Sciences, New York Univ., 1975.
[7] Mirollo, R., Strogatz, S. H., "Synchronization of pulse-coupled biological oscillators", SIAM Journal on Applied Mathematics, vol. 50, pp. 1645–1662, 1990.
[8] Strogatz, S. H., Nonlinear Dynamics and Chaos, pp. 273–278, Perseus Book Publishing, Cambridge, MA, Dec. 2000.