Int. J. Ad Hoc and Ubiquitous Computing, Vol. 15, Nos. 1/2/3, 2014


Random deployment of wireless sensor networks: a survey and approach

Mustapha Reda Senouci*
Ecole Militaire Polytechnique, Algiers 16111, Algeria
E-mail: [email protected]
*Corresponding author

Abdelhamid Mellouk LiSSi Laboratory, UPEC, Paris 94400, France E-mail: [email protected]

Amar Aissani
LRIA Laboratory, USTHB, Algiers 16111, Algeria
E-mail: [email protected]

Abstract: Sensor placement is a fundamental issue in wireless sensor networks (WSNs). The sensor-positions can be predetermined to guarantee the quality of surveillance provided by the WSN. However, in remote or hostile sensor fields, randomised sensor placement often becomes the only option. In this paper, we survey existing random node placement strategies, which we categorise into simple and compound. An empirical study has been carried out, yielding a detailed analysis of the intrinsic properties of random deployment, such as coverage, connectivity, fault-tolerance, and network lifespan. We also investigate the performance of a hybridisation of the simple diffusion model, which places a large number of nodes around the sink, and the constant diffusion, which provides high coverage and connectivity rates. We show that such hybridisation ensures better performance. The obtained results give helpful design guidelines for using random deployment strategies.

Keywords: WSNs; wireless sensor networks; random deployment; sensor placement; coverage; connectivity; fault-tolerance; network lifespan.

Reference to this paper should be made as follows: Senouci, M.R., Mellouk, A. and Aissani, A. (2014) ‘Random deployment of wireless sensor networks: a survey and approach’, Int. J. Ad Hoc and Ubiquitous Computing, Vol. 15, Nos. 1/2/3, pp.133–146.

Biographical notes: Mustapha Reda Senouci is currently working toward the PhD in co-supervision between UPEC (University of Paris-Est, France) and USTHB (University of Science and Technology Houari Boumediene, Algeria). He received the Master’s degree in Mobile Computer Science from USTHB in 2009 (with distinction) and the Engineer degree in Computer Science from the Ecole Militaire Polytechnique (EMP) in 2005 (with distinction), where he is currently a teacher-researcher.
His research interests include mobile ad-hoc networks, wireless sensor networks and wireless sensor/actor networks.

Abdelhamid Mellouk is Full Professor at University of Paris-Est Créteil VdM (UPEC), Networks & Telecommunications (N&T) Department and LiSSi Laboratory, France. He has held several executive national and international positions, and was the founder of the Network Control Research activity at UPEC, with extensive international academic and industrial collaborations. His general area of research is adaptive real-time control for high-speed new-generation dynamic wired/wireless networks, in order to maintain acceptable quality of service/experience for added-value services. He is an active member of the IEEE Communications Society and has held several offices, including leadership positions in IEEE Communications Society Technical Committees.

Amar Aissani is Professor at University of Science and Technology Houari Boumediene (USTHB). He graduated in mathematics from Constantine University (1977) and Minsk University (Belarus, 1980). He received the PhD degree in Applied Mathematics from Vilnius University (Lithuania, 1983) after a dissertation thesis prepared at Minsk University under the supervision of Professor G.A. Medvediev. From 1990 to 1998, he took part in the founding of the Mathematics Department

Copyright © 2014 Inderscience Enterprises Ltd.


(University of Blida). He is the author of several publications and three student textbooks. His field of interest, with his students and colleagues, covers reliability, queueing systems and other stochastic models.

This paper is a revised and expanded version of a paper entitled ‘An analysis of intrinsic properties of stochastic node placement in sensor networks’ presented at the IEEE Global Communications Conference (GLOBECOM 2012), Anaheim, California, USA, 3–7 December 2012.

1 Introduction

A wireless sensor network (WSN) consists of a large number of sensor nodes (or sensors) and one or more management nodes, typically called sinks or base stations. Sensors monitor physical phenomena and produce sensory data; a sink, on the other hand, collects data from the sensors. Such a network can be used to monitor the environment, detect, classify and locate specific events, and track targets over a specific region of interest (RoI) (Akyildiz et al., 2002). Sensor placement is a fundamental issue in WSN design (Akyildiz et al., 2002; Younis and Akkaya, 2008). It determines many intrinsic properties of the WSN, such as its coverage, connectivity, cost and lifespan. Sensors can generally be placed in the RoI either manually (i.e., deterministically) or randomly, depending on the type of sensors, the application, and the RoI (Akyildiz et al., 2002; Younis and Akkaya, 2008). In deterministic deployment, each sensor is placed at predetermined coordinates; this approach is usually pursued for indoor applications (Akyildiz et al., 2002; Younis and Akkaya, 2008), when sensors are expensive, or when their operation is significantly affected by their position. Optimal deterministic deployment is a challenging NP-hard problem for most formulations of sensor deployment (Younis and Akkaya, 2008; Senouci et al., 2011, 2012a). In harsh environments such as a battlefield or a disaster region, deterministic deployment of sensors is very risky and/or infeasible. In this case, random deployment often becomes the only option. Random deployment has been studied extensively over the last decade. Two approaches are commonly used: theoretical studies and empirical studies. The former are usually conducted for very few random deployment models, under several simplifying assumptions (Gupta and Kumar, 1998; Zhang and Hou, 2004; Wan and Yi, 2006; Wang et al., 2008).
Simulation-based analyses, on the other hand, consider a few metrics and several random deployment models (Ishizuka and Aida, 2004a,b; Onur et al., 2007). With both approaches, the effectiveness of random deployment has been clearly established. However, there has been little research on the analysis of the intrinsic properties of random deployment. By intrinsic properties we mean a full set of metrics such as coverage, connectivity, fault-tolerance, and lifespan. Furthermore, previous works fail to provide a holistic view of random deployment properties. Therefore, in this paper, we conduct an empirical study to analyse the intrinsic properties of several random node placement strategies.

The original contributions of this paper are the following. First, we survey and categorise existing random node placement strategies. Second, in order to capture the intrinsic properties of random deployment, we propose to use a set of metrics. We distinguish two classes:

• deployment metrics
• functional metrics.

Third, we carry out extensive simulations yielding a detailed analysis of the intrinsic properties of random node placement strategies. Fourth, we devise a practical random deployment strategy that mixes the simple diffusion and the constant diffusion strategies, and we show that such hybridisation ensures better performance. The obtained results provide helpful design guidelines for using random deployment strategies. The rest of this paper is organised as follows. We survey existing random node placement strategies in Section 2. Section 3 discusses related works. In Section 4, we present the simulation settings; we analyse the experimental results in Sections 5 and 6. Section 7 discusses our practical random deployment strategy. In Section 8, we provide a conclusion and outline perspectives for further work.

2 Stochastic node placement

In random node placement, sensor-positions are defined by a probability density function (PDF).1 Depending on the deployment strategy, the coordinates of the sensor positions may follow a particular distribution. In this section, we define the PDF and briefly describe the characteristics of each random node placement strategy. We categorise the random placement strategies into simple and compound (Figure 1). Simple strategies are mere variants of the simple diffusion strategy, whereas compound strategies are realised by repeated simple diffusion.

2.1 Simple random node placement strategies

2.1.1 Simple diffusion

The simplest way to deploy sensors is to scatter them from the air (Akyildiz et al., 2002; Wang et al., 2008; Ishizuka and Aida, 2004b). Since all the information must reach the sink, the distribution is centred on the sink. Lightweight sensors have a high air resistance, which randomises


Figure 1 Random node placement strategies taxonomy

their placement; we call the resulting distribution a simple diffusion. This deployment process was modelled by a linear diffusion equation, whose solution is a two-dimensional normal distribution (Ishizuka and Aida, 2004a). The PDF of sensor-positions is:

f(x) = (1/(2πσ²)) h(‖x − c‖), x ∈ ℝ², where h(r) ≜ e^(−r²/(2σ²))  (1)

In equation (1), c ∈ ℝ² is the mean position at ground level, just under the point where the sensors are scattered, and σ² is the variance of the distribution. The variance is determined by various factors (e.g., the shape or weight of the sensors, or the height at which they are released). It should be pointed out that when air currents are strong, sensor-positions are governed by another formula (Ishizuka and Aida, 2004a). Figure 2 shows an example of a simple diffusion of 498 nodes in a RoI of 300 × 300 m. We consider that any random node placement can be realised by repeated simple diffusion with different means and variances. The theoretical basis for this consideration is described in Ishizuka and Aida (2004b).

Figure 2 An example of simple diffusion

2.1.2 Continuous diffusion

If sensors are thrown off an aircraft that flies over the middle of the RoI, most sensors are expected to fall somewhere close to the central line (denoted by B), and several sensors are likely to end up further out. Along the axis of flight, the node distribution is Uniform, while it is Gaussian in the orthogonal direction (Onur et al., 2007). We call the resulting distribution a continuous diffusion. The PDF of sensor-positions is (the x-axis is the axis of flight):

f(x) = 1/|B|, x ∈ B
f(y) = (1/√(2πσ²)) e^(−(y−m)²/(2σ²)), y ∈ ℝ  (2)

where |B| is the length of the axis of flight. Figure 3 shows an example of a continuous diffusion of 497 sensors in a RoI of 600 × 200 m.
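Both models are straightforward to sample. A minimal Python sketch under assumed parameters — the sink at the centre of a 300 × 300 m RoI and σ = 60 m are illustrative values, not figures given by the paper:

```python
import random

def simple_diffusion(n, sink, sigma, seed=0):
    """Equation (1): positions follow a 2-D Gaussian centred on the sink;
    sigma captures drop height, sensor weight, air currents, etc."""
    rng = random.Random(seed)
    return [(rng.gauss(sink[0], sigma), rng.gauss(sink[1], sigma))
            for _ in range(n)]

def continuous_diffusion(n, flight_len, y_mean, sigma, seed=0):
    """Equation (2): Uniform along the flight axis (length |B|),
    Gaussian in the orthogonal direction."""
    rng = random.Random(seed)
    return [(rng.uniform(0.0, flight_len), rng.gauss(y_mean, sigma))
            for _ in range(n)]

# 498 nodes scattered around a sink at the centre of a 300 x 300 m RoI
nodes = simple_diffusion(498, sink=(150.0, 150.0), sigma=60.0)
```

Note that the Gaussian tails mean a few nodes can land outside the RoI; a practical simulator would either clip or resample such positions.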

Figure 3 An example of continuous diffusion

2.1.3 Discontinuous diffusion

In this model, the sensors are dropped by an aircraft that flies over the middle of the RoI. We propose a discontinuous dropping of the sensors, defined as follows: sensors are thrown discontinuously during a single pass over the RoI. In each throw, n sensors are dropped, so that in the end N sensors have been deployed, with N = n × (number of throws). Figure 4 shows an example of a discontinuous diffusion with n = 98 and 5 throws in a RoI of 600 × 200 m. If we increase the number of throws, the discontinuous diffusion converges to the continuous diffusion, as shown in Figure 5.

Figure 4 An example of discontinuous diffusion (5 throws)

Figure 5 An example of discontinuous diffusion (10 throws)

2.2 Compound random node placement strategies

A compound random deployment model is a random deployment model realisable by repeated simple diffusion with different means and variances. Therefore, to obtain a compound random deployment model in practice, we may need several flights over the RoI. In this section, we survey some existing compound random node placement strategies.

2.2.1 Constant diffusion

In many works (Younis and Akkaya, 2008; Gupta and Kumar, 1998), the sensors are placed in such a way that their density is constant. Such a random distribution is called constant diffusion. The PDF of the sensor-positions is given by the following equation:

f(x) = 1/|RoI|, x ∈ RoI  (3)

where |RoI| is the area of the RoI. An example of constant diffusion is illustrated in Figure 6. Here, the number of sensors is 400 and the RoI is 300 × 300 m.

Figure 6 An example of constant diffusion

2.2.2 R-random

This distribution was proposed in Ishizuka and Aida (2004a), where the nodes are uniformly scattered with respect to the radial and angular directions from the sink. The R-random node distribution pattern simulates the effect of an exploded shell and follows the following PDF for sensor-positions in polar coordinates within a distance R from the sink:

f(r, θ) = 1/(2πR), 0 ≤ r ≤ R, 0 ≤ θ ≤ 2π  (4)

An example of R-random placement is illustrated in Figure 7. Here, the number of sensors is 394 and the RoI is 300 × 300 m.

Figure 7 An example of R-random

2.2.3 Power-law

This distribution is characterised by the following two features (Ishizuka and Aida, 2004b). First, the density of sensors is higher near the sink. Second, the degree of the sensors follows a power law. The PDF of the sensor-positions in polar coordinates is:

f(r, θ) = ((α + 1)/(2πR)) (r/R)^α, 0 ≤ r ≤ R, 0 ≤ θ ≤ 2π, −1 ≤ α ≤ 1  (5)

The characteristics of the power-law placement are similar to those of the R-random placement.
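Both polar-coordinate models can be sampled by inverse-transform: for R-random the radial CDF is r/R, and for power-law it is (r/R)^(α+1). A hedged Python sketch (the sink position and R are illustrative parameters):

```python
import math
import random

def r_random(n, sink, R, seed=0):
    """R-random: r and theta each uniform, which concentrates nodes near
    the sink in area terms (the 'exploded shell' pattern)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        r = R * rng.random()                      # radial CDF is r/R
        th = rng.uniform(0.0, 2.0 * math.pi)
        out.append((sink[0] + r * math.cos(th), sink[1] + r * math.sin(th)))
    return out

def power_law(n, sink, R, alpha, seed=0):
    """Power-law: radial CDF (r/R)**(alpha+1), hence by inverse transform
    r = R * U**(1/(alpha+1)).  alpha = 0 reduces to R-random."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        r = R * rng.random() ** (1.0 / (alpha + 1.0))
        th = rng.uniform(0.0, 2.0 * math.pi)
        out.append((sink[0] + r * math.cos(th), sink[1] + r * math.sin(th)))
    return out
```

Setting α > 0 pushes mass outward and α < 0 pulls it toward the sink, which is how the power-law control parameter trades coverage against sink-area density.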


2.2.4 Exponential

In this model, the distribution follows an Exponential law (Zhang and Hou, 2004). The PDF of sensor-positions is:

f(x) = λe^(−λx)  (6)

An example of Exponential diffusion is illustrated in Figure 8. Here, the number of sensors is 436, the RoI is 300 × 300 m and λ = 100.

Figure 8 An example of exponential diffusion

2.2.5 Stensor

In Sinha and Pal (2007), the authors proposed Stensor, a partition-based random node placement algorithm. They assume a rectangular area partitioned into small cells, where each cell cannot host more than one sensor. Sensors are distributed among those cells according to the following PDF:

f(x) = e^(−√λ) λ^(x/2) / x!, x ≥ 0  (7)

An example of Stensor placement is illustrated in Figure 9. Here, the number of sensors is 16 and the RoI contains 1000 cells.

Figure 9 An example of Stensor placement
Source: Sinha and Pal (2007)

3 Related works

In Ishizuka and Aida (2004a), the authors evaluated fault-tolerance against random failures from the random deployment viewpoint. The results show that tolerance against failures is low in constant placement, while the R-random placement has high fault-tolerance. In a more recent paper (Ishizuka and Aida, 2004b), the same authors show that the power-law placement can raise fault-tolerance with appropriately selected control parameters. Gupta and Kumar (1998) studied necessary conditions on the transmission range needed for asymptotic connectivity of constant diffusion. Under the assumption that nodes are deployed as a Poisson point process in a square region, the authors in Zhang and Hou (2004) derive the node density required to maintain full coverage with high probability. Wan and Yi (2006) study how the probability of k-coverage changes with the sensing radius or the number of sensors. They assume that the sensors are deployed as either a Poisson point process or a Uniform point process in a square region. Wang et al. (2008) identify intrinsic properties of the coverage/lifespan of a WSN that follows a two-dimensional Gaussian distribution. They show that a Gaussian distribution can effectively increase the network lifespan. In Vassiliou and Sergiou (2009), the authors discuss a performance study of congestion control algorithms when nodes are deployed under four different topologies, namely: simple diffusion, constant placement, R-random placement and grid placement. The results show that congestion control algorithms are highly affected by the node placement. In a more recent paper (Sergiou and Vassiliou, 2010), the same authors evaluate the energy utilisation performance of a congestion control and avoidance algorithm under four different topologies, namely: grid placement, biased random placement, simple diffusion, and random placement. The obtained results show that the best performance is obtained when the sensors are densely deployed near hot spots such as the sink. Unlike prior efforts, this work is based on an empirical study and provides a holistic view of the intrinsic properties of random node placement strategies.

4 Simulation settings

To capture intrinsic properties of random deployment, we propose to use a set of metrics. We distinguish two classes:



• deployment metrics
• functional metrics.

Deployment metrics are related to sensor-positions in the RoI; they are relatively independent of the protocol stack of the WSN. We consider the following deployment metrics: coverage, connectivity, and connected coverage. On the other hand, functional metrics are related to the network operations and are therefore dependent on its protocol stack. We consider the following significant functional metrics: network lifespan, fault-tolerance, and routing-related metrics.
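The three deployment metrics can be estimated directly from a set of sampled positions. A sketch under simplifying assumptions — coverage is approximated on a sample grid, connectivity uses the unit-disk model with radius Rc, and the grid resolution is an illustrative choice:

```python
import math
from collections import deque

def coverage(nodes, rs, side, grid=60):
    """Fraction of grid sample points within sensing range rs of at least
    one node (approximates the covered fraction of a side x side RoI)."""
    hit = 0
    for i in range(grid):
        for j in range(grid):
            px = (i + 0.5) * side / grid
            py = (j + 0.5) * side / grid
            if any((px - x) ** 2 + (py - y) ** 2 <= rs * rs for x, y in nodes):
                hit += 1
    return hit / (grid * grid)

def sink_component(nodes, sink, rc):
    """Indices of nodes reachable from the sink over hops of length <= rc
    (breadth-first search on the communication graph)."""
    pts = [sink] + list(nodes)
    seen, q = {0}, deque([0])
    while q:
        u = q.popleft()
        for v in range(len(pts)):
            if v not in seen and \
               (pts[u][0] - pts[v][0]) ** 2 + (pts[u][1] - pts[v][1]) ** 2 <= rc * rc:
                seen.add(v)
                q.append(v)
    return [i - 1 for i in sorted(seen) if i > 0]

def connected_coverage(nodes, sink, rs, rc, side):
    """Coverage restricted to sensors that can actually report to the sink;
    isolated sensors cover ground but contribute nothing."""
    reachable = [nodes[i] for i in sink_component(nodes, sink, rc)]
    return coverage(reachable, rs, side)
```

By construction, connected coverage can never exceed plain coverage, which is exactly the gap analysed in Section 5.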


We have carried out a two-stage simulation, assuming that the (x, y) coordinates of the nodes are independent and the deployed nodes are homogeneous. The first simulation stage assesses deployment metrics using our simulation tool. In the second simulation stage, we have conducted extensive simulations using the network simulator ns-2 (Breslau et al., 2000) to analyse functional metrics. We use the well-known SPEED protocol (He et al., 2003) for routing. We repeated each simulation a hundred times with varying seeds; arithmetic means are reported in the graphs and tables. We observed that, for a confidence level of 95%, the simulation results remain within 2–9% of the average. The communication radius (Rc = 40 m), sensing radius (Rs = 18 m) and the characteristics of the protocol stack have been chosen to represent an off-the-shelf device. Table 1 summarises the parameters of the second simulation stage.

Table 1 The second scenario parameters

Parameters                  Values
Channel type                Wireless channel
Radio-propagation model     Two-ray ground
Antenna type                OmniAntenna
Network interface type      Phy/WirelessPhy
MAC type                    IEEE 802.11
Interface queue type        Queue/DropTail/PriQueue
Size of the queue           50 packets
Energy model                Energy model
txPower, rxPower            0.8 W, 0.4 W

5 Deployment metrics analysis

5.1 Coverage analysis

5.1.1 Simple coverage

As depicted in Figure 10(a), we see clearly that the constant diffusion ensures the highest coverage. This can be explained by the fact that the sensing capability of the network is fully utilised in the constant diffusion, where sensors are uniformly distributed in the RoI. The continuous diffusion approaches the constant diffusion in terms of coverage rate; moreover, for more than 600 nodes, the difference between them is 2%. The continuous diffusion obtains the highest coverage rate among the simple random node placement strategies. This is explained by the fact that the continuous diffusion is a combination of the Uniform and the Gaussian distributions. When considering the relationship between the number of deployed nodes and the coverage rate, we notice a threshold in each model beyond which the coverage variation is very small. In the constant diffusion, the coverage rate is 95% for 300 nodes and 100% for 600 nodes. Thus, 300 additional nodes were deployed to cover the remaining 5%. The first conclusion is that full area coverage is very costly; this explains why many researchers currently investigate partial coverage (Li et al., 2011), because it allows a deployment with an acceptable quality of surveillance at a reduced cost. The second conclusion is that the 300 additional nodes inevitably generate redundancies in coverage, which motivates the study of k-coverage.

5.1.2 k-coverage

According to the results depicted in Figure 10(b), if we consider the 2-coverage rate for 100 deployed nodes, we have the following order: the constant diffusion (29%), the continuous diffusion (27%), the simple diffusion (25%), the Exponential diffusion (21%), and finally the R-random diffusion (20%). By deploying 200 nodes, the 2-coverage rate of the R-random exceeds that of the Exponential diffusion by 6%; for the other distributions the order remains the same. For 300 and 400 nodes, the order is unchanged, and we notice that the 2-coverage rate of the R-random quickly approaches that of the simple diffusion, exceeding it at 500 nodes. According to the results depicted in Figure 10(a)–(c), we can rank the random node placement strategies by k-coverage rate as follows:

• constant diffusion
• continuous diffusion
• R-random
• simple diffusion
• exponential diffusion.

5.2 Connectivity analysis

5.2.1 Number of connected components

According to the results depicted in Figure 11(a), the number of deployed nodes is inversely proportional to the number of connected components, where each connected component is a set of nodes that are linked to each other by paths and connected to no additional nodes in the network. For 60 nodes, we have an average of 11, 11, 9, 8, and 8 connected components for the constant diffusion, R-random, continuous diffusion, Exponential diffusion, and simple diffusion, respectively. Starting from 180 nodes, we obtain only one connected component; in this case, all the sensors are connected to the sink. We know from Gupta and Kumar (1998) that a network is strongly connected if the following condition is satisfied:

Rc ≥ a √(C ln(n) / (πn)), C > 1

where Rc is the communication radius, a is the side length of the RoI, C is a constant, and n is the number of nodes. In our case, a RoI of 300 × 300 m, more than 180 nodes, and an Rc of 40 m represent a network with good connectivity, which explains the existence of only one connected component. The Exponential diffusion always gives a higher number of connected components compared to the other distributions, and it never reaches one. We can say that it is the worst model in terms of connectivity.
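Plugging the paper's numbers into this condition gives a quick sanity check (a sketch; C = 1 is used as the boundary value):

```python
import math

def min_comm_radius(a, n, C=1.0):
    """Gupta-Kumar style threshold: Rc >= a * sqrt(C * ln(n) / (pi * n))
    for strong connectivity of n uniformly placed nodes in an a x a region."""
    return a * math.sqrt(C * math.log(n) / (math.pi * n))

rc_needed = min_comm_radius(a=300.0, n=180)
# With Rc = 40 m the deployed radius comfortably exceeds this threshold,
# consistent with observing a single connected component from ~180 nodes on.
```

Note that the threshold shrinks as n grows, so once the network is connected, adding nodes only strengthens connectivity.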


Figure 10 Achieved k-coverage by different strategies: (a) 1-coverage; (b) 2-coverage and (c) 3-coverage

Figure 11 Achieved connectivity by different strategies: (a) the number of connected components; (b) the % of sensors connected to the sink and (c) connected coverage

5.2.2 The rate of nodes connected to the sink

According to Figure 11(b), the rate of nodes connected to the sink grows with the number of deployed nodes. All placement strategies provide an acceptable rate of nodes connected to the sink, especially after the deployment of 80 nodes, when even the worst among them exceeds 60%. This enables us to say that most of the deployed nodes are connected to the sink.

5.2.3 Connected coverage

From Figure 11(c), for 100 deployed nodes, we notice that the connected coverage rate decreases significantly compared to the coverage rate: −14% for the constant diffusion, −15% for R-random, −7.5% for the simple diffusion, −7% for the continuous diffusion, and −11% for the Exponential diffusion. These results show that some sensors cover parts of the RoI but are unable to communicate

their readings; they are isolated sensors and therefore useless. Increasing the number of nodes reduces the gap between coverage and connected coverage. The first-stage simulation results are summarised in Table 2. We use the following notation: Very Bad (−−), Bad (−), Fair (±), Good (+), and Very Good (++).

Table 2 Summary of first stage simulations results

                        Coverage   Connectivity   Connected coverage
Constant diffusion         ++           ++               ++
Continuous diffusion       +            +                +
R-random diffusion         ±            ±                ±
Simple diffusion           −            −                −
Exponential diffusion      −−           −−               −−

6 Functional metrics analysis

6.1 Routing-related metrics analysis

We generate nodes-to-sink CBR traffic with randomly selected sources, and we evaluate routing-related metrics. Table 3 summarises the obtained results. We notice that the three considered models provide a high delivery rate (over 94%). It should be pointed out that, for the same number of sensors, the three models provide a high connected coverage rate.

Table 3 Routing-related simulations results

                             Constant     Simple      R-random
                             diffusion    diffusion
Delivery rate                97.65        96.46       94.29
Consumed energy              171.62       318.72      279.04
Consumed energy per packet   0.046        0.073       0.078
End-to-end delay             214.03       187.19      208.61

As energy is considered a critical resource in WSNs, research in this area tends to minimise energy consumption to increase the network lifespan. The energy consumed by a uniformly deployed network is much less than in the case of a simple diffusion or R-random; however, these totals alone do not determine which model consumes more energy. To get a clearer picture, we calculate the energy consumed per packet. According to the obtained results, we can say that the constant diffusion uses less energy than the other two models to route a packet to the sink. Simple diffusion and R-random consume almost twice the energy of the constant diffusion to route a packet to the destination. This is explained by the fact that each node has more candidate routing neighbours in the case of a constant diffusion; therefore the routing protocol generates few control packets, which leads to reduced energy consumption. The obtained results also show that the simple diffusion has the smallest end-to-end delay; the other two models have almost the same delay. This is explained by the fact that, in the simple diffusion, each node approaching the sink has more candidate routing neighbours, which gives the routing protocol a larger set of routing candidates and thus a better selection, reducing the end-to-end delay.

6.2 Fault-tolerance analysis

In this section, we evaluate fault-tolerance related to detection errors and to transient and global errors. In the simulations, 1600 targets are generated periodically at random positions in the RoI. A target is detected if its position is within the detection range of a sensor. Each sensor that detects a target sends a detection message to the sink. In what follows, we consider a target as detected if it was detected by at least one sensor and at least one detection message was received by the sink.

6.2.1 Fault-tolerance related to detection errors

We vary the probability of detection errors from 0.2 to 0.8; Figure 12 shows the variation of the detection rate as a function of detection errors. The R-random provides a higher detection rate than the other distributions. By increasing the probability of detection errors, we notice that the detection rate decreases, i.e., the detection rate is inversely proportional to the probability of detection errors. At first sight, these results may not seem very logical: since the constant diffusion provides the highest coverage and connected coverage in comparison to the other two models, it should provide the highest detection rate. To explain this result, a very important phenomenon is involved: non-uniform energy consumption, also known as the energy hole problem (Wu et al., 2008). Over time, sensors deplete their energy as they send and route messages toward the sink. Nodes surrounding the sink consume more energy than the others and thus quickly exhaust their energy; this is the energy hole problem. Due to this problem, the signalling messages sent by other nodes reach the sink with a delivery rate that decreases with time, until the sink becomes isolated from the network because of the death of all its neighbouring nodes. Recall that R-random and simple diffusion have a high density around the sink, which makes them more resistant to non-uniform energy consumption; consequently, they ensure a greater detection rate in comparison to the other models.
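The detection-rate experiment can be abstracted into a few lines: targets land uniformly in the RoI, a sensor in range misses each target independently with probability p_err, and, as a deliberate simplification of the paper's ns-2 setup, a report is assumed to reach the sink whenever the detecting sensor is marked as connected:

```python
import random

def detection_rate(nodes, connected, rs, num_targets, p_err, side, seed=0):
    """Fraction of uniformly placed targets reported to the sink.
    `connected` flags which sensors can still reach the sink; routing and
    energy dynamics are abstracted away in this sketch."""
    rng = random.Random(seed)
    reported = 0
    for _ in range(num_targets):
        tx, ty = rng.uniform(0.0, side), rng.uniform(0.0, side)
        for (x, y), ok in zip(nodes, connected):
            in_range = (tx - x) ** 2 + (ty - y) ** 2 <= rs * rs
            # the sensor misses the target independently with probability p_err
            if ok and in_range and rng.random() >= p_err:
                reported += 1
                break
    return reported / num_targets
```

Flipping `connected` entries to False over successive calls mimics the progressive isolation of the sink caused by the energy hole.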

6.2.2 Fault-tolerance related to transient and global errors

Regarding fault-tolerance related to global errors, we simulate the random failure of 12% of the nodes after a time t (after the generation of 200 targets). For fault-tolerance related to transient errors, we simulate the temporary failure of random nodes for specific time intervals. More precisely, we simulate the temporary failure of 5 nodes during the generation of 100 targets; this scenario is repeated after each generation of 200 targets. We compare the obtained results with those of the reference case (no node falls down). Table 4 shows the differences due to the different types of errors compared to the reference case. We notice that the impact of errors on the constant diffusion is much larger than on the other two models. The simple diffusion has greater robustness against the different types of errors, and the R-random provides results that are quite similar to those of the simple diffusion.

Table 4 The differences due to different errors

                     Transient errors (%)   Global errors (%)
Constant diffusion          18.10                 17.60
Simple diffusion             7.10                  9.80
R-random                    10.50                  9.30

141

which ensure the k-coverage of this area. On the other side, when nodes are uniformly distributed, nodes death expands the not-covered area quickly. In terms of network coverage, both models R-random and simple diffusion provide a longer network lifespan than that offered by the constant diffusion.

6.3.2 Network lifespan based on connectivity Ensuring a quite good coverage during network operation does not mean providing a high detection rate, the absence of paths to the sink makes the signalisation of detected targets impossible. We evaluate the network lifespan based on connectivity. As depicted in Figure 13(b), at the beginning, the three models provide a high connectivity rate. Over time, the number of nodes connected to the sink decreases rapidly in the constant diffusion to reach 23% at the end of the simulation. The other two models guarantee a high connectivity rate that exceeds 70%. In terms of network connectivity, both models R-random and simple diffusion provide a longer network lifespan compared to that offered by the constant diffusion.

Figure 12 Fault-tolerance related to detection errors

6.3.3 Network lifespan based on the quality of surveillance

With 130 deployed nodes, the k-coverage rate is reduced (according to the results obtained in the first simulation stage), so the failures of nodes impact the functioning of the network. It reduces the coverage, so fewer targets are detected, and reduces connectivity, so fewer packets are successfully routed toward the sink. The importance of the k-coverage and kconnectivity appears clearly in the case of errors that disrupt the proper functioning of the network. Depending on the application, a redundancy in coverage and connectivity must be provided to ensure fault-tolerance.

6.3 Network lifespan analysis 6.3.1 Network lifespan based on coverage First, we consider network lifespan based on coverage. When setting up the network, the coverage rate of the constant diffusion is superior to those of other models, which coincides with the previous results. After generating a number of targets, nodes’ battery exhaustion affects the network coverage; the constant diffusion is highly affected (see Figure 13(a)). The coverage rate decreases less strongly in R-random and simple diffusion due to the high density of nodes around the sink,

Figure 13(c) illustrates the number of targets reported to the sink versus the number of generated targets (referred to as the quality of surveillance). The curve of the constant diffusion tends to a constant value: the network has reached a saturation mode in which no detection message can reach the isolated sink. In the other two models, the number of reported targets continues to grow, even as some nodes surrounding the sink die. The R-random model provides a better detection rate than the simple diffusion. This can be explained by the fact that, under the simple diffusion, 68.3% of the nodes lie at a distance d ≤ σ from the sink, so the density in the rest of the network is lower than under the R-random diffusion, which places nodes uniformly in the polar coordinates (r, θ). The R-random model thus ensures the longest network lifespan in terms of quality of surveillance, followed by the simple diffusion and, finally, the constant diffusion. The second-stage simulation results are summarised in Table 5.
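The density argument above can be illustrated numerically: because R-random draws the radius itself uniformly, half of its nodes fall within R/2 of the sink, whereas a uniform-area (constant) diffusion places only a quarter of them there. The sketch below contrasts the two samplers; the function names are ours, chosen for illustration.

```python
import math
import random

def r_random(n, R, rng):
    """R-random placement: radius and angle drawn uniformly in (r, theta),
    which biases node density toward the centre (density falls off as 1/r)."""
    pts = []
    for _ in range(n):
        r, t = rng.uniform(0.0, R), rng.uniform(0.0, 2.0 * math.pi)
        pts.append((r * math.cos(t), r * math.sin(t)))
    return pts

def constant_diffusion(n, R, rng):
    """Constant diffusion: uniform over the disc area (r = R * sqrt(u))."""
    pts = []
    for _ in range(n):
        r, t = R * math.sqrt(rng.random()), rng.uniform(0.0, 2.0 * math.pi)
        pts.append((r * math.cos(t), r * math.sin(t)))
    return pts

def fraction_within(nodes, d):
    """Fraction of nodes at distance <= d from the origin (the sink)."""
    return sum(1 for x, y in nodes if math.hypot(x, y) <= d) / len(nodes)

rng = random.Random(7)
R = 150.0
near_rr = fraction_within(r_random(20000, R, rng), R / 2)            # ~0.50
near_cd = fraction_within(constant_diffusion(20000, R, rng), R / 2)  # ~0.25
```

The ~0.50 versus ~0.25 gap is exactly the near-sink concentration that prolongs the connected lifespan of the R-random deployment.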

7 A practical random deployment strategy

According to the results discussed in Sections 5 and 6, on the one hand, the constant diffusion guarantees high coverage and connectivity rates during the deployment of sensors. However, after a while its performance falls to a lower level than that of the other models, such as the simple diffusion or R-random. The constant diffusion quickly reaches the saturation mode, in which no message can be transmitted to the sink located at the centre of the RoI; the sink becomes isolated from the rest of the WSN. On the other hand, the simple diffusion and R-random place a large number of nodes around the sink, which prolongs the network lifespan in terms of detection rate and connected-coverage rate. We therefore propose a hybridisation of the simple diffusion model, which places a large number of nodes around the sink, and the constant diffusion, which provides high coverage and connectivity rates. We chose the simple diffusion rather than the R-random, even though the latter ensures a greater detection rate, because the simple diffusion is characterised by the standard deviation σ, which makes it possible to control the density of the nodes around the sink. We call such a hybridisation the hybrid diffusion.

142

M.R. Senouci et al.

Figure 13 Network lifespan for different strategies: (a) network lifespan based on coverage; (b) network lifespan based on connectivity and (c) network lifespan based on the quality of surveillance

Table 5 Summary of second-stage simulation results

Metric                                                  Constant    Simple      R-random
                                                        diffusion   diffusion
Delivery rate                                           +           +           +
Consumed energy per packet                              +           ±           ±
End-to-end delay                                        ±           +           ±
Fault-tolerance related to detection errors             −           ±           +
Fault-tolerance related to transient errors             −           +           ±
Fault-tolerance related to global errors                −           +           +
Network lifespan based on coverage                      ±           +           +
Network lifespan based on connectivity                  −           +           +
Network lifespan based on the quality of surveillance   −           ±           +

We conduct extensive simulations to evaluate the performance of the hybrid diffusion in comparison to those of other deployment strategies. The hybrid diffusion of N nodes is defined as the deployment of α·N nodes according to the simple diffusion strategy and β·N nodes according to the constant diffusion model, where 0 < α, β < 1 and α + β = 1. Figure 14 shows an example of a hybrid diffusion of 300 nodes in a circular RoI of radius R = 150 m. In what follows, we present the results obtained for a circular RoI of radius R = 150 m, α = 13/20, β = 7/20, and σ = 0.3 × (R/2).

7.1 Deployment metrics analysis

7.1.1 Simple coverage

We vary the number of deployed nodes (N) from 100 to 600. Figure 15 shows the coverage rate of the hybrid diffusion in comparison to those of the simple diffusion, the constant diffusion, and R-random. According to the obtained results, the hybrid diffusion approaches the constant diffusion in terms of coverage and outperforms the other two models.
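The hybrid placement evaluated here can be sketched as follows, under stated assumptions: the sink sits at the origin, the simple diffusion is a two-dimensional Gaussian with per-axis standard deviation σ, and the constant diffusion is uniform over the disc. How Gaussian samples falling outside the RoI are handled is our assumption (they are redrawn), not something the text specifies.

```python
import math
import random

def hybrid_diffusion(n, R, alpha=13/20, sigma=None, seed=1):
    """Hybrid diffusion: alpha*n nodes via simple diffusion (2-D Gaussian
    centred on the sink at the origin, per-axis std sigma) and the rest via
    constant diffusion (uniform over the circular RoI of radius R).
    Out-of-region Gaussian samples are redrawn (an assumption)."""
    rng = random.Random(seed)
    sigma = sigma if sigma is not None else 0.3 * (R / 2)
    n_simple = round(alpha * n)
    nodes = []
    while len(nodes) < n_simple:  # simple diffusion, truncated to the RoI
        x, y = rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)
        if x * x + y * y <= R * R:
            nodes.append((x, y))
    for _ in range(n - n_simple):  # constant diffusion over the disc
        r = R * math.sqrt(rng.random())
        t = rng.uniform(0.0, 2.0 * math.pi)
        nodes.append((r * math.cos(t), r * math.sin(t)))
    return nodes
```

With the defaults matching the paper's parameters (α = 13/20, σ = 0.3 × (R/2)), `hybrid_diffusion(300, 150.0)` reproduces the kind of layout shown in Figure 14.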

Random deployment of wireless sensor networks: a survey and approach

143

Figure 14 An example of hybrid diffusion

7.1.2 The rate of nodes connected to the sink

Figure 16 shows the rate of nodes connected to the sink for the four deployment strategies, namely: hybrid diffusion, simple diffusion, constant diffusion, and R-random. For 60 nodes, the hybrid diffusion ensures better performance than the constant diffusion. We know from the previous results that the constant diffusion gives the lowest rate of nodes connected to the sink; our model improves this metric (by 32% in this case). This is because our model inherits the properties of the simple diffusion, which provides a high rate of nodes connected to the sink even with a reduced number of nodes. As the number of nodes increases, our model gives a rate of nodes connected to the sink similar to those of the other models. Based on these results, we can deduce that, to ensure a high connected-coverage rate (over 90%), our model is the best candidate after the constant diffusion.

7.2 Functional metrics analysis

7.2.1 Fault-tolerance related to detection errors

Regarding fault-tolerance related to detection errors, we compare our hybrid model to the R-random model, which gave the highest detection rate in the previous simulations (Section 6.2.1). According to the results depicted in Figure 17, for small detection-error probabilities both models yield the same results. As the detection-error probability increases, our hybrid model outperforms the R-random model.

7.2.2 Fault-tolerance related to transient and global errors

We compare our hybrid model to the simple diffusion and R-random in terms of fault-tolerance related to transient and global errors, using the same scenarios as in Section 6.2.2. Table 6 shows the differences due to the different types of errors in comparison to the reference case. The performances of our hybrid model are close to those of the other models. Indeed, the performance of the hybrid diffusion is very similar to that of R-random in the case of transient errors. In the case of global errors, the hybrid diffusion achieves the best performance.

Table 6 The differences due to different errors

                     Transient errors (%)    Global errors (%)
Hybrid diffusion            10.8                   8.6
Simple diffusion             7.1                   9.8
R-random                    10.5                   9.3

7.2.3 Network lifespan based on the quality of surveillance

Figure 18 presents the number of targets reported to the sink versus the number of generated targets. The obtained results are compared to those of R-random, which gave the highest detection rate among the other models. We notice that our hybrid model achieves a higher detection rate than the R-random model and ensures a longer network lifespan by providing a better quality of surveillance.

7.2.4 Network lifespan based on the number of alive nodes

Figure 19 shows the evolution of the number of dead nodes as a function of time. In the hybrid model, the number of dead nodes is slightly higher than in R-random. This is because our hybrid model yields the highest detection rate, and therefore a larger number of packets routed to the sink, which exhausts a larger number of nodes, especially those around the sink. According to the definition of the network lifespan based on the number of alive nodes, the hybrid model tends to exhaust a larger number of nodes than the other models, and thus presents a slightly shorter network lifespan.

7.2.5 Network lifespan based on coverage

According to the results depicted in Figure 20, our hybrid model initially provides a lower coverage rate than the constant diffusion and a slightly higher coverage rate than the simple diffusion and R-random. Over time, our hybrid model outperforms the three other models. We also notice that our hybrid model maintains a nearly constant coverage, which is an interesting property.

7.2.6 Network lifespan based on connectivity

Figure 21 shows the rate of nodes connected to the sink as a function of time. We notice that the hybrid model provides the highest rate of nodes connected to the sink, which explains the highest detection rate obtained previously. According to the definition of the network lifespan based on connectivity, the hybrid diffusion ensures the longest network lifespan.
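The rate of nodes connected to the sink can be estimated from node positions with a breadth-first search over the communication graph. The sketch below assumes a unit-disc communication model of range `r_comm` (the paper does not commit to this model here); the function and parameter names are illustrative.

```python
from collections import deque
import math

def connected_rate(nodes, sink, r_comm):
    """Fraction of alive nodes with a multi-hop path to the sink, under an
    assumed unit-disc communication model of range r_comm."""
    if not nodes:
        return 0.0
    pts = [sink] + list(nodes)
    n = len(pts)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r_comm:
                adj[i].append(j)
                adj[j].append(i)
    seen = {0}
    queue = deque([0])
    while queue:  # BFS from the sink (index 0)
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return (len(seen) - 1) / len(nodes)
```

Recomputing this rate after each round of node deaths yields curves of the kind shown in Figure 21.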


Figure 15 Achieved coverage by different strategies

Figure 16 Achieved connectivity by different strategies

Figure 17 Fault-tolerance related to detection errors

8 Conclusion and future work

In this paper, we have presented a survey and taxonomy of random node placement strategies in WSNs. An empirical study has been carried out, yielding a detailed analysis of the intrinsic properties of random node placement in WSNs and providing a holistic view of random deployment properties. The obtained results give helpful design guidelines for using random deployment strategies. Based on our findings, we have further proposed a hybridisation of the simple diffusion model, which places a large number of nodes around the sink, and the constant diffusion, which provides high coverage and connectivity rates. We have also investigated the performance of the hybrid model and shown that such hybridisation ensures better performance. In the future, we plan to assess all the considered metrics using other realistic connectivity models (e.g., the log-normal shadowing path-loss model) and sensor coverage models


Figure 18 Network lifespan based on the quality of surveillance

Figure 19 Network lifespan based on the number of alive nodes

Figure 20 Network lifespan based on coverage

(e.g., the probabilistic model (Onur et al., 2007) and the evidence-based model (Senouci et al., 2012b)). We also plan to investigate other possible hybridisations (e.g., constant diffusion and R-random).


Figure 21 Network lifespan based on connectivity

References

Akyildiz, I.F., Su, W., Sankarasubramaniam, Y. and Cayirci, E. (2002) 'A survey on sensor networks', IEEE Communications Magazine, Vol. 40, No. 8, pp.102–114.

Breslau, L., Estrin, D., Fall, K., Floyd, S., Heidemann, J., Helmy, A., Huang, P., McCanne, S., Varadhan, K., Xu, Y. and Yu, H. (2000) 'Advances in network simulation', Computer, Vol. 33, No. 5, pp.59–67.

Gupta, P. and Kumar, P.R. (1999) 'Critical power for asymptotic connectivity in wireless networks', in McEneaney, W.M., Yin, G. and Zhang, Q. (Eds.): Stochastic Analysis, Control, Optimization and Applications, Systems & Control: Foundations & Applications, Birkhäuser, Boston, pp.547–566.

He, T., Stankovic, J.A., Lu, C. and Abdelzaher, T. (2003) 'SPEED: a stateless protocol for real-time communication in sensor networks', Proceedings of the 23rd International Conference on Distributed Computing Systems, ICDCS '03, Washington, DC, USA, pp.46–55.

Ishizuka, M. and Aida, M. (2004a) 'Performance study of node placement in sensor networks', International Conference on Distributed Computing Systems Workshops, Vol. 5, Los Alamitos, CA, USA, pp.598–603.

Ishizuka, M. and Aida, M. (2004b) 'The reliability performance of wireless sensor networks configured by power-law and other forms of stochastic node placement', IEICE Trans. on Communications, Vol. E87-B, No. 9, pp.2511–2520.

Li, Y., Vu, C., Ai, C., Chen, G. and Zhao, Y. (2011) 'Transforming complete coverage algorithms to partial coverage algorithms for wireless sensor networks', IEEE Transactions on Parallel and Distributed Systems, Vol. 22, pp.695–703.

Onur, E., Ersoy, C., Delic, H. and Akarun, L. (2007) 'Surveillance wireless sensor networks: deployment quality analysis', IEEE Network, Vol. 21, No. 6, pp.48–53.

Senouci, M., Mellouk, A., Oukhellou, L. and Aissani, A. (2011) 'Uncertainty-aware sensor network deployment', IEEE Global Telecommunications Conf. GLOBECOM'11, Houston, Texas, USA, pp.1–5.

Senouci, M., Mellouk, A., Oukhellou, L. and Aissani, A. (2012a) 'Efficient uncertainty-aware deployment algorithms for wireless sensor networks', IEEE Wireless Communications and Networking Conf. WCNC'2012, Paris, France, pp.2163–2167.

Senouci, M., Mellouk, A., Oukhellou, L. and Aissani, A. (2012b) 'An evidence-based sensor coverage model', IEEE Communications Letters, Vol. 16, No. 9, pp.1462–1465.

Sergiou, C. and Vassiliou, V. (2010) 'Energy utilization of HTAP under specific node placements in wireless sensor networks', EW2010, Lucca, Italy, pp.482–487.

Sinha, A. and Pal, B. (2007) 'Stensor: a novel stochastic algorithm for placement of sensors in a rectangular grid', presented in the paper presentation competition 'Eureka' in Kshitij, IIT Kharagpur, available at http://www.princeton.edu/ carch/sinha/stensor.pdf

Vassiliou, V. and Sergiou, C. (2009) 'Performance study of node placement for congestion control in wireless sensor networks', 3rd International Conference on New Technologies, Mobility and Security (NTMS), Piscataway, NJ, USA, pp.173–180.

Wan, P. and Yi, C. (2006) 'Coverage by randomly deployed wireless sensor networks', IEEE/ACM Trans. Netw., Vol. 14, No. SI, pp.2658–2669.

Wang, D., Xie, B. and Agrawal, D. (2008) 'Coverage and lifetime optimization of wireless sensor networks with Gaussian distribution', IEEE Transactions on Mobile Computing, Vol. 7, No. 12, pp.1444–1458.

Wu, X., Chen, G. and Das, S. (2008) 'Avoiding energy holes in wireless sensor networks with nonuniform node distribution', IEEE Trans. Parallel Distrib. Syst., Vol. 19, No. 5, pp.710–720.

Younis, M. and Akkaya, K. (2008) 'Strategies and techniques for node placement in wireless sensor networks: a survey', Ad Hoc Networks, Vol. 6, No. 4, pp.621–655.

Zhang, H. and Hou, J. (2004) 'On deriving the upper bound of α-lifetime for large sensor networks', Proceedings of the 5th ACM International Symposium on Mobile Ad Hoc Networking and Computing, MobiHoc '04, New York, NY, USA, pp.121–132.

Notes

1 The probability of a sensor being within the RoI = {x1 ≤ X ≤ x2, y1 ≤ Y ≤ y2} can be written in terms of the PDF as follows: P(x1 ≤ X ≤ x2, y1 ≤ Y ≤ y2) = ∫_{x1}^{x2} ∫_{y1}^{y2} f(x, y) dy dx.
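When the coordinates X and Y are independent, the double integral in the note factorises into a product of two one-dimensional integrals. As an illustrative check only, the sketch below assumes standard normal marginals (which the note itself does not fix) and evaluates the rectangle probability via the error function.

```python
import math

def norm_cdf(z):
    """CDF of the standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rect_prob(x1, x2, y1, y2):
    """P(x1 <= X <= x2, y1 <= Y <= y2) for independent standard normal
    X and Y: the double integral of f(x, y) factorises into two 1-D CDFs."""
    return (norm_cdf(x2) - norm_cdf(x1)) * (norm_cdf(y2) - norm_cdf(y1))

# P(|X| <= 1) is the familiar one-sigma mass (~0.683), so the square
# region |X| <= 1, |Y| <= 1 carries roughly 0.683**2 of the probability.
p = rect_prob(-1.0, 1.0, -1.0, 1.0)
```

For a non-uniform deployment density such as the simple diffusion, this is how the expected fraction of sensors landing in any rectangular sub-region of the RoI would be computed.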