Floating boundary particle swarm optimization algorithm

Optim Lett. DOI 10.1007/s11590-012-0502-8. Original Paper.

Aleksey Y. Galan · Ronan Sauleau · Artem V. Boriskin

Received: 9 August 2011 / Accepted: 16 May 2012 © Springer-Verlag 2012

Abstract A new modification of the particle swarm optimization (PSO) algorithm is proposed, aiming to make the algorithm less sensitive to the selection of the initial search domain. To achieve this goal, we release the boundaries of the search domain and allow each boundary to drift independently, guided by the number of collisions with the particles involved in the optimization process. The gradual modification of the active search domain range prevents particles from revisiting less promising regions of the search domain and also allows them to explore areas located outside the initial search domain. Over time, the search domain shrinks around the region holding the global extremum. This improves the quality of the final solution and makes the algorithm less sensitive to the initial choice of the search domain ranges. The effectiveness of the proposed Floating Boundary PSO (FBPSO) is demonstrated using a set of standard test functions. New parameters are introduced to control the performance of the algorithm; their optimal values are determined through numerical examples.

Keywords Global optimization · Adaptive algorithms · Particle swarm optimization · Adaptive range · Active search domain · Floating boundary

A. Y. Galan · A. V. Boriskin: Institute of Radiophysics and Electronics of the National Academy of Sciences of Ukraine (IRE NASU), Kharkov, Ukraine
Present address: A. Y. Galan, ATLAS Group, Laboratório de Instrumentação e Física Experimental de Partículas, Lisbon, Portugal
R. Sauleau: Institut d'Electronique et de Télécommunications de Rennes (IETR), UMR CNRS 6164, Université de Rennes 1, Rennes, France
Present address: A. V. Boriskin (corresponding author), IETR, UMR CNRS 6164, Université de Rennes 1, Rennes, France. e-mail: [email protected]

1 Introduction

The particle swarm optimization (PSO) algorithm is known to be a powerful tool for global optimization, e.g. [1–4]. The key concept behind PSO is a swarm of moving particles whose positions and velocities are iteratively updated depending on three factors, namely inertia, personal experience, and the position of the best solution found by the swarm up to the current iteration. Even with only a few particles involved in the search process, such a model provides a good balance between global exploration during the initial stage and local search in the neighborhood of the best found solution in the final stage, when all particles gather around a potential global extremum.

In a classical PSO model, the swarm moves inside a multi-dimensional search domain (or design space) whose dimensionality and size are defined by the number of optimization parameters and the ranges of their variation, respectively. The search domain range is defined at the initial step and remains unchanged during optimization. In some cases, the space is surrounded by non-transparent boundaries whose properties are defined in line with mechanical analogies [5]. The latter approach is often used for computationally heavy problems such as electromagnetic synthesis [6,7].

The general scenarios in which classical PSO may experience difficulties are: (i) the design space is large and its landscape is defined by an objective function with many local extrema, or (ii) the global extremum is located outside the initial design space. Both scenarios often arise in computational electromagnetics, because real-world problems are characterized by multi-parameter, highly oscillating objective functions with difficult-to-predict behavior, e.g. [7–10]. This letter presents a simple way to overcome these difficulties by introducing a floating boundary principle.
The proposed Floating Boundary PSO (FBPSO) handles the aforementioned difficulties via a gradual modification of the design space, guided by the collective behavior of the swarm. More precisely, FBPSO manipulates the search domain boundaries, aiming first to encircle the global extremum and then to shrink the domain as much as possible in order to boost the optimization process. Note that such an 'active search domain' strategy differs from the adaptive schemes typically implemented in PSO, e.g. [2,3,11,12]; the latter only affect the algorithm control parameters, such as the weights in the velocity formula, and do not vary the search domain ranges. The main advantage provided by FBPSO is flexibility in selecting the ranges of the initial search domain. Unlike the classical PSO, the active search domain strategy implemented in FBPSO enables one to find extrema even if they lie outside the initial search domain. Because of this, the algorithm becomes less dependent on the 'initial conditions', namely the size and position of the initial search domain. As a result, a speed-up and better stability in finding the global extrema can be achieved.


A possible weak point of FBPSO is that the modification of the design space is an iterative process carried out based on information gathered during a short period of time (usually a few iterations). Because of this, the stochastic nature of PSO may negatively affect the decision-making mechanism; this can be especially critical in the initial stage, when particles are seeded randomly. In some cases, the search process can be misguided by a premature shrinking or displacement of the search domain boundaries, erroneously triggered by the limited information available. To prevent such an unwanted effect, threshold control parameters can be introduced, as explained in Sect. 3.

To the best of our knowledge, there exists only one paper, recently published by Kitayama et al. [13], which describes an adaptive range PSO (ARPSO) capable of directly modifying the search domain. The main differences between the proposed FBPSO and the ARPSO reported in [13] are: (i) a different trigger mechanism for the active search domain modification, based on a collision counter instead of the mean and standard deviation of the design variables; (ii) inertness of the active domain modification, provided by the introduction of threshold parameters; and (iii) simplicity of implementation.

This letter provides a detailed description of the FBPSO algorithm, first presented in [14], and defines the optimal values of its control parameters. The letter is organized as follows. The idea behind FBPSO and its implementation details are discussed in Sects. 2 and 3, respectively. The numerical tests are reported in Sect. 4, including an illustration of the adaptive range optimization concept, the definition of the optimal values for the control parameters, and a benchmarking study. The conclusions are drawn in Sect. 5.

2 Idea behind FBPSO

Consider a PSO with a global neighborhood [2] applied to find the global minimum of a real-valued objective function f(x), ∀x ∈ S, where S ⊂ ℝ^N.
Let us also assume that the search domain is surrounded by a non-transparent boundary whose properties are defined by some boundary condition (BC) [5]. According to [2], any particle moving in the defined N-dimensional search domain S is characterized by two N-dimensional vectors, namely its coordinate x = (x_1, x_2, …, x_N) and velocity v = (v_1, v_2, …, v_N). The best previously visited position of a particle is denoted as p = (p_1, p_2, …, p_N), and the best position found by the swarm is denoted as g = (g_1, g_2, …, g_N). The velocity and position of each particle at any iteration are determined as follows:

$$v_n^{i+1} = w\,v_n^{i} + c_1 r_1 \left(p_n^{i} - x_n^{i}\right) + c_2 r_2 \left(g_n^{i} - x_n^{i}\right), \qquad x_n^{i+1} = x_n^{i} + v_n^{i+1}, \tag{1}$$

where w is the inertia weight; c_1, c_2 are two positive acceleration constants, called the cognitive and social parameters, respectively; r_1, r_2 are random numbers uniformly distributed in [0, 1]; i = 0, 1, 2, … is the iteration number; and n = 1, 2, …, N is the dimension number.

During optimization, particles move inside a given multi-dimensional search domain. Aiming to find a global extremum among multiple local ones, the particles


revisit different regions of the search domain many times, including those where the probability of finding an optimal solution is low. During the initial stage, this helps improve the uniformity of the search domain sampling, which is needed to identify the basins of the optimal solutions. In the later stages, this may lead to stagnation due to wasting effort on exploring unpromising regions. To boost the optimization process, the active search domain concept has been proposed [13,14]. In [13], the adaptive mechanism is implemented based on the mean and average deviation of each design variable. By contrast, in the proposed FBPSO the adaptive mechanism is realized in a much simpler manner.

While moving inside the search domain, particles periodically hit the boundaries. The presence of the social term in the velocity formula affects the movement of all particles. Indeed, at any step the global best (GB) position serves as a center of gravity for the particles. Although particles do not move directly towards GB, thanks to the inertia term and stochastic factors, the movement of all particles is affected by the attraction from GB. Thus a "collective response" of the swarm to the objective function landscape appears. In particular, if GB is located close to a boundary, a larger number of collisions with this boundary occur. The opposite is also valid: particles rarely reach boundaries located far from the current GB. Thus, a difference in "pressure" on different boundaries appears, indirectly revealing features of the objective function landscape. This difference can be used as a guideline for the gradual modification of the search domain ranges, aimed at encircling the area holding a potential global extremum.

3 Implementation details

The floating boundary (FB) principle can easily be implemented in any PSO variant. In this letter, we aim to illustrate the new capabilities offered by FB and do so using the example of a classical PSO.
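Before turning to the floating-boundary additions, the classical update of Eq. (1) can be sketched as follows (a minimal Python illustration with hypothetical names, not the authors' code):

```python
import random

def pso_step(x, v, p, g, w=0.7, c1=2.0, c2=2.0):
    """One velocity/position update per Eq. (1) for a single particle.

    x, v: current position and velocity; p, g: personal-best and global-best
    positions. All are lists of length N (one entry per dimension).
    """
    new_v = []
    for n in range(len(x)):
        r1, r2 = random.random(), random.random()  # uniform in [0, 1]
        new_v.append(w * v[n] + c1 * r1 * (p[n] - x[n]) + c2 * r2 * (g[n] - x[n]))
    new_x = [xn + vn for xn, vn in zip(x, new_v)]
    return new_x, new_v
```

With zero velocity and both attractors at the current position, the particle stays put, which provides a convenient sanity check.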
Implementation of the FB principle in other PSO variants, and the relevant benchmarking studies, are beyond the scope of this letter. The FBPSO algorithm can be built from a classical PSO by adding the following modifications: (i) a counter for the number of collisions with each domain boundary, and (ii) a subroutine that updates the positions of the search domain boundaries depending on the number of collisions experienced by each boundary during a given number of iterations, P. The former can be combined with the subroutine used for updating particle positions after collisions with boundaries, whereas the latter can be added as a separate independent block (Table 1). A flowchart of the FBPSO is shown in Fig. 1.

Note that FBPSO is compatible with any type of BC. Moreover, no specific action is required to control a particle that finds itself outside the search domain as a result of a boundary shift. Indeed, during the next iteration such a particle is automatically treated in the same manner as if it had just crossed this boundary; its new position is determined according to the predefined BC. To control the performance of FBPSO, four new parameters are introduced:

– Period of boundary position updates, P [number of iterations],
– Threshold values: lower T1 and upper T2 [number of collisions per iteration],
– Transformation coefficient, α. This coefficient defines the mobility of the active search domain boundaries according to the following formula:


Table 1 FBPSO algorithm

  Step 1.  Initialize the swarm (positions and velocities).
  Step 2.  While (stopping criterion not met) do, i = i + 1
  Step 3.    Do (particle number, k = 1 … K)
  Step 4.      Do (dimension number, n = 1 … N)
  Step 5.        Update the velocity and position of the k-th particle in the n-th dimension
  Step 6.        Check whether the new position of the particle belongs to the allowed search domain range:
                 If (.not. x_{i,n} ∈ [x_{i,n}^min, x_{i,n}^max]) then
                   update the particle position by applying the BC
                   increment the collision counter of the corresponding boundary by 1
                 Endif
  Step 7.      Enddo, n
  Step 8.      Compute the cost function value for the new position
  Step 9.      Compare with the personal and global best values; update if needed
  Step 10.   Enddo, k
  Step 11.   Check whether it is time to update the boundary positions:
             If (mod(i, P) = 0) call subroutine UPDATE_BOUNDARY
  Step 12. Endwhile, i
  Step 13. Report the results.

  Subroutine UPDATE_BOUNDARY
    Do (dimension number, n = 1 … N)
      Check the lower boundary: If (threshold exceeded) update x_{i,n}^min
      Check the upper boundary: If (threshold exceeded) update x_{i,n}^max
    Enddo, n

Modifications with respect to the classical PSO are emphasised in bold.

$$x_{i+1,n}^{\min} =
\begin{cases}
x_{i,n}^{\min} + \alpha\left(x_{i,n}^{\max} - x_{i,n}^{\min}\right), & M_n^{\mathrm{Low}}/P \le T_1,\\
x_{i,n}^{\min} - \alpha\left(x_{i,n}^{\max} - x_{i,n}^{\min}\right), & M_n^{\mathrm{Low}}/P \ge T_2,
\end{cases}
\qquad \text{(lower boundary)} \tag{2a}$$

$$x_{i+1,n}^{\max} =
\begin{cases}
x_{i,n}^{\max} + \alpha\left(x_{i,n}^{\max} - x_{i,n}^{\min}\right), & M_n^{\mathrm{Up}}/P \ge T_2,\\
x_{i,n}^{\max} - \alpha\left(x_{i,n}^{\max} - x_{i,n}^{\min}\right), & M_n^{\mathrm{Up}}/P \le T_1,
\end{cases}
\qquad \text{(upper boundary)} \tag{2b}$$

where x_{i,n}^min and x_{i,n}^max are the lower and upper bounds of the search domain range, respectively; n is the dimension number; i is the iteration number; and M_n^{Low}, M_n^{Up} are the total numbers of collisions with the lower and upper boundaries of the search domain detected during P iterations. Note that each boundary moves independently; thus the search domain can drift (without changing size and shape) and/or transform (by changing size or shape). Our preliminary studies show that P = 10 and α = 0.1 fit various test functions and optimization scenarios well. These values are used for all simulations presented in Sect. 4.
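A hedged Python sketch of the per-dimension boundary update rule of Eq. (2), with hypothetical names (the actual implementation may differ):

```python
def update_boundary(lo, hi, m_low, m_up, P=10, alpha=0.1, T1=1.0, T2=2.0):
    """Update one dimension's bounds per Eq. (2).

    m_low, m_up: collisions with the lower/upper boundary counted over the
    last P iterations; alpha: transformation coefficient; T1, T2: lower and
    upper thresholds (collisions per iteration). Both rules use the range
    measured before any move, so the two updates stay independent.
    """
    size = hi - lo
    # Eq. (2a), lower boundary: shrink inward if rarely hit, expand if hit often.
    if m_low / P <= T1:
        lo = lo + alpha * size
    elif m_low / P >= T2:
        lo = lo - alpha * size
    # Eq. (2b), upper boundary: expand outward if hit often, shrink if rarely hit.
    if m_up / P >= T2:
        hi = hi + alpha * size
    elif m_up / P <= T1:
        hi = hi - alpha * size
    return lo, hi
```

For example, a boundary pressed from below expands downward while a quiet upper bound shrinks inward; a collision rate between T1 and T2 leaves a bound untouched.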


Fig. 1 Flowchart of the FBPSO algorithm. The insets illustrate the change of a particle trajectory due to a damping BC and a shift of the search domain boundaries
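The collision handling sketched in the flowchart might look as follows in Python; the random damping factor and all names are assumptions, not necessarily the exact hybrid damping BC of [15,16]:

```python
import random

def damping_bc(xn, vn, lo, hi, counters, n):
    """Return a corrected (position, velocity) for dimension n and count the hit.

    If the coordinate left [lo, hi], place it on the violated boundary and
    reverse its velocity, damped by a random factor (a sketch of a damping BC;
    the exact rule of the hybrid damping BC [15,16] may differ). counters maps
    (n, 'low'/'up') to the collision counts used by the FBPSO boundary update.
    """
    if xn < lo:
        counters[(n, 'low')] = counters.get((n, 'low'), 0) + 1
        return lo, -vn * random.random()
    if xn > hi:
        counters[(n, 'up')] = counters.get((n, 'up'), 0) + 1
        return hi, -vn * random.random()
    return xn, vn  # inside the domain: nothing to do
```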

Fig. 2 a The cross-section of the test functions defined by Eqs. (3–5); the inset provides a zoom on the global extrema. b Schematic illustration of the different optimization scenarios used for the FBPSO assessment. The color rectangles show the initial positions of the search domain boundaries. The star mark depicts the position of the global extremum (Color figure online)

4 Assessment of the FBPSO algorithm

4.1 Illustration of the adaptive range optimization strategy

The performance of the FBPSO algorithm is assessed using the following test functions, whose global extrema are located at x = 0 (Fig. 2):

$$F_1 = 10\left(N - \sum_{n=1}^{N} \frac{\sin x_n}{x_n}\right), \tag{3}$$

$$F_2 = 10N + \sum_{n=1}^{N}\left(|x_n| - 10\cos\sqrt{10\,|x_n|}\right), \tag{4}$$

$$F_3 = \sum_{n=1}^{N}\left(x_n^2/10 + 5\,|x_n|\left(1 - \cos(\pi x_n/5)\right)\right), \tag{5}$$

where N is the number of parameters under optimization. The performance is assessed under the following three scenarios (Fig. 2b):

Scenario A (x ∈ [−100, 100]): GB is located in the center of the search domain,
Scenario B (x ∈ [−20, 180]): GB is located close to the domain boundary,
Scenario C (x ∈ [20, 220]): GB is located outside the initial search domain.

As a reference solution, we use a classical PSO with a global neighborhood and a linearly decreasing inertia weight [4]. The velocity formula used in both the PSO and FBPSO algorithms is defined by Eq. (1). The boundary condition implemented in both algorithms is the so-called hybrid damping boundary condition, known to be the most effective for a wide range of optimization problems [15,16]. Finally, the maximum allowed velocity is defined for both algorithms as

$$v_n^{\max} = \beta\left(x_{0,n}^{\max} - x_{0,n}^{\min}\right), \tag{6}$$

where β is the maximum velocity constant and x_{0,n}^{max,min} are the upper and lower bounds of the initial search domain in the corresponding dimension. For the preliminary simulations (Sects. 4.1, 4.2), we use β = 0.5, in line with the recommendation given in [4] for the classical PSO; the impact of this parameter on the FBPSO performance is investigated in Sect. 4.3.

The active search domain concept is illustrated in Fig. 3, using a 2-D (N = 2) test function defined by Eq. (5). The optimization is carried out under scenario C (Fig. 2b). The swarm consists of 5 particles, the stopping criterion is defined as 100 iterations, the positions of the search domain boundaries are updated every ten iterations (P = 10), and the threshold constants are arbitrarily set to T1 = 1 and T2 = 2. As we can see in Fig. 3a, the optimization process runs non-monotonically; in particular, it has a stagnation period of about 60 iterations. The number of collisions with each boundary counted during the update periods is shown in Fig. 3b. Let us consider in detail the first boundary update event, indicated by mark #1.
As we can see, the number of collisions with the upper boundaries is below the threshold T1; thus the upper bounds of the search domain range must be reduced, according to Eq. (2b). At the same time, the number of collisions detected for the lower boundaries exceeds the threshold T2; thus the lower boundaries must be expanded, according to Eq. (2a). The corresponding modification of the search domain range is depicted in Fig. 3c. During the next few steps (2–6), the upper boundaries still experience very few collisions and thus continue to shrink. By contrast, the situation for the lower boundaries varies. For instance, for the variable x1, the lower boundary expands twice, stays unchanged for another two steps, and finally expands one more time, whereas for the variable x2, the lower boundary expands three times in a row and then remains unchanged. These steps are depicted in Fig. 3d. The later transformations of the domain are illustrated in Fig. 3e, f.
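For reference, the test functions of Eqs. (3–5) can be coded directly (a Python sketch; the sinc term in F1 is given its limit value 1 at x_n = 0, and the square root under the cosine in F2 is our reading of the printed formula, so treat it as an assumption):

```python
import math

def f1(x):
    """Eq. (3): F1 = 10*(N - sum sin(x_n)/x_n); minimum 0 at x = 0."""
    return 10 * (len(x) - sum(math.sin(xn) / xn if xn else 1.0 for xn in x))

def f2(x):
    """Eq. (4): F2 = 10*N + sum(|x_n| - 10*cos(sqrt(10*|x_n|)))."""
    return 10 * len(x) + sum(abs(xn) - 10 * math.cos(math.sqrt(10 * abs(xn)))
                             for xn in x)

def f3(x):
    """Eq. (5): F3 = sum(x_n^2/10 + 5*|x_n|*(1 - cos(pi*x_n/5)))."""
    return sum(xn ** 2 / 10 + 5 * abs(xn) * (1 - math.cos(math.pi * xn / 5))
               for xn in x)
```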



Fig. 3 Illustration of the active search domain optimization strategy: a optimization process in terms of gbest objective function value; b number of collisions detected for each boundary: boundary update period P = 10 iterations, threshold constants are T1 = 1, T2 = 2; c–f iterative transformation of the active search domain. The 2-D objective function is defined by Eq. (4); its landscape is shown by grey contour lines. The initial position of the search domain boundaries is shown by a black rectangle; it corresponds to scenario C (see Fig. 2b). The color rectangles depict positions of the search domain boundaries at the selected iterations. In addition, the square marks indicate corners of the search domain at each intermediate step (Color figure online)

As can be seen, in the final stage the boundaries tightly surround the global extremum. Because of this, the optimization process is boosted, even though it had been stagnating for a certain period of time (see Fig. 3a).

4.2 Definition of the optimal values for the FBPSO control parameters

Although FBPSO is based on the same operation principles as a classical PSO, the particles' movement serves a twofold purpose: (i) searching for the global extremum and (ii) pushing the boundaries of the search domain. Thus, a correction of the control parameters (the velocity weight coefficients) may be required. To determine the optimal values of the control parameters, and also to estimate the sensitivity of FBPSO to deviations of the control parameters, we consider three optimization problems, with objective functions defined by Eqs. (3–5) with N = 20. To compensate for the large dimensionality of the test problems, we use a swarm of 40 particles and define the stopping criterion as 500 iterations. The results obtained for each combination of parameters are averaged over 100 trials.

The simulations are carried out in the following order. First, we determine the optimal initial and final values of the inertia weight; during the simulations it linearly



Fig. 4 Final solution and mean average deviation versus acceleration constants: a PSO, b FBPSO. The objective function is defined by Eq. (3) with N = 20. The inertia weights and the maximum velocity constant used for both algorithms are w1 = 0.5, w2 = 1.5, β = 0.5. The threshold constants used for FBPSO are T1 = T2 = 2. The data is averaged over 100 trials

decreases in the following range w ∈ [w2 , w1 ]. Then, we determine the optimal values of the acceleration constants, c1 and c2 . Finally, we estimate the impact of the threshold constants, T1 and T2 .



Fig. 5 Final solution and mean average deviation versus inertia weight: a PSO, b FBPSO. The objective function is defined by Eq. (3) with N = 20. The acceleration constants are selected based on data shown in Fig. 4: (PSO) c1 = 2.3, c2 = 1.5; (FBPSO) c1 = 2.5, c2 = 1.8. Other parameters are the same as in Fig. 4. The black color is used to indicate the forbidden areas of parameter combinations (i.e. w1 > w2 )

Figures 4 and 5 depict results obtained for the objective function F1 when optimized under the three aforementioned scenarios (see Fig. 2b). The pairs of contour graphs shown in Fig. 4 are for the average final solution (FS) and its mean average


deviation (MAD) found using each set of acceleration constants (c1, c2). At this step, the inertia weight and maximum velocity constant are defined in line with general recommendations: w ∈ [1.5, 0.5] and β = 0.5. The impact of the inertia weight is then studied in Fig. 5. In both cases, the threshold constants are set equal to an arbitrary value, T1 = T2 = 2. The dashed white lines indicate the values of the control parameters that best fit all the optimization scenarios.

As we can see, for the selected set of control parameters, the classical PSO outperforms FBPSO in scenarios A and B. At the same time, it is obviously not applicable in scenario C, when the global extremum is located outside the initial search domain. On the contrary, FBPSO demonstrates stable performance in all three scenarios. Note that this test problem is quite tricky for FBPSO because of the specific 'flat' landscape of the selected test function (see Fig. 2). Indeed, under such conditions the swarm usually requires more time to estimate the overall slope of the objective function landscape and to finally detect the basin of the global extremum. Because of this, in the initial stage the drift direction of the search domain may be governed by the uniformity of the swarm seeding rather than by its response to the objective function landscape. As will be shown in Sect. 4.3, a remedy to this problem can be found in tuning the values of the maximum velocity and threshold constants. In addition, we expect that this weak point can be eliminated by introducing a non-uniform schedule of boundary position updates with a longer initial update period (out of scope for this letter).

Finally, it is important to highlight once again the main advantage gained by releasing the search domain boundaries: FBPSO is able to identify extrema located outside the initial search domain, in contrast to all other optimization algorithms working with non-transparent boundaries.
This capability, as well as the stability of the FBPSO performance when applied to other test problems, is further illustrated in Figs. 6, 7, 8, 9. The peculiarities of the test functions studied there are the following: a gentle slope and a very sharp global extremum for F2, and a steep slope with many local minima for F3. As one can see in Figs. 6 and 7, for the F2 test function the two algorithms demonstrate very similar performance, with a minor advantage of PSO in scenario A counterbalanced by the very stable performance of FBPSO observed in all three scenarios. In addition, FBPSO is less sensitive to the inertia weight values (Fig. 7). The results obtained for F3 (Figs. 8, 9) are very similar to those presented in Figs. 6 and 7; they evidence that FBPSO can successfully deal with multi-extremum functions as well.

An important comment on Figs. 4, 5, 6, 7, 8, 9 is the following: FBPSO is rather sensitive to the selection of the acceleration coefficients. In particular, it is observed that the best performance is achieved when c1 > c2; this corresponds to enhanced gravity towards the personal best solution. Note that the same recommendation is valid for the classical PSO, although this result differs from the general recommendation of using equal weights, c1 = c2 = 2, e.g. [4]. The average best values of the control parameters for both algorithms are summarized in Table 2.

Finally, we study the impact of the threshold constants on the FBPSO performance when applied to the selected test functions under the same three scenarios. The same control parameter values are used for all simulations; they are defined as shown in Table 2.



Fig. 6 The same as in Fig. 4 for the objective function defined by Eq. (4)

As one can see in Fig. 10, the FBPSO algorithm is quite sensitive to the threshold values. Moreover, the optimal threshold values vary for different test problems. In general, the lower threshold should be defined as T1 ≤ 3 in order to prevent fast premature shrinking of the search domain (the lower the T1 value, the more rarely shrinking occurs). The upper threshold can be selected in the range T2 ∈ [1, 4]; this helps to



Fig. 7 The same as in Fig. 5 for the objective function defined by Eq. (4). The acceleration constants are selected as shown in Fig. 6: (PSO) c1 = 1.8, c2 = 1.5, (FBPSO) c1 = 2.2, c2 = 1.2. The white color is used when the function value is higher than the maximum value of the scale bar

control the speed of the search range expansion. Note that in the current study the threshold constants are defined as the number of collisions detected per iteration. To soften this rule, the thresholds can be defined with respect to the total number of collisions



Fig. 8 The same as in Fig. 4 for the objective function defined by Eq. (5)

per period. This can be especially useful for problems requiring low threshold values, such as F1 (Fig. 10a). If one deals with an unknown objective function with hard-to-predict behavior, the threshold constants can be selected as T1 = 1 and T2 = 2.



Fig. 9 The same as in Fig. 5 for the objective function defined by Eq. (5). The acceleration constants are selected according to Fig. 8: (PSO) c1 = 2.2, c2 = 1.2, (FBPSO) c1 = 2.2, c2 = 1.1

4.3 Benchmarking between FBPSO and classical PSO

In this section, a benchmarking study between the classical PSO and FBPSO is reported for six test problems, namely the three defined by Eqs. (3–5) and three additional


Table 2 The recommended values for the control parameters of the PSO and FBPSO algorithms, determined from the data shown in Figs. 4–9

          c1     c2     w1     w2
  PSO     2.1    1.4    0.5    1.3
  FBPSO   2.3    1.4    0.5    1.3


Fig. 10 Final solution and mean average deviation produced by FBPSO algorithm versus threshold parameters. The identical control parameters are used for all simulations (see Table 2). The black color is used to shade the forbidden areas of parameter combinations (i.e. T1 > T2 )

ones defined by Eqs. (7–9). In all cases, the dimensionality of the problems equals ten (N = 10), the swarm consists of 20 particles, and the stopping criterion is defined as 500 iterations. The results are averaged over 500 trials.


Fig. 11 The 2-D maps of the objective functions defined by Eqs. (3–5) and Eqs. (7–9). The color rectangles depict the boundaries of the search domains used for different optimization scenarios (Color figure online)

$$F_4 = \sum_{n=1}^{N}\left(x_n^2/6 - 10\cos x_n + 10\right) \qquad \text{(Rastrigin)} \tag{7}$$

$$F_5 = 22.71 - 20\exp\left(-0.2\left(\frac{1}{N}\sum_{n=1}^{N} x_n^2\right)^{0.5}\right) - \exp\left(\frac{1}{N}\sum_{n=1}^{N}\cos x_n\right) \qquad \text{(Ackley)} \tag{8}$$

$$F_6 = \frac{1}{N^2}\sum_{n=1}^{N-1}\left(\left(1 - a x_n\right)^2 + 10\left(a x_{n+1} - (a x_n)^2\right)^2\right), \quad a = 0.1 \qquad \text{(Rosenbrock)} \tag{9}$$

The 2-D landscapes of the test functions and the optimization scenarios used for the benchmarking studies are shown in Fig. 11. For FBPSO, the scenarios once again correspond to the cases when the global extremum is located inside (Scenario I) or outside (Scenario IIb) the initial search domain, whereas for PSO we select two search domains of different sizes (Scenarios I and IIa). These scenarios imitate a situation in which one deals with an unknown objective function and has to choose between FBPSO, which can be applied with a smaller search domain having arbitrarily selected ranges, and PSO, which must be applied with a larger search domain range in order to (hopefully) ensure that the global extremum lies inside this range. In this way, we assess the advantages provided by FBPSO thanks to its less strict requirements on the initial optimization conditions.
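The additional benchmark functions of Eqs. (7–9) can likewise be coded directly (a Python sketch of our reading of the printed formulas):

```python
import math

def f4(x):
    """Eq. (7), Rastrigin-type: sum(x_n^2/6 - 10*cos(x_n) + 10)."""
    return sum(xn ** 2 / 6 - 10 * math.cos(xn) + 10 for xn in x)

def f5(x):
    """Eq. (8), Ackley-type; the constant 22.71 approximates 20 + e."""
    n = len(x)
    return (22.71
            - 20 * math.exp(-0.2 * math.sqrt(sum(xn ** 2 for xn in x) / n))
            - math.exp(sum(math.cos(xn) for xn in x) / n))

def f6(x, a=0.1):
    """Eq. (9), scaled Rosenbrock with a = 0.1; zero where a*x_n = 1."""
    s = sum((1 - a * x[i]) ** 2 + 10 * (a * x[i + 1] - (a * x[i]) ** 2) ** 2
            for i in range(len(x) - 1))
    return s / len(x) ** 2
```

Note that F5 does not vanish exactly at x = 0 because 22.71 only approximates 20 + e, and the minimum of F6 sits at a·x_n = 1 rather than at the origin.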


Fig. 12 Average FS value found by PSO and FBPSO algorithms under scenarios B and C versus maximum velocity constant. The objective functions are those shown in Fig. 11 with N = 20. The threshold constants used for all runs of FBPSO are defined as T1 = T2 = 2

For this comparison, we use the control parameters defined in Table 2. In addition, we increase the maximum velocity constant up to β = 1.0; this is done based on the data reported in Fig. 12. It is important to notice that, as revealed in Fig. 12, FBPSO is more sensitive to the maximum velocity constant value.¹ In particular, in all cases the best performance of FBPSO is achieved for β ∼ 1, whereas for PSO it is observed for a twice smaller value, β ∼ 0.5. Note that a proper selection of the β constant makes the difference between the results provided by PSO (Figs. 4, 5, 6, 7, 8, 9, 10) and FBPSO (Fig. 12), applied in both scenarios B and C, very small. To be consistent with the results shown in Sect. 4.2, the threshold constants are selected as T1 = T2 = 2.

Finally, Table 3 shows the statistical data obtained for the test problems defined in Fig. 11, using the classical PSO and FBPSO algorithms. As we can see, the correction of the maximum velocity constant improves the performance of FBPSO for standard optimization scenarios with gbest inside the initial search domain (Scenario I), so both algorithms demonstrate quite similar performance. Note that FBPSO always explores a wider domain to ensure that there are no better optima in the neighborhood of the initial search domain; therefore, some minor loss in the final solution quality is possible. At the same time, when applied under uncertain initial conditions (Scenarios IIa and IIb, when it is hard to predict the position of gbest, which is common for many engineering

¹ To facilitate comparison with the data reported in Sect. 4.2, the impact of the maximum velocity constant is investigated for the scenarios shown in Fig. 2.


Table 3 The final solutions produced by the PSO and FBPSO algorithms when applied to the problems defined in Fig. 11 (N = 10)

              Scenario I                          Scenario IIa (PSO) and IIb (FBPSO)
              Best    Worst   Average  MAD       Best    Worst   Average  MAD
  F1
    PSO       2E–8    9.54    0.15     0.26      3E–7    19.3    2.33     3.58
    FBPSO     2E–5    9.84    0.18     0.34      1E–5    19.6    1.46     2.49
  F2
    PSO       27.1    38.7    34.5     1.50      24.9    58.5    36.7     3.35
    FBPSO     20.9    47.1    34.3     1.82      20.3    50.3    34.6     1.78
  F3
    PSO       3E–9    6E–5    1E–6     1E–6      1E–7    39.8    1.35     2.34
    FBPSO     2E–6    9.91    0.19     0.39      4E–6    9.90    0.26     0.50
  F4
    PSO       2E–7    22.7    0.34     0.65      1E–5    25.6    5.70     4.49
    FBPSO     4E–5    19.1    1.98     2.88      8E–5    19.1    1.77     2.62
  F5
    PSO       1E–4    0.01    1E–3     7E–4      3E–4    0.02    4E–3     2E–3
    FBPSO     2E–3    0.09    0.02     0.01      1E–3    0.09    0.02     0.01
  F6
    PSO       4E–6    0.02    5E–3     2E–3      2E–6    0.04    6E–3     4E–3
    FBPSO     6E–6    0.05    0.01     0.01      2E–4    0.05    0.01     0.01

The control parameters of both algorithms are selected according to Table 2. Other parameters: T1 = 1, T2 = 2, β = 1.0. Swarm size K = 20. The data is averaged over 500 trials

problems), FBPSO efficiently handles this challenge, whereas the quality of the PSO solutions rapidly degrades with the increase of the search domain size. Furthermore, we remind once again that in all cases when gbest occurs outside the initial search domain, the classical PSO and all its variants are unable to find gbest in principle, whereas the proposed FBPSO can handle this problem with almost no penalty on the solution quality. This capability constitutes the main added value of the proposed FBPSO.

5 Conclusions

A new variant of the adaptive range PSO algorithm has been proposed based on the floating boundary principle. As demonstrated through numerical examples, introduction of the ‘floating boundaries’ provides the classical PSO with a new capability, namely freedom in selecting the initial search domain ranges. This enables one to deal with smaller search domains and still obtain a fast and accurate solution for an arbitrary optimization problem. Importantly, these new capabilities are gained at almost no additional cost. The advantages of FBPSO make it a favorable choice for solving computationally heavy optimization problems, like those met in computational electromagnetics, where calculation of the objective function is a very time-consuming operation, and thus a compromise between the search domain size, the computational time, and the quality of the final solution is always an issue.

Acknowledgments This work was supported jointly by the Ministry of Science and Education, Ukraine, and the Ministère des Affaires Étrangères et Européennes, France, under the DNIPRO program.
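The floating-boundary principle can be sketched per dimension as follows. Only the collision-count guidance and the threshold constants T1, T2 come from the paper; the bookkeeping, the drift step, and all names below are our assumptions for illustration, not the authors' exact update rule:

```python
def update_boundary(lo, hi, hits_lo, hits_hi, idle_lo, idle_hi,
                    T1=1, T2=2, step=0.05):
    """One illustrative floating-boundary update for a single dimension.

    hits_* : particle collisions with the lower/upper boundary this iteration
    idle_* : consecutive iterations without collisions on that boundary
    T1     : collision threshold that triggers an outward drift (expansion)
    T2     : idle threshold that triggers an inward drift (shrinkage)
    step   : drift distance as a fraction of the current domain width
    """
    width = hi - lo
    if hits_lo >= T1:
        lo -= step * width   # boundary under pressure -> expand outward
    elif idle_lo >= T2:
        lo += step * width   # boundary left alone -> shrink inward
    if hits_hi >= T1:
        hi += step * width
    elif idle_hi >= T2:
        hi -= step * width
    return lo, hi

# A lower boundary hit 3 times expands; an idle upper boundary contracts:
print(update_boundary(-10.0, 10.0, hits_lo=3, hits_hi=0, idle_lo=0, idle_hi=5))
# → (-11.0, 9.0)
```

Iterating such updates reproduces the behavior described above: the active domain drifts toward the regions where particles keep colliding with a boundary and, once collisions cease, gradually shrinks around the region holding the global extremum.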

