ENHANCED PARTICLE SWARM OPTIMIZATION ALGORITHMS WITH ROBUST LEARNING STRATEGY FOR GLOBAL OPTIMIZATION

LIM WEI HONG

UNIVERSITI SAINS MALAYSIA 2014

ENHANCED PARTICLE SWARM OPTIMIZATION ALGORITHMS WITH ROBUST LEARNING STRATEGY FOR GLOBAL OPTIMIZATION

by

LIM WEI HONG

Thesis submitted in fulfillment of the requirements for the degree of Doctor of Philosophy

September 2014

ACKNOWLEDGEMENT

This thesis is dedicated to everyone in the field of computational intelligence who embarks on the journey of expanding the collection of knowledge and shares a transcendent passion for the field. It is an honor for me to thank those who made this thesis possible. First, I am grateful to Associate Professor Dr. Nor Ashidi Mat Isa, my thesis advisor and project supervisor, for seeing the promise of this thesis and overseeing the research conducted under his watchful eyes. His guidance and patience throughout the tumultuous time of conducting the scientific investigations related to this project are much appreciated. His invaluable support and insightful suggestions, not to mention all the hard work and extra time he poured in, made the completion of this project possible. Further thanks go to the Universiti Sains Malaysia (USM) Postgraduate Fellowship Scheme and the Postgraduate Research Grant Scheme (PRGS) entitled “Development of PSO Algorithm with Multi-Learning Frameworks for Application in Image Segmentation”, which provided financial support for my study. I would like to take this opportunity to thank those who spent their time and shared their knowledge to help me improve my research work, especially the current and former members of the Imaging and Intelligent System Research Team (ISRT): Eisha, Jing Rui, Naim, Sazali, Suheir, Jason, Manjur, Rawia, Helmi, Kenny, Tatt Hee, Khang Siang, Shue Siong, Chen Hee, and Sheng Hong. My special thanks go to Abdul Latiff Abdul Tawab, Ahmand Ahzam Latib, Nor Azhar Zabidin, and Amir Hamid for their technical support. In addition, applause and appreciation are dedicated to my friends: Li Lian, Eleanor, Qaspiel, Earn Tzeh, Yeong Chin, Wee Chuen, Pei Nian, Nick, Susan, Jing Huey, Wei Zeng, Wooi Keong, Mae Yin, Man Jia, and Joyc for their support and friendship. Their kindness, generosity, and help made my life easy throughout the journey of pursuing my PhD.


In particular, I want to acknowledge the unconditional support of my family: my parents, my brothers, and my sister. My appreciation for them cannot be expressed in words. Without their unconditional love, encouragement, understanding, and faith, nothing I have achieved would be possible. Lastly, I also place on record my sense of gratitude to several overseas researchers: Dr. Changhe Li, Dr. Mengqi Hu, Dr. Marco A. Montes de Oca, Dr. Joaquin Derrac Rus, Prof. P. N. Suganthan, and Dr. Abdul Ghani Abro, who directly and indirectly lent a helping hand in this research.


TABLE OF CONTENTS

ACKNOWLEDGEMENTS ………… ii
TABLE OF CONTENTS ………… iv
LIST OF TABLES ………… xi
LIST OF FIGURES ………… xvii
LIST OF ABBREVIATIONS ………… xx
ABSTRAK ………… xxiii
ABSTRACT ………… xxv

CHAPTER 1 - INTRODUCTION
1.1 Concept of Global Optimization ………… 1
1.2 Global Optimization Algorithm ………… 2
    1.2.1 Deterministic Algorithm ………… 3
    1.2.2 Probabilistic Algorithm ………… 5
1.3 Particle Swarm Optimization ………… 8
1.4 Challenges of PSO in Global Optimization ………… 11
1.5 Research Objectives ………… 13
1.6 Research Scopes ………… 13
1.7 Thesis Outline ………… 14

CHAPTER 2 - LITERATURE REVIEW
2.1 Introduction ………… 16
2.2 Particle Swarm Optimization and Its Variants ………… 16
    2.2.1 Basic Particle Swarm Optimization ………… 17
    2.2.2 Variants of Particle Swarm Optimization ………… 22
        2.2.2(a) Parameter Adaptation ………… 23
        2.2.2(b) Modified Population Topology ………… 28


        2.2.2(c) Modified Learning Strategy ………… 35
        2.2.2(d) Hybridization with Orthogonal Experimental Design Technique ………… 40
        2.2.2(e) Remarks ………… 46
2.3 Teaching and Learning Based Optimization ………… 47
2.4 Test Functions ………… 50
    2.4.1 Benchmark Problems ………… 50
        2.4.1(a) Conventional Problems ………… 52
        2.4.1(b) Rotated Problems ………… 54
        2.4.1(c) Shifted Problems ………… 55
        2.4.1(d) Complex Problems ………… 56
        2.4.1(e) Hybrid Composition Problems ………… 56
    2.4.2 Real-World Problems ………… 57
        2.4.2(a) Gear Train Design Problem ………… 58
        2.4.2(b) Frequency Modulated Sound Synthesis Problem ………… 58
        2.4.2(c) Spread Spectrum Radar Polyphase Code Design Problem ………… 59
2.5 Performance Metrics ………… 60
    2.5.1 Mean Error ………… 60
    2.5.2 Success Rate ………… 61
    2.5.3 Success Performance ………… 61
    2.5.4 Algorithm Complexity ………… 62
    2.5.5 Non-Parametric Statistical Analyses ………… 62
2.6 Summary ………… 64

CHAPTER 3 - TEACHING AND PEER-LEARNING PARTICLE SWARM OPTIMIZATION
3.1 Introduction ………… 66
3.2 Teaching and Peer-Learning PSO ………… 67
    3.2.1 Research Ideas of TPLPSO ………… 67
    3.2.2 General Description of TPLPSO ………… 69
    3.2.3 Teaching Phase ………… 70
    3.2.4 Peer-Learning Phase ………… 72
    3.2.5 Stochastic Perturbation-Based Learning Strategy ………… 75
    3.2.6 Complete Framework of TPLPSO ………… 77
    3.2.7 Comparison between TPLPSO and TLBO ………… 78
3.3 Simulation Results and Discussions ………… 80
    3.3.1 Experimental Setup ………… 80
    3.3.2 Parameter Sensitivity Analysis ………… 82
        3.3.2(a) Effect of the Parameter Z ………… 82
        3.3.2(b) Effect of the Parameter R ………… 84
    3.3.3 Comparison of TPLPSO with Other Well-Established PSO Variants ………… 85
        3.3.3(a) Comparison of the Mean Error Results ………… 86
        3.3.3(b) Comparison of the Non-Parametric Statistical Test Results ………… 90
        3.3.3(c) Comparison of the Success Rate Results ………… 93
        3.3.3(d) Comparison of the Success Performance Results ………… 95
        3.3.3(e) Comparison of the Algorithm Complexity Results ………… 99
    3.3.4 Effect of Different Proposed Strategies ………… 100
    3.3.5 Comparison with Other State-of-The-Art Metaheuristic Search Algorithms ………… 103
        3.3.5(a) Comparison between TPLPSO and TLBO ………… 104
        3.3.5(b) Comparison between TPLPSO and Other MS Algorithms ………… 108
    3.3.6 Comparison in Real-World Problems ………… 110
    3.3.7 Discussion ………… 113
3.4 Summary ………… 116

CHAPTER 4 – ADAPTIVE TWO-LAYER PARTICLE SWARM OPTIMIZATION WITH ELITIST LEARNING STRATEGY
4.1 Introduction ………… 118

4.2 Adaptive Two-Layer PSO with Elitist Learning Strategy ………… 119
    4.2.1 Research Ideas of ATLPSO-ELS ………… 119
    4.2.2 General Description of ATLPSO-ELS ………… 122
    4.2.3 Diversity Metrics ………… 124
        4.2.3(a) PSD Metric ………… 125
        4.2.3(b) PFSD Metric ………… 126
        4.2.3(c) Remarks ………… 127
    4.2.4 Current Swarm Evolution ………… 128
    4.2.5 Memory Swarm Evolution ………… 131
        4.2.5(a) Adaptive Task Allocation in Memory Swarm Evolution ………… 132
        4.2.5(b) Exploitation Section in Memory Swarm Evolution ………… 133
        4.2.5(c) Exploration Section in Memory Swarm Evolution ………… 133
        4.2.5(d) Complete Framework of ATAmemory Module ………… 134
    4.2.6 Elitist Learning Strategy Module ………… 136
        4.2.6(a) Orthogonal Experiment Design-Based Learning Strategy ………… 136
        4.2.6(b) Stochastic Perturbation-Based Learning Strategy ………… 139
    4.2.7 Complete Framework of ATLPSO-ELS ………… 140
4.3 Simulation Results and Discussions ………… 142
    4.3.1 Experimental Setup ………… 142
    4.3.2 Parameter Sensitivity Analysis ………… 144
    4.3.3 Comparison of ATLPSO-ELS with Other Well-Established PSO Variants ………… 149
        4.3.3(a) Comparison of the Mean Error Results ………… 149
        4.3.3(b) Comparison of the Non-Parametric Statistical Test Results ………… 153
        4.3.3(c) Comparison of the Success Rate Results ………… 155
        4.3.3(d) Comparison of the Success Performance Results ………… 158
        4.3.3(e) Comparison of the Algorithm Complexity Results ………… 161
    4.3.4 Effect of Different Proposed Strategies ………… 162
    4.3.5 Comparison with Other State-of-The-Art Metaheuristic Search Algorithms ………… 165

    4.3.6 Comparison in Real-World Problems ………… 167
    4.3.7 Discussion ………… 170
4.4 Summary ………… 173

CHAPTER 5 – PARTICLE SWARM OPTIMIZATION WITH ADAPTIVE TIME-VARYING TOPOLOGY CONNECTIVITY
5.1 Introduction ………… 175
5.2 PSO with Adaptive Time-Varying Topology Connectivity ………… 176
    5.2.1 Research Ideas of PSO-ATVTC ………… 177
    5.2.2 General Description of PSO-ATVTC ………… 179
    5.2.3 ATVTC Module ………… 180
    5.2.4 Proposed Learning Framework ………… 188
        5.2.4(a) Derivation of The Cognitive Exemplar and The Social Exemplar ………… 188
        5.2.4(b) Proposed Velocity Update Mechanism ………… 190
        5.2.4(c) Proposed Neighborhood Search Operator ………… 192
    5.2.5 Complete Framework of PSO-ATVTC ………… 195
5.3 Simulation Results and Discussions ………… 197
    5.3.1 Experimental Setup ………… 197
    5.3.2 Parameter Sensitivity Analysis ………… 198
    5.3.3 Comparison of PSO-ATVTC with Other Well-Established PSO Variants ………… 201
        5.3.3(a) Comparison of the Mean Error Results ………… 201
        5.3.3(b) Comparison of the Non-Parametric Statistical Test Results ………… 205
        5.3.3(c) Comparison of the Success Rate Results ………… 207
        5.3.3(d) Comparison of the Success Performance Results ………… 210
        5.3.3(e) Comparison of the Algorithm Complexity Results ………… 214
    5.3.4 Effect of Different Topology Connectivity Modification Strategies ………… 215
    5.3.5 Comparison with Other State-of-The-Art Metaheuristic Search Algorithms ………… 219
    5.3.6 Comparison in Real-World Problems ………… 220

    5.3.7 Discussion ………… 223
5.4 Summary ………… 226

CHAPTER 6 – PARTICLE SWARM OPTIMIZATION WITH DUAL-LEVEL TASK ALLOCATION
6.1 Introduction ………… 227
6.2 PSO with Dual-Level Task Allocation ………… 228
    6.2.1 Research Ideas of PSO-DLTA ………… 228
    6.2.2 General Description of PSO-DLTA ………… 230
    6.2.3 Dimension-Level Task Allocation Module ………… 231
        6.2.3(a) Metric and Rules of DTA Module in Performing Task Allocations ………… 231
        6.2.3(b) Relocation Search ………… 233
        6.2.3(c) Exploitation Search ………… 234
        6.2.3(d) Exploration Search ………… 236
        6.2.3(e) Crossover Operation ………… 237
        6.2.3(f) Complete Implementation of DTA Module ………… 239
    6.2.4 Individual-Level Task Allocation Module ………… 239
    6.2.5 Complete Framework of PSO-DLTA ………… 242
6.3 Simulation Results and Discussions ………… 243
    6.3.1 Experimental Setup ………… 244
    6.3.2 Parameter Sensitivity Analysis ………… 245
    6.3.3 Comparison of PSO-DLTA with Other Well-Established PSO Variants ………… 247
        6.3.3(a) Comparison of the Mean Error Results ………… 248
        6.3.3(b) Comparison of the Non-Parametric Statistical Test Results ………… 252
        6.3.3(c) Comparison of the Success Rate Results ………… 253
        6.3.3(d) Comparison of the Success Performance Results ………… 256
        6.3.3(e) Comparison of the Algorithm Complexity Results ………… 259
    6.3.4 Effect of Different Proposed Strategies ………… 260

    6.3.5 Comparison with Other State-of-The-Art Metaheuristic Search Algorithms ………… 263
    6.3.6 Comparison in Real-World Problems ………… 264
    6.3.7 Discussion ………… 267
6.4 Comparative Study of the Proposed PSO Variants ………… 269
    6.4.1 Comparison of the Mean Error Results ………… 270
    6.4.2 Comparison of the Performance Improvement Gains ………… 272
    6.4.3 Comparison of the Non-Parametric Statistical Test Results ………… 274
    6.4.4 Comparison of the Success Rate Results ………… 276
    6.4.5 Comparison of the Success Performance Results ………… 278
    6.4.6 Comparison of the Algorithm Complexity Results ………… 282
    6.4.7 Comparison in Real-World Problems ………… 283
    6.4.8 Remarks ………… 285
6.5 Summary ………… 289

CHAPTER 7 - CONCLUSION AND FUTURE WORKS
7.1 Conclusion ………… 290
7.2 Future Works ………… 293
    7.2.1 Development of Fully Self-Adaptive Framework ………… 293
    7.2.2 Applicability in Different Classes of Optimization Problems ………… 294
    7.2.3 Hybridization with Other Metaheuristic Search Algorithms ………… 295

REFERENCES ………… 296
APPENDICES ………… 308
    APPENDIX A - Parameter Sensitivity Analyses for the Compared PSO Variants ………… 309
    APPENDIX B - Case Study to Investigate the Capability of the Proposed Orthogonal Experiment Design-Based Learning Strategy ………… 312
LIST OF PUBLICATIONS ………… 316
LIST OF RESEARCH GRANT ………… 318

LIST OF TABLES

Table 2.1   Comparison of the simple rule-based and adaptive parameter adaptation strategies ………… 27
Table 2.2   Comparison of the static and dynamic topologies ………… 34
Table 2.3   Comparison of the framework of single learning strategy and multiple learning strategies ………… 40
Table 2.4   Vegetable yield experiment with three factors and two levels per factor ………… 43
Table 2.5   Deciding the best combination levels of the vegetable yield experimental factors using an OED technique ………… 44
Table 2.6   Benchmark functions used (Note: M denotes the orthogonal matrix; o denotes the shifted global optimum; f_bias,j, j ∈ [1, 16], denotes the shifted fitness value applied to the corresponding functions) ………… 51
Table 2.7   Experimental details and features of the 30 benchmark functions (Note: “Md” denotes modality; “U” denotes unimodal; “M” denotes multimodal; “Sp” denotes separable; “Rt” denotes rotated; “Sf” denotes shifted; “Y” denotes yes; “N” denotes no) ………… 53
Table 3.1   Parameter settings of the involved PSO variants ………… 81
Table 3.2   Effects of the parameter Z on TPLPSO in 10-D ………… 83
Table 3.3   Effects of the parameter Z on TPLPSO in 30-D ………… 83
Table 3.4   Effects of the parameter Z on TPLPSO in 50-D ………… 83
Table 3.5   Effects of the parameter R on TPLPSO in 10-D ………… 85
Table 3.6   Effects of the parameter R on TPLPSO in 30-D ………… 85
Table 3.7   Effects of the parameter R on TPLPSO in 50-D ………… 85
Table 3.8   The Emean, SD, and Wilcoxon test results of TPLPSO and six compared PSO variants for the 50-D benchmark problems ………… 87
Table 3.9   Wilcoxon test for the comparison of TPLPSO and six other PSO variants ………… 90
Table 3.10  Average rankings and the associated p-values obtained by the TPLPSO and six other PSO variants via the Friedman test ………… 91
Table 3.11  Adjusted p-values obtained by comparing the TPLPSO with six other PSO variants using the Bonferroni-Dunn, Holm, and Hochberg procedures ………… 92
Table 3.12  The SR and SP values of TPLPSO and six compared PSO variants for the 50-D benchmark problems ………… 93
Table 3.13  AC results of the TPLPSO and six other PSO variants in D = 50 ………… 100
Table 3.14  Comparison of TPLPSO variants with BPSO in 50-D problems ………… 102
Table 3.15  Summarized comparison results of TPLPSO variants with BPSO in each problem category ………… 103
Table 3.16  The Emean, SD, and h values of TPLPSO and TLBO in 30-D problems ………… 105
Table 3.17  Maximum fitness evaluation number (FEmax) of the compared MS algorithms in 30-D problems ………… 109
Table 3.18  Population size (S) of the compared MS algorithms in 30-D problems ………… 109
Table 3.19  Comparisons between TPLPSO and other tested MS algorithms in 30-D problems ………… 110
Table 3.20  Experimental settings for the three real-world engineering design problems ………… 111
Table 3.21  Simulation results of TPLPSO and six other PSO variants in the gear train design problem ………… 111
Table 3.22  Simulation results of TPLPSO and six other PSO variants in the FM sound synthesis problem ………… 112
Table 3.23  Simulation results of TPLPSO and six other PSO variants in the spread spectrum radar polyphase code design problem ………… 112
Table 4.1   Parameter settings of the involved PSO variants ………… 143
Table 4.2   Parameter tunings of K1, K2, Z, and m for functions F17, F18, F25, F26, F27, and F29 in 10-D ………… 146
Table 4.3   Parameter tunings of K1, K2, Z, and m for functions F17, F18, F25, F26, F27, and F29 in 30-D ………… 147
Table 4.4   Parameter tunings of K1, K2, Z, and m for functions F17, F18, F24, F25, F27, and F29 in 50-D ………… 148
Table 4.5   The Emean, SD, and Wilcoxon test results of ATLPSO-ELS and six compared PSO variants for the 50-D benchmark problems ………… 150
Table 4.6   Wilcoxon test for the comparison of ATLPSO-ELS and six other PSO variants ………… 154
Table 4.7   Average rankings and the associated p-values obtained by the ATLPSO-ELS and six other PSO variants via the Friedman test ………… 154
Table 4.8   Adjusted p-values obtained by comparing the ATLPSO-ELS with six other PSO variants using the Bonferroni-Dunn, Holm, and Hochberg procedures ………… 155
Table 4.9   The SR and SP values of ATLPSO-ELS and six compared PSO variants for the 50-D benchmark problems ………… 156
Table 4.10  AC results of the ATLPSO-ELS and six other PSO variants in D = 50 ………… 162
Table 4.11  Comparison of ATLPSO-ELS variants with BPSO in 50-D problems ………… 164
Table 4.12  Summarized comparison results of ATLPSO-ELS variants with BPSO in each problem category ………… 165
Table 4.13  Comparisons between ATLPSO-ELS and other OED-based MS variants in optimizing 30-D functions ………… 167
Table 4.14  Simulation results of ATLPSO-ELS and six other PSO variants in the gear train design problem ………… 168
Table 4.15  Simulation results of ATLPSO-ELS and six other PSO variants in the FM sound synthesis problem ………… 168
Table 4.16  Simulation results of ATLPSO-ELS and six other PSO variants in the spread spectrum radar polyphase code design problem ………… 168
Table 5.1   Parameter settings of the involved PSO variants ………… 198
Table 5.2   Effects of the parameter Z on PSO-ATVTC in 10-D ………… 200
Table 5.3   Effects of the parameter Z on PSO-ATVTC in 30-D ………… 200
Table 5.4   Effects of the parameter Z on PSO-ATVTC in 50-D ………… 200
Table 5.5   The Emean, SD, and Wilcoxon test results of PSO-ATVTC and six compared PSO variants for the 50-D benchmark problems ………… 202
Table 5.6   Wilcoxon test for the comparison of PSO-ATVTC and six other PSO variants ………… 206
Table 5.7   Average rankings and the associated p-values obtained by the PSO-ATVTC and six other PSO variants via the Friedman test ………… 206
Table 5.8   Adjusted p-values obtained by comparing the PSO-ATVTC with six other PSO variants using the Bonferroni-Dunn, Holm, and Hochberg procedures ………… 207
Table 5.9   The SR and SP values of PSO-ATVTC and six compared PSO variants for the 50-D benchmark problems ………… 208
Table 5.10  AC results of the PSO-ATVTC and six other PSO variants in D = 50 ………… 215
Table 5.11  Comparison of PSO-ATVTC variants with BPSO in 50-D problems ………… 217
Table 5.12  Summarized comparison results of PSO-ATVTC variants with BPSO in each problem category ………… 218
Table 5.13  Comparisons between PSO-ATVTC and other tested MS variants in optimizing 30-D functions ………… 220
Table 5.14  Simulation results of PSO-ATVTC and six other PSO variants in the gear train design problem ………… 221
Table 5.15  Simulation results of PSO-ATVTC and six other PSO variants in the FM sound synthesis problem ………… 221
Table 5.16  Simulation results of PSO-ATVTC and six other PSO variants in the spread spectrum radar polyphase code design problem ………… 221
Table 6.1   Parameter settings of the involved PSO variants ………… 244
Table 6.2   Effects of the parameter Z on PSO-DLTA in 10-D ………… 246
Table 6.3   Effects of the parameter Z on PSO-DLTA in 30-D ………… 246
Table 6.4   Effects of the parameter Z on PSO-DLTA in 50-D ………… 246
Table 6.5   The Emean, SD, and Wilcoxon test results of PSO-DLTA and six compared PSO variants for the 50-D benchmark problems ………… 249
Table 6.6   Wilcoxon test for the comparison of PSO-DLTA and six other PSO variants ………… 253
Table 6.7   Average rankings and the associated p-values obtained by the PSO-DLTA and six other PSO variants via the Friedman test ………… 253
Table 6.8   Adjusted p-values obtained by comparing the PSO-DLTA with six other PSO variants using the Bonferroni-Dunn, Holm, and Hochberg procedures ………… 253
Table 6.9   The SR and SP values of PSO-DLTA and six compared PSO variants for the 50-D benchmark problems ………… 254
Table 6.10  AC results of the PSO-DLTA and six other PSO variants in D = 50 ………… 260
Table 6.11  Comparison of PSO-DLTA variants with BPSO in 50-D problems ………… 262
Table 6.12  Summarized comparison results of PSO-DLTA variants with BPSO in each problem category ………… 263
Table 6.13  Comparisons between PSO-DLTA and other tested MS variants in optimizing 30-D functions ………… 264
Table 6.14  Simulation results of PSO-DLTA and six other PSO variants in the gear train design problem ………… 265
Table 6.15  Simulation results of PSO-DLTA and six other PSO variants in the FM sound synthesis problem ………… 265
Table 6.16  Simulation results of PSO-DLTA and six other PSO variants in the spread spectrum radar polyphase code design problem ………… 265
Table 6.17  The Emean and SD results of four PSO variants for the 50-D benchmark problems ………… 271
Table 6.18  Comparison of four proposed PSO variants with BPSO in 50-D problems ………… 273
Table 6.19  Summarized comparison results of the four proposed PSO variants with BPSO in each problem category ………… 274
Table 6.20  Wilcoxon test for the comparison of PSO-ATVTC and three other proposed PSO variants ………… 275
Table 6.21  Average rankings and the associated p-values obtained by the four proposed PSO variants via the Friedman test ………… 276
Table 6.22  Adjusted p-values obtained by comparing the PSO-ATVTC with three other proposed PSO variants using the Bonferroni-Dunn, Holm, and Hochberg procedures ………… 276
Table 6.23  The SR and SP values of four proposed PSO variants for the 50-D benchmark problems ………… 277
Table 6.24  AC results of the four proposed PSO variants in D = 50 ………… 283
Table 6.25  Simulation results of the four proposed PSO variants in the gear train design problem ………… 284
Table 6.26  Simulation results of the four proposed PSO variants in the FM sound synthesis problem ………… 284
Table 6.27  Simulation results of the four proposed PSO variants in the spread spectrum radar polyphase code design problem ………… 284
Table A1    Effects of the acceleration rate on APSO ………… 310
Table A2    Effects of the elitist learning rate on APSO ………… 310
Table A3    Effects of the refreshing gap m on CLPSO ………… 310
Table A4    Effects of the inertia weight on FLPSO-QIW ………… 311
Table A5    Effects of the acceleration coefficients on FLPSO-QIW ………… 311
Table A6    Effects of the reconstruction gap G on OLPSO-L ………… 311
Table B1    Deciding the best combination levels of Pgold and improved Pi using the OEDLS ………… 314

LIST OF FIGURES

Figure 1.1   Categorization of global optimization algorithms (Li, 2010, Weise, 2008) ………… 3
Figure 1.2   Fitness landscape with sufficient gradient information (Liberti, 2008, Li, 2010) ………… 4
Figure 1.3   Different properties of difficult fitness landscapes: (a) multimodal, (b) deceptive, (c) neutral, and (d) needle-in-a-haystack (Blum et al., 2008, Weise, 2008) ………… 5
Figure 1.4   The collective and collaborative behaviors of (a) bird flocking and (b) fish schooling in searching for food ………… 9
Figure 1.5   Number of research publications per year for PSO in the Science Direct database ………… 10
Figure 2.1   Topological structures of PSO with (a) the fully-connected topology and (b) the local ring topology. Each circle represents one particle, while each line represents the connection of one particle to others in the population (del Valle et al., 2008) ………… 18
Figure 2.2   Particle i's trajectory in the two-dimensional fitness landscape (Li, 2010) ………… 19
Figure 2.3   Basic PSO algorithm ………… 21
Figure 2.4   The classification scheme of state-of-the-art PSO variants in global optimization ………… 22
Figure 2.5   Other static topologies: (a) wheel topology, (b) von Neumann topology with 2-D configuration, and (c) von Neumann topology with 3-D configuration (del Valle et al., 2008) ………… 29
Figure 2.6   Population structures of the clan topologies where the leaders' conference occurs in (a) fully-connected topology and (b) ring topology (Carvalho and Bastos-Filho, 2008) ………… 29
Figure 2.7   Acceleration coefficient heuristic proposed in FlexiPSO (Kathrada, 2009) ………… 31
Figure 2.8   Search mechanism of DMS-PSO (Liang and Suganthan, 2005) ………… 33
Figure 2.9   The algorithm used to construct a two-level OA for N factors ………… 42
Figure 2.10  TLBO algorithm ………… 49
Figure 2.11  Procedures to calculate the AC value of an algorithm (Suganthan et al., 2005) ………… 62
Figure 3.1   Teaching phase of the proposed TPLPSO ………… 71
Figure 3.2   Exemplar selection in the peer-learning phase of the TPLPSO ………… 73
Figure 3.3   Peer-learning phase in the TPLPSO ………… 74
Figure 3.4   SPLS in the TPLPSO ………… 76
Figure 3.5   Complete framework of the TPLPSO ………… 77
Figure 3.6   Convergence curves of 50-D problems: (a) F2, (b) F7, (c) F12, (d) F13, (e) F17, (f) F18, (g) F23, (h) F26, (i) F28, and (j) F30 ………… 96
Figure 3.7   Convergence curves of 30-D problems: (a) F7, (b) F8, (c) F9, (d) F12, (e) F18, (f) F20, (g) F25, (h) F26, (i) F27, and (j) F29 ………… 106
Figure 4.1   Exemplar index selection in the current swarm's ATAcurrent module ………… 129
Figure 4.2   Overall framework of the ATAcurrent module adopted in the current swarm evolution of ATLPSO-ELS ………… 130
Figure 4.3   Overall framework of the ATAmemory module adopted in the memory swarm evolution of ATLPSO-ELS ………… 135
Figure 4.4   ELS module in ATLPSO-ELS ………… 136
Figure 4.5   OEDLS in the ELS module ………… 138
Figure 4.6   Complete framework of the ATLPSO-ELS algorithm ………… 141
Figure 4.7   Convergence curves of 50-D problems: (a) F1, (b) F7, (c) F11, (d) F14, (e) F17, (f) F19, (g) F23, (h) F25, (i) F28, and (j) F29 ………… 159
Figure 5.1   Possible topology connectivity of each PSO-ATVTC particle during the initialization of the ATVTC module ………… 182
Figure 5.2   Graphical illustration of the Increase strategy performed by the ATVTC module when scenario 1 is met ………… 183
Figure 5.3   Graphical illustration of the Decrease strategy performed by the ATVTC module when scenario 2 is met ………… 183
Figure 5.4   Graphical illustration of the Shuffle strategy performed by the ATVTC module in scenario 3 when (a) TCk = TCmin and (b) TCn = TCmax ………… 184
Figure 5.5   Graphical illustration of the ATVTC module mechanism ………… 186
Figure 5.6   ATVTC module of the PSO-ATVTC ………… 187
Figure 5.7   Derivation of the cexp,i and sexp,i exemplars in the PSO-ATVTC ………… 190
Figure 5.8   EKE of the PSO-ATVTC ………… 192
Figure 5.9   Derivation of the oexp,i exemplar in the PSO-ATVTC ………… 193
Figure 5.10  NS operator of the PSO-ATVTC ………… 194
Figure 5.11  Complete framework of the PSO-ATVTC ………… 195
Figure 5.12  Convergence curves of 50-D problems: (a) F1, (b) F7, (c) F9, (d) F12, (e) F18, (f) F22, (g) F23, (h) F25, (i) F28, and (j) F30 ………… 211
Figure 6.1   IF-THEN rule employed by the DTA module in performing the task allocation in each dimension ………… 232
Figure 6.2   Crossover operation in the DTA module ………… 238
Figure 6.3   DTA module of the proposed PSO-DLTA ………… 240
Figure 6.4   ITA module of the proposed PSO-DLTA ………… 242
Figure 6.5   Complete framework of the PSO-DLTA ………… 243
Figure 6.6   Convergence curves of 50-D problems: (a) F6, (b) F7, (c) F10, (d) F12, (e) F19, (f) F22, (g) F25, (h) F26, (i) F27, and (j) F30 ………… 257
Figure 6.7   Convergence curves of 50-D problems: (a) F1, (b) F7, (c) F9, (d) F12, (e) F15, (f) F18, (g) F25, (h) F26, (i) F27, and (j) F29 ………… 279

LIST OF ABBREVIATIONS ACL-PSO

PSO with Aging Leader and Challengers

ACO

Ant Colony Optimization

AFPSO

Adaptive Fuzzy PSO

AI

Artificial Intelligence

ANN

Artificial Neural Network

APSO

Adaptive PSO

APTS

Adaptive Population Tuning Scheme

APV

Adjusted p-value

ATA

Adaptive Task Allocation

ATLPSO-ELS

Adaptive Two-Layer Particle Swarm Optimization with Elitist Learning Strategy

ATVTC

Adaptive Time-Varying Topology Connectivity

BBO

Biogeography-based Optimization

BPSO

Basic PSO

CES

Classical Evolution Strategy

CI

Computational Intelligence

CLPSO

Comprehensive Learning PSO

CMAES

Covariance Matrix Adaptation Evolution Strategy

CRO

Chemical Reaction Optimization

DE

Differential Evolution

DMS-PSO

Dynamic Multi-Swarm PSO

DNLPSO

Dynamic Neighborhood Learning-based PSO

DNSCLPSO

Diversity Enhanced CLPSO with Neighborhood Search

DNSPSO

Diversity Enhanced PSO with Neighborhood Search

DTA

Dimension-level Task Allocation

EA

Evolutionary Algorithm

EC

Evolutionary Computation

EKE

Elitist-based Knowledge Extraction

ELPSO

Example-based PSO

ELS

Elitist-based Learning Strategy

ENT

Expanding Neighborhood Topology

EO

Extremal Optimization

EP

Evolutionary Programming

EPUS

Efficient Population Utilization Strategy

ES

Evolution Strategy

xx

ESE

Evolutionary State Estimation

FA

Factor Analysis

FAPSO

Fuzzy Adaptive PSO

FE

Fitness Evaluation

FIPS

Fully-Informed PSO

FlexiPSO

Flexible PSO

FLPSO-QIW

Feedback Learning PSO with Quadratic Inertia Weight

FM

Frequency-Modulated

FPSO

Frankenstein PSO

FWER

Family Wise Error Rate

G3PCX

Generalized Generation Gap Module with Generic Parent-Centric Crossover Operator

GA

Genetic Algorithm

GEP

Gene Expression Programming

GP

Genetic Programming

GSO

Group Search Optimization

HPSO-TVAC

Hierarchical PSO with Time-Varying Acceleration Coefficient

IMM

Intelligent Move Mechanism

IPSO

Improved PSO

ITA

Individual-level Task Allocation

MC

Memetic Computing

MO

Multiple Objectives

MoPSO

Median-Oriented PSO

MPSO-TVAC

Mutation PSO with Time-Varying Acceleration Coefficient

MS

Metaheuristic Search

NS

Neighborhood Search

OA

Orthogonal Array

OCABC

Orthogonal Learning-based Artificial Bee Colony

ODEPSO

Extrapolated PSO based on tOrthogonal Design

ODPSO

PSO based on Orthogonal Design

OED

Orthogonal Experiment Design

OEDLS

OED-based Learning Strategy

OLPSO

Orthogonal Learning PSO

OPSO

Orthogonal PSO

OTLBO

Orthogonal Teaching Learning based Optimization

OT-PSO

Orthogonal Test-based PSO

OXBBO

Biogeography-based Operator

xxi

Optimization

with

Orthogonal

Crossover

OXDE

Differential Evolution with Orthogonal Crossover Operator

PAE-QPSO

Phase Angle-Encoded Quantum PSO

PFSD

Population’s Fitness Spatial Diversity

PS

Producer-Scrounger

PSD

Population Spatial Diversity

PSO

Particle Swarm Optimization

PSO-ATVTC

Adaptive Time-Varying Topology Connectivity

PSODDS

PSO with Distance-based Dimension Selection

PSO-DLTA

Particle Swarm Optimization with Dual Level Task Allocation

PSO-MAM

PSO with Multiple Adaptive Method

RCBBO

Real-Coded Biogeography-based Optimization

RCCRO

Real-Coded Chemical Reaction Optimization

RPPSO

Random Position PSO

SA

Simulated Annealing

SALPSO

Self-Adaptive Learning-based PSO

SI

Swarm Intelligence

SLPSO

Self-Learning PSO

SO

Single Objective

SPLS

Stochastic Perturbation-based Learning Strategy

TCM

Topology Connectivity Modification

TLBO

Teaching and Learning Based Optimization

TPLPSO

Teaching and Peer-Learning Particle Swarm Optimization

TS

Tabu Search

TVAC

Time-Varying Acceleration Coefficient

UPSO

Unified PSO


ALGORITMA PENGOPTIMUMAN KAWANAN ZARAH DIPERTINGKATKAN DENGAN STRATEGI PEMBELAJARAN TEGUH UNTUK PENGOPTIMUMAN GLOBAL

ABSTRAK

Pengoptimuman Kawanan Zarah (PSO) merupakan satu algoritma pencarian metaheuristik (MS) yang diinspirasi oleh interaksi sosial kumpulan burung atau kawanan ikan semasa pencarian sumber makanan. Walaupun PSO asal adalah satu teknik pengoptimuman yang berkesan bagi menyelesaikan masalah pengoptimuman global, algoritma ini mengalami beberapa kelemahan dalam penyelesaian masalah yang berdimensi tinggi dan kompleks, seperti kadar penumpuan yang lambat, kecenderungan yang tinggi untuk terperangkap dalam optima setempat dan kesulitan dalam penyeimbangan penjelajahan/penyusupan. Untuk mengatasi kelemahan-kelemahan tersebut, penyelidikan ini telah mencadangkan empat variasi PSO yang dipertingkatkan, iaitu, PSO dengan Pengajaran dan Pembelajaran Sebaya (TPLPSO), PSO Dua Lapis Adaptif dengan Strategi Pembelajaran Elit (ATLPSO-ELS), PSO dengan Sambungan Topologi Melalui Perubahan Masa Adaptif (PSO-ATVTC) dan PSO dengan Peruntukan Tugas Secara Dua Peringkat (PSO-DLTA). Satu fasa pembelajaran alternatif telah dicadangkan dalam TPLPSO dengan menawarkan arah pencarian baharu kepada zarah-zarah yang gagal untuk meningkatkan kecergasannya dalam fasa pembelajaran sebelumnya. Dua mekanisme penyesuaian untuk peruntukan tugas pula telah dicadangkan dalam ATLPSO-ELS bagi meningkatkan keupayaan algoritma dalam penyeimbangan penjelajahan/penyusupan semasa proses pengoptimuman. Sebagai satu variasi PSO yang dilengkapi dengan pelbagai strategi pembelajaran, PSO-ATVTC mempunyai satu mekanisme yang berkesan dan cekap bagi menyesuaikan kekuatan penjelajahan/penyusupan bagi zarah-zarah yang berbeza dengan memanipulasikan struktur kejiranan mereka secara sistematik. Berbeza dengan kebanyakan variasi-variasi PSO yang sedia ada, PSO-DLTA mempunyai kemampuan untuk melaksanakan peruntukan tugas secara peringkat dimensi.

Secara khususnya, modul peruntukan tugas peringkat dimensi (DTA) yang dicadangkan dalam PSO-DLTA berkeupayaan untuk memperuntukkan tugas-tugas pencarian yang berbeza kepada komponen dimensi zarah yang berlainan berdasarkan ciri-ciri jarak yang unik di antara sesuatu zarah dan zarah global terbaik dalam setiap dimensi. Prestasi keseluruhan bagi keempat-empat variasi PSO yang dicadangkan telah dibandingkan dengan variasi-variasi PSO dan algoritma-algoritma MS yang sedia ada. 30 fungsi penanda aras yang mempunyai ciri-ciri berbeza dan tiga masalah reka bentuk kejuruteraan dalam dunia sebenar telah digunakan. Keputusan eksperimen yang dicapai oleh setiap variasi PSO yang dicadangkan juga dinilai secara menyeluruh dan disahkan melalui analisis statistik bukan parametrik. Berdasarkan keputusan eksperimen, TPLPSO mempunyai kerumitan pengiraan yang paling rendah dan algoritma ini menunjukkan kejituan pencarian, kepercayaan pencarian dan kecekapan pencarian yang baik dalam penyelesaian fungsi penanda aras yang mudah. ATLPSO-ELS mencapai kemajuan prestasi yang ketara, dari segi kejituan pencarian, kepercayaan pencarian dan kecekapan pencarian, dalam penyelesaian fungsi penanda aras yang lebih mencabar, namun dengan peningkatan kerumitan pengiraan. Sementara itu, PSO-ATVTC dan PSO-DLTA berjaya menyelesai fungsi-fungsi penanda aras yang mempunyai ciri-ciri berbeza dengan kejituan pencarian, kepercayaan pencarian dan kecekapan pencarian yang memuaskan, tanpa menjejaskan kerumitan rangka kerja algoritma. Antara keempat-empat variasi PSO yang dicadangkan, PSO-ATVTC telah dibuktikan sebagai variasi yang berprestasi terbaik, memandangkan algoritma ini menghasilkan kemajuan prestasi yang paling baik, dengan kerumitan pengiraan yang kedua terendah.


ENHANCED PARTICLE SWARM OPTIMIZATION ALGORITHMS WITH ROBUST LEARNING STRATEGY FOR GLOBAL OPTIMIZATION

ABSTRACT

Particle Swarm Optimization (PSO) is a metaheuristic search (MS) algorithm inspired by the social interactions of bird flocking or fish schooling in searching for food sources. Although the original PSO is an effective optimization technique for solving global optimization problems, this algorithm suffers from several drawbacks in solving high-dimensional and complex problems, such as a slow convergence rate, a high tendency to be trapped in local optima, and difficulty in balancing exploration and exploitation. To mitigate these drawbacks, this research proposes four enhanced PSO variants, namely, Teaching and Peer-Learning PSO (TPLPSO), Adaptive Two-Layer PSO with Elitist Learning Strategy (ATLPSO-ELS), PSO with Adaptive Time-Varying Topology Connectivity (PSO-ATVTC), and PSO with Dual-Level Task Allocation (PSO-DLTA). An alternative learning phase is proposed in TPLPSO to offer a new search direction to the particles which fail to improve their fitness during the previous learning phase. Two adaptive task allocation mechanisms are proposed in ATLPSO-ELS to enhance the algorithm's capability in balancing exploration and exploitation during the optimization process. Being a PSO variant equipped with multiple learning strategies, PSO-ATVTC has an effective and efficient mechanism to adaptively adjust the exploration and exploitation strengths of different particles by systematically manipulating their respective neighborhood structures. Unlike most existing PSO variants, PSO-DLTA has the capability of performing dimension-level task allocation. Specifically, the dimension-level task allocation (DTA) module proposed in PSO-DLTA is able to assign different search tasks to different dimensional components of a particle, based on the unique distance characteristics between the particle and the global best particle in each dimension.
The overall performances of the four proposed PSO variants have been compared with a number of existing PSO variants and other MS algorithms on 30

benchmark functions with different characteristics and three real-world engineering design problems. The experimental results obtained by each proposed PSO variant are also thoroughly evaluated and verified via non-parametric statistical analyses. Based on the experimental results, TPLPSO is observed to have the lowest computational complexity, and this algorithm exhibits excellent search accuracy, search reliability, and search efficiency in solving simpler benchmark functions. ATLPSO-ELS achieves significant performance improvement, in terms of search accuracy, search reliability, and search efficiency, in solving more challenging benchmark functions, at the cost of increased computational complexity. Meanwhile, PSO-ATVTC and PSO-DLTA successfully solve the benchmark functions with different characteristics with promising search accuracy, search reliability, and search efficiency, without severely compromising the complexities of their algorithmic frameworks. Among the four proposed PSO variants, PSO-ATVTC is concluded to be the best-performing variant, considering that this algorithm yields the most significant performance improvement while incurring the second-lowest computational complexity.


CHAPTER 1 INTRODUCTION

1.1 Concept of Global Optimization

Global optimization is a branch of applied mathematics and numerical analysis that deals with the optimization of a function or a set of functions (Li, 2010, Liberti, 2008, van den Bergh, 2002). It is the process of finding the best solution of a given problem, that is, the solution that maximizes or minimizes the problem objective while satisfying all criteria associated with that objective (Lam et al., 2012, Chetty and Adewumi, 2013, van den Bergh, 2002). This concept is widely used by humankind in solving various problems, ranging from the development of cutting-edge technologies to daily life. For instance, geneticists are interested in designing the optimal sequences of deoxyribonucleic acid (DNA) to achieve the maximum reliability of molecular computation (Shin et al., 2005, Zhang et al., 2007, Blum et al., 2008). Meanwhile, economists desire to minimize their prediction error for more accurate prediction of stock market trends (Yu et al., 2009, Majhi et al., 2009, Singh and Borah, 2014). From the mathematical perspective, the aim of global optimization is to determine the optimal (or best) solution x out of a set of candidate solutions S, where x = [x1, x2, ..., xD] denotes a D-dimensional vector and S denotes the D-dimensional problem search space (Lam et al., 2012, Chetty and Adewumi, 2013, van den Bergh, 2002). The optimality of the solution vector x is assessed through the objective function ObjV of a given problem, where ObjV characterizes the landscape of the search space S. The outcome of this assessment is a scalar, represented by the objective function value ObjV(x). A global optimization problem can be subject to M constraints, i.e., C1(x), C2(x), ..., CM(x), which determine whether the solution vector x is a feasible solution in the search space S. For an unconstrained minimization problem, the global optimum solution x* is defined as (Chetty and Adewumi, 2013):


x* ∈ S such that ObjV(x*) ≤ ObjV(x) for all x ∈ S    (1.1)

As shown in Equation (1.1), the global optimum solution x* of a given minimization problem yields the lowest objective function value in the entire search space S. On the other hand, the global optimum solution x* of an unconstrained maximization problem produces the highest objective function value in the entire search space, and it is stated as (Chetty and Adewumi, 2013):

x* ∈ S such that ObjV(x*) ≥ ObjV(x) for all x ∈ S    (1.2)

Global optimization is a fast-growing and significant research field, considering that it plays an important role in practical domains such as business, science, engineering, and finance. Nevertheless, it has become an increasingly challenging task, attributed to the escalating complexities of problem search spaces. A wide variety of optimization techniques have been developed to find the global optima of these challenging problems. In the following section, the global optimization algorithms that are used to solve global optimization problems are presented. Without loss of generality, this thesis will focus on global minimization problems in the following chapters. Specifically, these global minimization problems have a single objective and no constraints in the search space S, other than the boundary constraint of the search domain.
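As a minimal illustrative sketch of the unconstrained minimization definition above, the snippet below brute-forces a discretized search space for the sphere function; the grid resolution, the bounds [-5, 5], and the sphere objective are assumed purely for illustration and are not taken from this thesis.

```python
import itertools

# Assumed objective: the 2-D sphere function, whose global minimum lies at the origin.
def obj_v(x):
    return sum(xi ** 2 for xi in x)

# Discretize the search space S as a coarse grid over [-5, 5] in each dimension.
grid = [i * 0.5 for i in range(-10, 11)]
search_space = list(itertools.product(grid, grid))

# Per Equation (1.1): x* is a global minimum if ObjV(x*) <= ObjV(x) for all x in S.
x_star = min(search_space, key=obj_v)
assert all(obj_v(x_star) <= obj_v(x) for x in search_space)
print(x_star, obj_v(x_star))  # → (0.0, 0.0) 0.0
```

Such an exhaustive check is only feasible for tiny discretized spaces; its cost grows exponentially with the dimension D, which is precisely why the search algorithms surveyed next are needed.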

1.2 Global Optimization Algorithm

Global optimization involves searching for the best possible solution to a given problem within a reasonable time limit. Numerous global optimization algorithms have been developed to deal with this task. One of the factors that determine the ability of a global optimization algorithm to find the global optimum of a given problem is the complexity of the search space. For example, a global optimization algorithm is more likely to find the global optimum of a simple unimodal function than that of a hybrid composition benchmark. In general, the global optimization algorithms which are used to tackle the global optimization


[Figure 1.1 is a tree diagram. Deterministic algorithms comprise state space search, branch and bound, algebraic geometry, and gradient search. Probabilistic algorithms comprise Monte Carlo algorithms, including simulated annealing (SA), tabu search (TS), parallel tempering, stochastic tunneling, and extremal optimization (EO), as well as artificial intelligence (AI)/soft computing, which covers computational intelligence (CI) and evolutionary computation (EC). EC in turn comprises evolutionary algorithms (EAs), namely genetic algorithms (GAs), evolutionary programming (EP), evolution strategy (ES), gene expression programming (GEP), and genetic programming (GP: standard, linear, and grammar-guided), together with swarm intelligence (SI), including ant colony optimization (ACO), particle swarm optimization (PSO), and teaching and learning based optimization (TLBO).]

Figure 1.1: Categorization of global optimization algorithms (Li, 2010, Weise, 2008).

problems can be categorized into two basic classes, namely deterministic and probabilistic algorithms (Li, 2010, Chetty and Adewumi, 2013, Weise, 2008), as illustrated in Figure 1.1.

1.2.1 Deterministic Algorithm

As illustrated in Figure 1.1, deterministic algorithms include state space search, branch and bound, algebraic geometry, gradient search, and others (Weise, 2008). These algorithms share a common characteristic, i.e., they employ exact methods to solve global optimization problems (Chetty and Adewumi, 2013, Li, 2010). These exact methods perform


Figure 1.2: Fitness landscape with sufficient gradient information (Liberti, 2008, Li, 2010).

the exhaustive search of the solution space to obtain the global optimum of a given problem. Such exhaustive searches, however, are only feasible when there is sufficient gradient information in the objective function (Li, 2010, del Valle et al., 2008). For example, the fitness landscape of a unimodal function, as illustrated in Figure 1.2, exhibits a clear relation between the candidate solutions and the objective function. This characteristic enables the deterministic algorithms to exhaustively explore and evaluate every possible solution in the search space of a unimodal function and thereby obtain the global optimum. On the other hand, it is impractical to use the deterministic algorithms to find the global optimum when the objective function of a given problem is too difficult, or provides insufficient or no gradient information for the exhaustive search. Generally, an objective function is considered difficult to solve if it is not differentiable, not continuous, or has an excessive number of local optima in its fitness landscape (Li, 2010). The fitness landscapes of some difficult objective functions are presented in Figure 1.3. For example, the fitness landscape in Figure 1.3(a) has too many local optima, so the deterministic algorithms cannot determine the right direction during the search process. Meanwhile, the fitness landscape shown in Figure 1.3(b) exhibits deceptiveness and tends to mislead the deterministic algorithms away from the global optimum. Figures 1.3(c) and 1.3(d) show cases where the global optima of the objective functions are located on plateaus and the fitness functions do not provide any meaningful gradient information to guide the deterministic algorithms during the search.
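To make the contrast concrete, here is a minimal gradient-descent sketch; the step size, iteration budget, starting point, and the Rastrigin-like test function are assumed values chosen for illustration. On the unimodal function the search reaches the global optimum, while on the multimodal one it stalls in the local optimum nearest its starting point.

```python
import math

def descend(f_grad, x0, step=0.001, iters=5000):
    """Plain gradient descent: repeatedly follow the negative gradient with a fixed step."""
    x = x0
    for _ in range(iters):
        x -= step * f_grad(x)
    return x

# Unimodal case: ObjV(x) = x^2, gradient 2x, global minimum at x = 0.
x_uni = descend(lambda x: 2 * x, x0=3.0)

# Multimodal (Rastrigin-like) case: ObjV(x) = x^2 + 10(1 - cos(2*pi*x)),
# whose gradient is riddled with sign changes near every integer.
g_multi = lambda x: 2 * x + 20 * math.pi * math.sin(2 * math.pi * x)
x_multi = descend(g_multi, x0=3.0)

print(round(x_uni, 3))    # ends near the global optimum at 0
print(round(x_multi, 3))  # trapped in a local optimum near x = 3
```

The same update rule succeeds or fails depending solely on whether the gradient points towards the global optimum, which is the essence of the limitation illustrated by Figure 1.3.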


Figure 1.3: Different properties of difficult fitness landscapes: (a) multimodal, (b) deceptive, (c) neutral, and (d) needle-in-a-haystack (Blum et al., 2008, Weise, 2008).

1.2.2 Probabilistic Algorithm

As depicted in Figure 1.3, objective functions with difficult fitness landscapes impose severe challenges on the deterministic algorithms, which inevitably leads to poor optimization results. The inferior performance of these exhaustive approaches eventually ushered in the era of the stochastic-based probabilistic algorithms. Unlike the deterministic approaches, the probabilistic algorithms are derivative-free and tend to exhibit relatively resilient search performance on various types of optimization problems, including those with multimodal, deceptive, or non-continuous fitness landscapes. Most of the probabilistic algorithms are Monte Carlo-based (Krauth, 1996), considering that these algorithms employ randomization in determining the solutions of global optimization problems (Chetty and Adewumi, 2013).

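As a concrete sketch of such randomized solution acceptance, the loop below applies a Boltzmann-style rule exp(-Δ/T) of the kind used by simulated annealing, which is introduced below; the multimodal test function, the cooling schedule, the step size, and the temperature values are all assumed for illustration only.

```python
import math
import random

def objective(x):  # assumed multimodal 1-D test function
    return x * x + 10 * (1 - math.cos(2 * math.pi * x))

random.seed(1)
x = best = 4.0
best_val = objective(best)
temp = 10.0
for _ in range(20000):
    candidate = x + random.gauss(0, 0.5)
    delta = objective(candidate) - objective(x)
    # Boltzmann acceptance: always take improvements; accept worse moves
    # with probability exp(-delta / T), so the search can escape local optima.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
        if objective(x) < best_val:
            best, best_val = x, objective(x)
    temp = max(0.999 * temp, 1e-3)  # geometric cooling schedule

print(round(best, 2), round(best_val, 2))  # best solution found and its value
```

Early on, the high temperature makes almost any move acceptable (pure exploration); as T cools, the rule degenerates into greedy improvement (pure exploitation).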

Metaheuristics (Bianchi et al., 2009, Blum and Roli, 2003) are another important element found in the probabilistic algorithms. Specifically, a metaheuristic helps a probabilistic algorithm decide which candidate solutions are to be processed and how new candidate solutions are generated based on the currently gathered information. This process is performed stochastically, either by employing the statistical information obtained from samples in the search space or based on an abstract model inspired by a natural phenomenon or a physical process (Li, 2010, Weise, 2008). For instance, simulated annealing (SA) (Kirkpatrick et al., 1983) utilizes the Boltzmann probability factor of atom configurations in solidifying metal melts to determine which candidate solutions are to be processed next. On the other hand, extremal optimization (EO) (Boettcher and Percus, 1999) takes its inspiration from the metaphor of thermal equilibria in physics. An important class of probabilistic Monte Carlo metaheuristics is evolutionary computation (EC) (De Jong, 2006), which is also a class of soft computing as well as a part of artificial intelligence, as illustrated in Figure 1.1. EC-based probabilistic algorithms rely on the concept of a population of individuals to solve a given problem, where each individual represents a candidate solution in the problem search space. The probabilistic search operators of EC algorithms are used to iteratively refine the multiple candidate solutions, in order to ensure that these individuals evolve towards increasingly promising solutions. Two of the most important members of the EC class are evolutionary algorithms (EAs) (Back, 1996) and swarm intelligence (SI) (Bonabeau et al., 1999). The development of the EA-based probabilistic algorithms is inspired by natural evolution in the biological world (Back, 1996).
The probabilistic search operators of EAs that are used to generate new candidate solutions are modeled on natural evolutionary processes such as natural selection and survival of the fittest. The genetic algorithm (GA) (Goldberg and Holland, 1988, Weise, 2008) is a subclass of EA, and this algorithm mimics the metaphor of natural biological evolution via the mechanisms of mutation, crossover, and selection. Evolutionary programming (EP) and evolution strategy (ES) are two other important members of the EA family (De Jong, 2006, Back, 1996). Both of these

algorithms share many similarities in terms of search mechanism, except that EP is not equipped with the recombination operator. Another difference that distinguishes these two EAs is the characteristic of their respective selection operators (Li, 2010). Specifically, EP employs a soft selection mechanism, known as stochastic tournament selection, to offer the individuals with inferior solutions a probabilistic opportunity to survive into the next generation. On the other hand, ES uses deterministic selection (Weise, 2008), i.e., a hard selection mechanism that inhibits the survival of the worst individuals in the next generation. Meanwhile, both genetic programming (GP) (Koza, 1992) and gene expression programming (GEP) (Ferreira, 2001, Ferreira, 2004) are used to evolve computer programs. Unlike in GP, where each individual is encoded as a nonlinear entity of varying size and shape (a parse tree), the individuals in GEP are first expressed as linear strings of fixed length (the genome) and then translated into nonlinear entities of different sizes and shapes (simple diagram representations of expression trees) (Ferreira, 2001, Ferreira, 2004). GEP is more versatile than GP, considering that the former successfully creates the separate entities of genome (genotype) and expression tree (phenotype) (Ferreira, 2001, Ferreira, 2004). SI is another important class of probabilistic Monte Carlo metaheuristics in EC. Generally, SI takes its inspiration from natural and artificial systems composed of populations of simple agents that coordinate using decentralized control and self-organization (Bonabeau et al., 1999). Compared to EAs, which primarily focus on the competitive behavior in biological evolution, SI studies the collective behaviors that emerge from the local interactions of the individuals with each other and with their environments, which could eventually lead to the emergence of intelligent global behavior (Bonabeau et al., 1999).
One example of an SI-based global optimization algorithm is ant colony optimization (ACO) (Dorigo and Blum, 2005, Dorigo et al., 1996), which is inspired by the foraging behavior of ants. This algorithm was initially proposed to search for an optimal path in a graph with a set of software agents called "artificial ants". Particle swarm optimization (PSO) (Kennedy and Eberhart, 1995, Banks et al., 2007, del Valle et al., 2008) is another well-known member of SI, and it is inspired by the collaborative behavior of a swarm of fish or birds in searching

for food. Recently, a new form of SI, namely Teaching and Learning Based Optimization (TLBO) (Rao et al., 2011, Rao et al., 2012), has been proposed. Unlike ACO and PSO, which emulate the collective behaviors of insects and animals, the development of TLBO is motivated by the human teaching and learning paradigm in school. Besides these three algorithms, more inspiring SI-based algorithms have been proposed in the past decade to capitalize on the benefits of the decentralized and self-organizing behaviors of SI systems in tackling various types of challenging optimization problems. Considering that this thesis focuses on developing new PSO algorithms, the following section discusses the basic concept of PSO in detail.

1.3 Particle Swarm Optimization

As explained in the previous subsection, the development of PSO is motivated by the collective and collaborative behaviors of bird flocking and fish schooling in searching for food (Kennedy and Eberhart, 1995, Eberhart and Shi, 2001, Banks et al., 2007, del Valle et al., 2008), as illustrated in Figure 1.4. This SI-based algorithm was first proposed by Kennedy and Eberhart in 1995. As a population-based probabilistic Monte Carlo metaheuristic, PSO employs a set of software agents called particles that fly through the multidimensional problem hyperspace with given velocities to simultaneously evaluate many points in the search space. Specifically, the position of each particle in the search space represents a potential solution of a given optimization problem. Meanwhile, the location of the food source that these particles are searching for is regarded as the global optimum of the problem. Compared to most EC-based algorithms, PSO particles have a more effective memory capability, considering that these particles are able to remember their previous best positions (self-cognitive experience) as well as the neighborhood best position (social experience). These two experiences are the vital components of the PSO learning strategy, and they are used to adjust the flying direction of each PSO particle in the search space (Kennedy and Eberhart, 1995, Eberhart and Shi, 2001).
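The particle dynamics described above can be sketched with the canonical inertia-weight velocity and position update; the inertia weight w = 0.7, the acceleration coefficients c1 = c2 = 1.5, the swarm size, and the sphere objective are assumed textbook settings for illustration, not the specific learning strategies developed in this thesis.

```python
import random

random.seed(7)
DIM, SWARM, ITERS = 2, 20, 200
W, C1, C2 = 0.7, 1.5, 1.5        # inertia weight and acceleration coefficients

def obj_v(x):                    # sphere function; global optimum at the origin
    return sum(xi ** 2 for xi in x)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]                 # self-cognitive experience
gbest = min(pbest, key=obj_v)[:]            # social experience (global topology)

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                         + C2 * r2 * (gbest[d] - pos[i][d]))    # social pull
            pos[i][d] += vel[i][d]
        if obj_v(pos[i]) < obj_v(pbest[i]):
            pbest[i] = pos[i][:]            # remember the improved position
    gbest = min(pbest, key=obj_v)[:]

print(round(obj_v(gbest), 6))  # objective of the best solution found, very close to 0
```

Each particle blends its own momentum with pulls towards its personal best and the swarm best, which is exactly the two-experience learning strategy described above.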


Figure 1.4: The collective and collaborative behaviors of (a) bird flocking and (b) fish schooling in searching for food.

During the search process, all particles have a degree of freedom, or randomness, in their movements. This characteristic allows each individual in the particle swarm to scatter around and move independently in the problem search space. Besides navigating the problem search space independently and stochastically, these particles also interact with their neighborhood members via an information sharing mechanism. Specifically, the best performing particle in a particular neighborhood structure announces its location in the search space to its neighborhood members via some simple rules. The social interactions that exist between the particles in the problem search space enable the PSO population to gradually move towards the promising regions from different directions, thereby leading to swarm convergence (Kennedy and Eberhart, 1995, Eberhart and Shi, 2001). Commonly, swarm convergence is attained when the PSO swarm is no longer able to find new solutions or when the algorithm keeps searching only a small subset of the search space (Li, 2010). PSO has captured much attention in the research arena of computational intelligence since its inception, due to its effectiveness and simple implementation in solving optimization problems. For example, a quick browse on IEEE Xplore with a simple query "particle swarm optimization" returns more than 12,000 hits for papers published after the year 2000. The current relevance of PSO can also be shown through the visibility of this topic in the Science Direct database. Figure 1.5 illustrates the number of PSO-related


research publications per year in the Science Direct database. It demonstrates a growing trend, even though the first PSO-related item appeared in 1995, implying that PSO is still a research subject of great interest. To further emphasize the importance of PSO in the research community, many scientists and engineers have capitalized on this algorithm to solve real-world engineering design problems, because PSO has a fast convergence speed and requires little parameter tuning (del Valle et al., 2008, Banks et al., 2007). Some of these engineering design problems involve power system design (del Valle et al., 2008, AlRashidi and El-Hawary, 2009, Neyestani et al., 2010, Wang et al., 2011, Wang et al., 2013a, Zhang et al., 2012), artificial neural network (ANN) training (Mirjalili et al., 2012, Yaghini et al., 2013), data clustering (Shih, 2006, Yang et al., 2009, Kiranyaz et al., 2010, Sun et al., 2012), data mining (Wang et al., 2007, Özbakır and Delice, 2011, Sarath and Ravi, 2013), parameter estimation and identification of systems (Liu et al., 2008, Modares et al., 2010, Sakthivel et al., 2010), and many other engineering problems (Huang et al., 2009, Lin et al., 2009, Sharma et al., 2009, Yan et al., 2013, Sun et al., 2011).

Figure 1.5: Number of research publications per year for PSO in Science Direct database.


1.4 Challenges of PSO in Global Optimization

Although PSO is a popular choice of optimization technique for solving global optimization problems, earlier research reveals that this SI-based algorithm suffers from several drawbacks. These drawbacks could jeopardize the search performance of PSO and thus restrict the application of this algorithm to real-world problems. This section covers the challenges faced by PSO in global optimization and how these challenges affect the algorithm's optimization capability. One of the main concerns with PSO is that this algorithm and most of its existing variants do not offer an alternative learning phase to the particles when the latter fail to improve their solution quality (i.e., fitness) during the optimization process. Each PSO particle updates its new solution by referring to its self-cognitive experience and its social experience. Considering that some random movements are involved during the search process, there is a probabilistic chance for a particle to produce a new solution of less promising quality (i.e., fitness) than its previous one. Under this scenario, the particle's convergence towards the global optimum solution might slow down, considering that this particle is not on the right trajectory to locate the global optimum. Another challenging issue of PSO is that, although the neighborhood best particle (social experience) is crucial in guiding the PSO swarm during the search process, the neighborhood best particle has the poorest learning strategy for updating itself (Kiranyaz et al., 2010). For the neighborhood best particle, the personal and neighborhood best positions are the same, and this similarity inevitably nullifies the self-cognitive and social components of the particle during the velocity update (Kiranyaz et al., 2010).
As compared to other population members, the neighborhood best particle suffers from a higher risk of stagnating at a local optimum, or at any arbitrary point in the search space, because of the zero velocity produced by this nullifying effect. Consequently, poor optimization results are delivered (Clerc and Kennedy, 2002, van den Bergh, 2002, Ozcan and Mohan, 1999). PSO also suffers from an intense conflict between the exploration and exploitation searches (Shi and Eberhart, 1998). Specifically, exploration encourages the algorithm to

wander around the entire search space to cover the unvisited regions, whereas exploitation emphasizes the local refinement of already found near-optimal solutions. Neither of these two contradictory strategies should be overemphasized, because excessive exploration tends to consume more computational resources, whereas excessive exploitation could lead the PSO towards premature convergence. In general, premature convergence is undesirable; it is identified when the PSO converges to a local optimum while better locations than the currently searched area exist elsewhere in the fitness landscape (Ozcan and Mohan, 1999, Clerc and Kennedy, 2002, van den Bergh and Engelbrecht, 2004, Liang et al., 2006, van den Bergh and Engelbrecht, 2006). The universality and robustness of PSO and most of its variants in tackling a diverse set of global optimization problems with different properties are also questionable. The inability of PSO to cope equally well with all problems is attributed to the fact that different problems have differently shaped fitness landscapes. The difficulty is further compounded by the fact that in certain benchmarks, such as the composition functions (Suganthan et al., 2005), the shape of the local fitness landscape in different subregions may be significantly different (Li, 2010, Li et al., 2012). To solve these complex problems effectively, different PSO particles should play different roles (i.e., perform different learning strategies) in different locations of the fitness landscape and at different search stages. However, most of the PSO variants proposed so far use only one type of learning strategy and thus have limited choices of exploration/exploitation strengths to perform the search in different subregions of the search space (Wang et al., 2011).
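The nullified update of the neighborhood best particle, discussed at the start of this section, can be reproduced numerically; the canonical velocity rule and the coefficient values w = 0.7, c1 = c2 = 1.5 are assumed here for illustration.

```python
import random

w, c1, c2 = 0.7, 1.5, 1.5
pos, vel = 2.0, 0.0      # the best particle, momentarily at rest
pbest = gbest = pos      # its personal and neighborhood best positions coincide

for _ in range(100):
    r1, r2 = random.random(), random.random()
    # Both attraction terms are zero because (pbest - pos) == (gbest - pos) == 0,
    # so the velocity update degenerates to vel = w * vel.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel

print(pos, vel)  # → 2.0 0.0: the particle stagnates regardless of the random numbers
```

No choice of random numbers can move this particle; only an external perturbation or a dedicated learning strategy for the best particle, as pursued in this thesis, can restart its search.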
Finally, it can also be observed that the original PSO and most of its variants tend to restrict each particle to one type of learning strategy, applied at either the population level or the individual level. At the population level, all particles in the population perform the same learning strategy. Meanwhile, at the individual level, each particle can choose its desired learning strategy based on some decision-making mechanism. In both approaches, the particle performs the same learning strategy in all dimensional components, without considering the particle's characteristics in each dimension of the search space.

According to the "two steps forward, one step back" phenomenon explained by van den Bergh and Engelbrecht (2004), different particles in PSO could have different characteristics in different dimensions of the search space. These unique characteristics should be capitalized on to assign an appropriate learning strategy to each dimensional component of a particle.

1.5 Research Objectives

As discussed in the previous subsection, there are several main issues encountered by the original PSO and some of its existing variants, which tend to restrict their optimization capabilities. This thesis aims to alleviate the aforementioned issues by developing a few enhanced PSO algorithms with robust learning strategies for global optimization problems. The objectives of this research work are presented as follows:

1. To devise an alternative learning phase for the particles, as well as to introduce a unique learning strategy for the neighborhood best particle.

2. To develop two adaptive task allocation mechanisms for the PSO population to achieve better regulation of the exploration/exploitation searches of the particles without significantly compromising the algorithm's convergence speed.

3. To propose an innovative mechanism that enables the particles to adaptively choose appropriate learning strategies for robust searching in various types of optimization problems.

4. To develop a dimension-level task allocation mechanism for PSO that enables each dimensional component of a PSO particle to select an appropriate learning strategy based on its characteristics in each dimension of the search space.

1.6 Research Scopes

The scope of this research focuses on the design and development of enhanced PSO algorithms with robust learning strategies. Specifically, these enhanced PSO variants are confined to solving single-objective, unconstrained global minimization problems with static and non-noisy fitness landscapes.

In this thesis, all proposed PSOs are tested on a total of 30 benchmark problems with different types of fitness landscapes to conclusively evaluate the algorithms' performance. Considering that each benchmark problem is specifically designed to evaluate certain properties of an algorithm, these problems are useful for verifying the viability of the fundamental concepts introduced into the proposed PSO variants. To investigate the feasibility of the proposed algorithms in real-world applications, three engineering design problems are also employed in the performance evaluations. Finally, the proposed algorithms, alongside numerous published PSO variants, are coded and tested in MATLAB R2012b on a machine with an Intel Core i7-2600 CPU @ 3.40 GHz and 4 GB of RAM.

1.7 Thesis Outline

This chapter briefly introduced the research background and some preliminary knowledge regarding global optimization and the algorithms used to solve this task, particularly PSO. The problem statements, research objectives, and research scope of this work are included in this chapter. The rest of this thesis is structured as follows.

In Chapter 2, a comprehensive review of the existing PSO variants is presented. The mechanism of the recently proposed TLBO is also described, considering that this algorithm plays an important role in the next chapter. The advantages and limitations of these PSO variants and TLBO are reviewed to gain a deeper understanding of their conceptual successes and shortcomings. The 30 benchmark problems and three engineering design problems used in the performance evaluations are also discussed in that chapter. Finally, the details of the performance metrics and the statistical analyses used in the performance comparisons are provided.

In Chapter 3, the first proposed PSO algorithm, namely the Teaching and Peer-Learning Particle Swarm Optimization (TPLPSO), is introduced. This chapter starts with the research ideas that led to the development of TPLPSO, followed by the main modifications


introduced. Simulation results of TPLPSO in solving the benchmark and engineering design problems are obtained and compared with those of the state-of-the-art algorithms.

Chapter 4 proposes the second enhanced PSO algorithm, namely the Adaptive Two-Layer Particle Swarm Optimization with Elitist Learning Strategy (ATLPSO-ELS). The adaptive task allocation mechanisms of ATLPSO-ELS are described in sufficient detail. At the end of Chapter 4, comparative studies on the performance of ATLPSO-ELS and its peer algorithms are conducted based on their simulation results.

Chapter 5 presents the technical details of the third enhanced PSO algorithm, namely the Particle Swarm Optimization with Adaptive Time-Varying Topology Connectivity (PSO-ATVTC). Several design issues of PSO-ATVTC are carefully addressed. Finally, the experimental results are presented, analyzed, compared, and discussed.

The fourth proposed PSO algorithm, namely the Particle Swarm Optimization with Dual-Level Task Allocation (PSO-DLTA), is described in Chapter 6. The research idea that inspired the development of PSO-DLTA is first explained, followed by the methodology of this algorithm. At the end of this chapter, the effectiveness of the proposed PSO-DLTA is investigated through an extensive set of simulations. The overall performances of the four PSO algorithms proposed in this research, i.e., TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA, are also compared and discussed.

Finally, Chapter 7 draws the conclusions and highlights the contributions of this research. A number of interesting directions to be pursued are detailed as future work.


CHAPTER 2

LITERATURE REVIEW

2.1 Introduction

Particle swarm optimization (PSO) has emerged as a promising optimization tool for solving various types of global optimization problems and real-world engineering design problems since its inception. A demanding yet stimulating undertaking of any PSO-based optimization technique is to solve a given optimization problem with the best search accuracy and the fastest convergence speed, while incurring the least computational complexity. These contradictory goals have driven the advancement of PSO-based optimization techniques, as various innovative approaches have been proposed in the past decades to improve the algorithm's performance. This chapter starts with a comprehensive review of the basic PSO algorithm and the recently proposed PSO variants. Specifically, Section 2.2 offers an in-depth treatment of the prevalent subject matters popularly discussed in the PSO literature. The recently proposed Teaching-Learning-Based Optimization (TLBO) is briefly reviewed in Section 2.3 for an insight into its theoretical and methodological fundamentals. The 30 benchmark problems and the three engineering design problems used for the PSO performance evaluation are introduced in Section 2.4. In Section 2.5, the performance metrics and the statistical analyses employed in the performance comparison of algorithms are presented. Finally, Section 2.6 concludes this chapter.

2.2 Particle Swarm Optimization and Its Variants

This section discusses the mechanism of the basic PSO (BPSO). In what follows, comprehensive reviews of several state-of-the-art PSO variants are provided. The advantages and limitations of these PSO variants are also summarized in this section to gain a deeper understanding of their conceptual successes and shortcomings.


2.2.1 Basic Particle Swarm Optimization

In PSO, the swarm is modeled as a group of particles with negligible mass and volume that navigate through a D-dimensional hyperspace. D denotes the dimensionality of the search space and is interpreted as the number of variables being optimized in a given problem. In the context of optimization, the location of each particle in the hyperspace represents a potential solution of a given problem, while the fitness value of each particle determines the solution quality. The concept of fitness value must not be confused with that of the objective function value (ObjV) mentioned in Section 1.1. In the framework of evolutionary optimization terminology, "fitness" is a measure of the goodness of each solution. All optimization techniques are based on fitness optimization, which leads to problem-dependent objective function minimization/maximization at the end of the optimization process. In line with the scope of this research work, this thesis considers minimization problems, because the benchmark and real-world engineering problems used to evaluate the algorithms' search performance have their global optima at the valleys of the fitness landscapes instead of the peaks. Thus, to state that solution A is better or fitter than solution B, the ObjV of the former must always be lower than that of the latter, i.e., ObjV(A) < ObjV(B). In general, the current state of each particle is associated with two vectors, namely the position vector Xi = [Xi1, Xi2, …, XiD] and the velocity vector Vi = [Vi1, Vi2, …, ViD], where i denotes the particle's index in the search space. Unlike many other CI-based algorithms, each PSO particle i has the capability of memorizing the personal best experience (i.e., self-cognitive experience) that it has ever attained, and this experience is represented by the personal best position vector Pi = [Pi1, Pi2, …, PiD].
Another notable best experience tracked by particle i is the group best experience (i.e., social experience) obtained by any particle in the neighborhood of particle i. This experience is expressed as the neighborhood best position vector Pn = [Pn1, Pn2, …, PnD], and its value depends on the particle's topological structure. For instance, the topology of the global version of PSO, as illustrated in Figure 2.1(a), is fully-connected, given that each particle takes the whole population as its topological neighbors. In


Figure 2.1: Topological structures of PSO with (a) the fully-connected topology and (b) the local ring topology. Each circle represents one particle, while each line represents the connection of one particle to others in the population (del Valle et al., 2008).

the fully-connected topology, a particle i uses the best experience of the entire swarm as its neighborhood best experience. This best value is known as the global best position and is denoted as Pg = [Pg1, Pg2, …, PgD]. On the other hand, Figure 2.1(b) demonstrates a local version of PSO with the ring topology, where each particle i only considers its two most adjacent particles (i.e., the particles with indices i - 1 and i + 1) as its neighborhood members. For the ring topology, the best neighborhood experience of particle i is selected from the personal best experiences among the particles i - 1, i, and i + 1. The selected best value is known as the local best position and is expressed as Pl = [Pl1, Pl2, …, PlD]. During the search process, the velocity vector of each particle i is stochastically adjusted according to its self-cognitive experience Pi and the social experience Pn (Kennedy and Eberhart, 1995, Eberhart and Shi, 2001). The inclusion of the social experience during the velocity updating process implies collective and collaborative behaviors in the PSO swarm, given that the most successful particle shares its useful information with its neighborhood members to guide the search. The new position of particle i in the search space is subsequently computed based on the updated velocity vector. Mathematically, at the (t + 1)-th iteration of the search process, the d-th dimension of particle i's velocity, Vi,d(t + 1), and position, Xi,d(t + 1), are updated as follows:


Figure 2.2: Particle i's trajectory in the two-dimensional fitness landscape (Li, 2010).

Vi,d(t + 1) = ω·Vi,d(t) + c1·r1,d·(Pi,d(t) - Xi,d(t)) + c2·r2,d·(Pn,d(t) - Xi,d(t))    (2.1)

Xi,d(t + 1) = Xi,d(t) + Vi,d(t + 1)    (2.2)

where i = 1, 2, …, S; S is the population size of the particle swarm; c1 and c2 are the acceleration coefficients that control the influences of the self-cognitive (i.e., Pi) and social (i.e., Pn) components of the particle, respectively; r1,d and r2,d are two uniformly distributed random numbers in the range [0, 1]; and ω is the inertia weight used to determine how much of the previous velocity of a particle is preserved (Shi and Eberhart, 1998). Figure 2.2 demonstrates the trajectory of particle i in a two-dimensional fitness landscape. As shown on the right-hand side of Equation (2.1) and in Figure 2.2, the velocity of each particle is decomposed into three components (del Valle et al., 2008, Li, 2010). The first component is called the inertia component, given that it models the tendency of particle i to persist in its previous search direction and enables it to search more unexplored regions. Meanwhile, the second and third components are known as the self-cognitive and social components of particle i, respectively. The self-cognitive component treats particle i as an isolated being and adjusts the particle's behavior according to its own experience. The social component, in contrast, encourages each particle i to


ignore its own experience and adjust its trajectory according to the best particle in the neighborhood. Both the inertia and self-cognitive components govern the exploration capability of particle i, while particle i's exploitation behavior is influenced by the social component. Once the updated position of particle i is obtained, the fitness of this new solution is evaluated. Specifically, the objective function value of the updated position is computed as ObjV[Xi(t+1)] and then compared with ObjV[Pi(t)], i.e., the objective function value of the personal best position of particle i. For a minimization problem, the updated Xi(t+1) is considered to have better fitness than Pi(t) if the former has a lower objective function value than the latter, i.e., ObjV[Xi(t+1)] < ObjV[Pi(t)]. In this scenario, the personal best position of particle i is replaced by the updated Xi(t+1) at iteration (t + 1), as illustrated in Equation (2.3). On the other hand, if ObjV[Xi(t+1)] ≥ ObjV[Pi(t)], the updated Xi(t+1) of particle i has worse fitness than its Pi(t). Thus, the personal best position of particle i is not replaced at iteration (t + 1), as shown in Equation (2.3).

Pi(t + 1) = { Xi(t + 1),  if ObjV[Xi(t + 1)] < ObjV[Pi(t)]
           { Pi(t),      otherwise                            (2.3)
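To make Equations (2.1) to (2.3) concrete, the update rules can be sketched in a few lines of Python. This is an illustrative sketch only, not code from this thesis; the values ω = 0.729 and c1 = c2 = 1.494 are common defaults from the PSO literature and are assumptions here:

```python
import numpy as np

def update_particle(x, v, p_i, p_n, w=0.729, c1=1.494, c2=1.494, rng=None):
    """One velocity/position update for a single particle, per Eqs. (2.1)-(2.2).

    x, v     : current position and velocity vectors (length D)
    p_i, p_n : personal best and neighborhood best position vectors
    """
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(x.shape)  # r1,d ~ U[0, 1], drawn independently per dimension
    r2 = rng.random(x.shape)  # r2,d ~ U[0, 1]
    v_new = w * v + c1 * r1 * (p_i - x) + c2 * r2 * (p_n - x)  # Eq. (2.1)
    x_new = x + v_new                                          # Eq. (2.2)
    return x_new, v_new

def update_pbest(x_new, p_i, obj):
    """Personal best replacement rule of Eq. (2.3) for a minimization problem."""
    return x_new if obj(x_new) < obj(p_i) else p_i
```

Note that the random numbers are redrawn per dimension and per iteration, which is what makes the trajectory stochastic.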

The neighborhood best position (i.e., Pn) of each particle, on the other hand, is identified from the personal best position vectors of all particles located in the same neighborhood. At each iteration t, the neighborhood best position is identified as follows:

Pn(t) = arg min_{i ∈ [1, Sns]} ObjV(Pi(t))    (2.4)

where Sns denotes the neighborhood size of the particle swarm; this value depends on the topological structure of the PSO. For example, the value of Sns in the fully-connected topology [as illustrated in Figure 2.1(a)] is equal to the population size, i.e., Sns = S, considering that each particle in this topology takes the whole population as its topological neighbors. For the ring topology [as depicted in Figure 2.1(b)], each particle i only considers its two most adjacent particles as its neighborhood members, and therefore Sns = 3.
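Equation (2.4) amounts to an argmin over the personal bests of the neighborhood, with the neighbor index set determined by the topology. The following is a hedged Python sketch (illustrative only, not code from this thesis):

```python
import numpy as np

def neighborhood_best(pbest, pbest_obj, i, topology="ring"):
    """Select particle i's neighborhood best position Pn per Eq. (2.4).

    pbest     : (S, D) array of personal best positions Pi
    pbest_obj : length-S array of ObjV(Pi) values
    """
    S = len(pbest_obj)
    if topology == "full":          # fully-connected: Sns = S, so Pn = Pg
        neighbors = np.arange(S)
    else:                           # ring: particles i-1, i, i+1 (wrapping), Sns = 3
        neighbors = np.array([(i - 1) % S, i, (i + 1) % S])
    best = neighbors[np.argmin(pbest_obj[neighbors])]   # arg min of ObjV(Pi)
    return pbest[best]
```

Under the fully-connected topology every particle receives the same Pn (the global best Pg), whereas under the ring topology Pn generally differs from particle to particle, which slows information flow but preserves diversity.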


BPSO
Input: Population size (S), dimensionality of problem space (D), objective function (F), the initialization domain (RG), problem's accuracy level (ε)
1:  Generate initial swarm and set up parameters for each particle;
2:  while the termination criterion is not satisfied do
3:      for each particle i do
4:          Update the velocity Vi and position Xi using Equations (2.1) and (2.2), respectively;
5:          Perform fitness evaluation on the updated Xi;
6:          if ObjV(Xi) < ObjV(Pi) then
7:              Pi = Xi, ObjV(Pi) = ObjV(Xi);
8:              if ObjV(Xi) < ObjV(Pg) then
9:                  Pg = Xi, ObjV(Pg) = ObjV(Xi);
10:             end if
11:         end if
12:     end for
13: end while
Output: The best found solution, i.e., the global best particle's position (Pg)

Figure 2.3: Basic PSO algorithm.

Without loss of generality, the remainder of this thesis refers to the BPSO as the global version of PSO, as illustrated in Figure 2.1(a). In other words, the neighborhood best position Pn of BPSO refers to the global best position Pg. The implementation of BPSO is illustrated in Figure 2.3. Several stopping criteria have been proposed in the literature to terminate BPSO. Specifically, BPSO can be terminated when (1) the predefined maximum number of iterations or function evaluations is reached, (2) the predefined accuracy of the solution has been achieved, (3) the fitness improvement of the swarm becomes insignificant, or (4) the normalized radius of the swarm is close to zero, implying sufficient convergence of the swarm. In this thesis, the maximum number of fitness evaluations (FEmax) is selected as the termination criterion because the fitness evaluation process consumes more computational resources than the other PSO mechanisms during the optimization process (Feng et al., 2013, Mezura-Montes and Coello, 2005). The iteration count must not be confused with the number of fitness evaluations (FE). The former is updated when all particles in the population have updated their respective new positions and evaluated their fitness. Meanwhile, FE is updated whenever a particular particle has updated and evaluated its new position in the search space. Intuitively, the number of FEs consumed in an optimization


problem is higher than the iteration number, and thus it serves as a better indicator of the computational cost required by an algorithm to solve a given optimization problem.
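The loop of Figure 2.3, terminated on an FEmax budget as discussed above, can be put together in a compact sketch. This is an illustrative implementation under assumed common settings (ω = 0.729, c1 = c2 = 1.494, a symmetric search range, position clipping), not the exact code used in this thesis:

```python
import numpy as np

def bpso(obj, D, S=30, fe_max=10000, lb=-100.0, ub=100.0,
         w=0.729, c1=1.494, c2=1.494, seed=0):
    """Global-version BPSO (cf. Figure 2.3): minimizes obj over [lb, ub]^D."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (S, D))              # initial positions
    V = np.zeros((S, D))                         # initial velocities
    P = X.copy()                                 # personal bests Pi
    P_obj = np.array([obj(x) for x in X])        # ObjV(Pi)
    fe = S                                       # S evaluations consumed so far
    g = int(np.argmin(P_obj))                    # index of the global best Pg
    while fe < fe_max:
        for i in range(S):
            r1, r2 = rng.random(D), rng.random(D)
            V[i] = w * V[i] + c1 * r1 * (P[i] - X[i]) + c2 * r2 * (P[g] - X[i])
            X[i] = np.clip(X[i] + V[i], lb, ub)  # keep within the search range
            f = obj(X[i]); fe += 1               # one FE per updated particle
            if f < P_obj[i]:                     # update Pi (Eq. 2.3)
                P[i], P_obj[i] = X[i].copy(), f
                if f < P_obj[g]:                 # update Pg
                    g = i
            if fe >= fe_max:
                break
    return P[g], P_obj[g]
```

Note that the FE counter fe is incremented once per particle update, while one iteration corresponds to S such updates, mirroring the distinction drawn above.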

2.2.2 Variants of Particle Swarm Optimization

As mentioned in the previous chapter (Section 1.5), the original PSO suffers from some demerits that could degrade its optimization capability and restrict its wider application in real-world problems (Eberhart and Shi, 2001, Banks et al., 2007, del Valle et al., 2008). Numerous research works have been proposed in the past decades to address these drawbacks and to improve the performance of PSO. In order to provide a comprehensive review and a broader view of the state-of-the-art PSO variants in global optimization, the classification scheme depicted in Figure 2.4 is used to pool the PSO variants modified by similar approaches into the same category. Specifically, the modifications and improvements performed on PSO are categorized into four major approaches, namely (1) parameter adaptation, (2) modified population topology, (3) modified learning strategy, and (4) hybridization with the orthogonal experiment design (OED) technique (Montgomery, 1991, Hedayat, 1999). The diverse ideas of the scholars who contributed to the improvement of PSO in each major approach are reviewed comprehensively in the following subsections.

[Figure 2.4 depicts a tree whose root node, "PSO Variants in Global Optimization", branches into four categories: Parameter Adaptation, Modified Population Topology, Modified Learning Strategy, and Hybridization with Orthogonal Experiment Design (OED) Technique.]

Figure 2.4: The classification scheme of state-of-the-art PSO variants in global optimization.


2.2.2(a) Parameter Adaptation

Parameter adaptation is one of the earliest research directions attempted by researchers to improve PSO. This approach studies the effects of the PSO parameters on the dynamical behaviors of the swarm, followed by the tuning of these parameters to alter the particles' movement behavior. Thorough convergence analyses and stability studies of PSO have also led to the introduction of new parameters that are useful for achieving better optimization outcomes. Shi and Eberhart (1998) proposed a parameter called the inertia weight ω to balance the exploration and exploitation capabilities of the PSO swarm. Various strategies have been developed to tune the parameter ω since then. In their earlier work, Shi and Eberhart (1998) suggested that a fixed value of ω lying between 0.8 and 1.2 is able to achieve a good convergence behavior of the swarm. Later, a time-varying scheme that linearly decreases ω with the iteration number was introduced by Shi and Eberhart (1999). Accordingly, the value of ω is initially set to a larger value (i.e., ω = 0.9) to allow the particles to explore the search space in the early stage of optimization. Once the optimal region is located, ω is gradually decreased to 0.4 to refine the optimal search area in the later stage of optimization. Chatterjee and Siarry (2006) and Cai et al. (2008), on the other hand, proposed varying ω in a nonlinear manner. Compared to the linear variation approach, the nonlinear variation of ω enables the particle swarm to explore the search space more aggressively during the early stage of optimization, in order to locate the optimal region at a faster rate. Clerc and Kennedy (2002) performed thorough theoretical studies on the PSO convergence properties and subsequently proposed a similar parameter known as the constriction factor χ. Accordingly, the parameter χ can prevent swarm explosion by providing a damping effect on the particle's trajectory.
Experimental study revealed that the parameters ω and χ are algebraically equivalent when the condition ω = χ = 0.729 is fulfilled (Eberhart and Shi, 2000).

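The inertia weight schedule of Shi and Eberhart (1999) and the Clerc-Kennedy constriction factor can be sketched together. The closed-form expression for χ used below, χ = 2 / |2 - φ - sqrt(φ² - 4φ)| with φ = c1 + c2 > 4, is the commonly cited form and should be checked against the original paper; this is an illustrative sketch, not code from this thesis:

```python
import math

def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight (Shi and Eberhart, 1999):
    starts at w_start for exploration, decays to w_end for refinement."""
    return w_start - (w_start - w_end) * t / t_max

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction factor; requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With the usual choice c1 = c2 = 2.05 (phi = 4.1), the factor evaluates to roughly 0.7298, consistent with the ω = χ = 0.729 equivalence noted above.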

The acceleration coefficients (c1 and c2) are another subject of great interest in the parameter adaptation approach, considering that c1 and c2 govern the exploitation and exploration capabilities of the PSO swarm, respectively. According to the studies performed by Ozcan and Mohan (1999), a PSO particle is observed to oscillate around a sinusoidal path when the value of c = c1 + c2 is set between 0 and 4.0. The oscillation frequency and the complexity of the sinusoidal path increase with the value of c. When c is set larger than 4.0, the particle's trajectory starts to diverge and swarm explosion occurs. Based on their experimental studies, Ozcan and Mohan concluded that the maximum value for c should be 4.0 (Ozcan and Mohan, 1999). However, the values of c1 and c2 need not always be equal to each other, given that the influences of the self-cognitive and social components of the PSO swarm should differ based on the nature of the problem. Suganthan (1999) attempted to improve the PSO performance by linearly decreasing both c1 and c2 with time. However, he observed that PSO with fixed c1 and c2 (i.e., 2.0) yields better solutions than the linearly decreasing scheme. Ratnaweera et al. (2004) continued to investigate the feasibility of dynamic c1 and c2 in improving PSO performance. They re-examined the swarm behavior and found that the self-cognitive component is more important during the early stage of optimization, considering that particles need to wander through the unexplored search regions. In the later stage, the influence of the social component becomes more significant to encourage the PSO swarm to converge towards the already-found optimal regions. Based on these observations, Ratnaweera et al. (2004) proposed a time-varying acceleration coefficient (TVAC) strategy to linearly decrease c1 and increase c2 with time.
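The TVAC schedule just described can be sketched as below. The boundary values (c1: 2.5 → 0.5, c2: 0.5 → 2.5) are those commonly reported for TVAC and are used here as illustrative assumptions:

```python
def tvac(t, t_max, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    """Time-varying acceleration coefficients (TVAC) sketch.

    c1 decreases linearly with iteration t (self-cognitive emphasis early),
    while c2 increases linearly (social emphasis late).
    """
    frac = t / t_max
    c1 = c1_i + (c1_f - c1_i) * frac   # e.g., 2.5 -> 0.5
    c2 = c2_i + (c2_f - c2_i) * frac   # e.g., 0.5 -> 2.5
    return c1, c2
```

At every iteration the returned pair replaces the fixed c1 and c2 in the velocity update of Equation (2.1).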
Two PSO-TVAC variants, namely the PSO-TVAC with mutation (MPSO-TVAC) and the self-organizing hierarchical PSO-TVAC (HPSO-TVAC), were developed in their work. These variants employ mutation and velocity re-initialization strategies, respectively, to alleviate the premature convergence issue. Although the aforementioned works improve PSO, some notable issues can be identified in these early, simple rule-based parameter tuning strategies. First, most of the simple rule-based parameter tuning strategies are time-varying and they tend to

adjust the PSO parameters in a non-reversible manner, i.e., they either increase or decrease the target parameters with time until the predefined limits are met. Consequently, the exploratory nature of the PSO swarm decays with time, and it is impossible to recover the swarm diversity if swarm stagnation occurs in the later stage of optimization. Second, the simple rule-based parameter strategies do not fully utilize the information obtained from the search process to adaptively tune the PSO parameters. This information includes the population diversity, the best fitness value found at a particular stage, etc. Intuitively, the inclusion of this extra information could lead to the development of more robust and adaptive tuning strategies. Finally, it is also observed that all particles in the population are assigned the same parameter values, despite the fact that these particles search in different locations of the search space. As mentioned earlier, some complicated optimization problems comprise different shapes of the local fitness landscape in different subregions of the search space. Intuitively, different parameter values need to be adaptively assigned to different particles in order for PSO to perform robustly on difficult problems. To alleviate the potential drawbacks of the simple rule-based parameter tuning strategies, numerous adaptive parameter tuning strategies have been proposed. Xu (2013) investigated the relationship between the particle velocity and the performance of PSO. Based on experimental studies, an adaptive parameter strategy was developed to vary ω based on the feedback of nonlinear ideal velocity information. Shi and Eberhart (2001) capitalized on the promising ability of fuzzy set theory to model the PSO search process via linguistic description and subsequently designed a fuzzy system to adjust ω. The proposed PSO variant, namely the Fuzzy Adaptive PSO (FAPSO), selected two variables, i.e., the current best performance evaluation and the current ω, as the system inputs to decide the change of ω (the system output). Experimental studies reveal that FAPSO has more promising search accuracy than PSO with a linearly decreasing ω. Later, Juang et al. (2011) proposed an Adaptive Fuzzy PSO (AFPSO) that utilizes fuzzy set theory to adaptively adjust c1 and c2. Unlike FAPSO, AFPSO takes the difference of ObjV(Pg)

at two consecutive iterations as the input of the fuzzy system. Tang et al. (2011) proposed a Feedback Learning PSO with Quadratic Inertia Weight (FLPSO-QIW), where a fitness feedback mechanism is introduced into the TVAC strategy to improve the algorithm's performance. Unlike MPSO-TVAC and HPSO-TVAC, FLPSO-QIW can adaptively assign the c1 and c2 of each particle according to their respective fitness values. In addition, the parameter ω of FLPSO-QIW is updated according to a quadratic function instead of being decreased linearly with time. An Adaptive PSO (APSO) with a more sophisticated and robust parameter adaptation strategy was developed by Zhan et al. (2009). Unlike the previous approaches, the parameters ω, c1, and c2 of each APSO particle are adaptively tuned according to the population distribution. An evolutionary state estimation (ESE) module is incorporated into APSO to identify four evolutionary states, i.e., the exploration, exploitation, convergence, and jumping-out states. The output of the ESE module, namely the evolutionary factor f, is used to adaptively tune the parameters of each APSO particle. Specifically, the value of ω is tuned using a sigmoid mapping, while a set of fuzzy membership functions is employed to adaptively tune the values of c1 and c2. Leu and Yeh (2012) proposed two parameter adaptation strategies in their Grey PSO, capitalizing on the ability of grey relational analysis (Deng, 1989) to compute the grey relational grade between a given reference sequence (i.e., the global best particle Pg) and a given set of comparative sequences (i.e., the other particles). In general, a particle with a larger grey relational grade is closer to the Pg particle and is therefore regarded as being in the exploitation state, and vice versa. In Grey PSO, the parameters ω, c1, and c2 of each particle are adaptively adjusted in different iterations, considering that the grey relational grade of each particle varies as the search process progresses. Hu et al. (2013) proposed an innovative approach to adaptively tune the parameters ω, c1, and c2. Accordingly, the parameter control is formulated as a convex optimization problem, which aims to minimize the distance between a randomly selected particle and the Pg particle. The updated parameters of the randomly selected particle can be obtained by solving the convex optimization problem via the subgradient method (Nedic,

2002). It is noteworthy that the adaptive parameter control mechanism developed by Hu et al. (2013) explicitly directs the parameter changes instead of providing fuzzy guidelines, as observed in FAPSO, AFPSO, and APSO. In general, the PSO variants equipped with adaptive parameter adaptation strategies tend to exhibit better search performance than those with simple rule-based parameter adaptation strategies. Nevertheless, the complexities and computing times of the former PSO variants are usually much higher than those of the latter, considering that more complicated decision-making mechanisms are instilled in these adaptive strategies to achieve proper tuning of the parameter values. For example, fuzzy membership functions are implemented in FAPSO, AFPSO, and APSO to adaptively tune the parameters ω, c1, and c2. Another notable drawback of the adaptive parameter adaptation strategies is that additional parameters are needed for the proposed adaptive mechanisms. This drawback could restrict the applicability of these PSO variants in solving real-world problems, considering that laborious work is required to obtain the optimal values of these

Table 2.1: Comparison of the simple rule-based and adaptive parameter adaptation strategies

Type: Simple rule-based (Shi and Eberhart, 1998, Suganthan, 1999, Clerc and Kennedy, 2002, Ratnaweera et al., 2004, Chatterjee and Siarry, 2006, Cai et al., 2008)
Description: Parameter values are fixed or are varied with time in a non-reversible manner based on a linear or nonlinear function.
Advantages: Simpler implementation; very few or no additional parameters are introduced; requires lower computing time.
Disadvantages: Parameter values of all particles are the same; unsatisfactory performance on a broader spectrum of problems.

Type: Adaptive (Shi and Eberhart, 2001, Zhan et al., 2009, Juang et al., 2011, Tang et al., 2011, Leu and Yeh, 2012, Hu et al., 2013, Xu, 2013)
Description: Parameter values are adaptively varied with time in a reversible manner based on each particle's current search information.
Advantages: Parameter values of all particles are different and adaptively varied across the search stages; robust performance in solving various types of problems.
Disadvantages: Complex implementation; more additional parameters are introduced; requires higher computing time.


newly introduced parameters. A comparison of the simple rule-based and adaptive parameter adaptation strategies is summarized in Table 2.1.

2.2.2(b) Modified Population Topology

Population topology is another area of focus for improving PSO's performance. Specifically, it decides the information flow rate of the best solution within the swarm, considering that different topological structures establish different communication and information sharing mechanisms in the population (Kennedy, 1999, Kennedy and Mendes, 2002). The existing population topologies of PSO are generally divided into two major types, i.e., static and dynamic topologies. Two of the most commonly used static topologies in PSO are the fully-connected topology and the ring topology, as illustrated in Figures 2.1(a) and 2.1(b), respectively. As mentioned earlier, the neighborhood of a particle in the fully-connected topology consists of all particles in the swarm, which enables each particle to fully access the information of all other members of the topology (Kennedy, 1999, Kennedy and Mendes, 2002). In contrast, the particles in the ring topology have more restricted access to other population members, given that each particle is only allowed to interact with its two immediate neighbors (Kennedy, 1999, Kennedy and Mendes, 2002). Other static topologies proposed in the earlier literature include the wheel topology and the von Neumann topologies with two-dimensional (2-D) and three-dimensional (3-D) configurations (del Valle et al., 2008, Banks et al., 2007). In the wheel topology, as shown in Figure 2.5(a), all particles in the population are isolated from one another and their information is fully accessed by a leader called the focal particle. Meanwhile, as depicted in Figures 2.5(b) and 2.5(c), the 2-D and 3-D configurations of the von Neumann topology are represented as a rectangular lattice folded like a torus and as a 3-D wire-frame pyramid, respectively.
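The static topologies above differ only in how each particle's neighbor index set is built. A hedged sketch for three of them follows (the index conventions, such as row-major numbering of the lattice, are illustrative assumptions, not definitions from this thesis):

```python
def ring_neighbors(i, S):
    """Ring topology: particle i communicates with i-1 and i+1 (wrapping)."""
    return [(i - 1) % S, (i + 1) % S]

def wheel_neighbors(i, S, focal=0):
    """Wheel topology: every particle communicates only with the focal particle,
    while the focal particle sees everyone else."""
    return list(range(1, S)) if i == focal else [focal]

def von_neumann_neighbors(i, rows, cols):
    """2-D von Neumann lattice folded into a torus: up/down/left/right
    neighbors of particle i under row-major numbering."""
    r, c = divmod(i, cols)
    return [((r - 1) % rows) * cols + c,   # up
            ((r + 1) % rows) * cols + c,   # down
            r * cols + (c - 1) % cols,     # left
            r * cols + (c + 1) % cols]     # right
```

The neighborhood best of Equation (2.4) is then simply the best personal best over the returned index set (plus the particle itself, depending on the convention adopted).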
Inspired by the social behavior of tribes, Carvalho and Bastos-Filho (2008) proposed a clan topology in their Clan PSO, where each clan is formed by a group of particles that communicate with one another through a fully-connected topology. Figure 2.6 depicts a clan


Figure 2.5: Other static topologies: (a) wheel topology, (b) von Neumann topology with 2-D configuration, and (c) von Neumann topology with 3-D configuration (del Valle et al., 2008).


Figure 2.6: Population structures of the clan topologies where the leaders’ conference occur in (a) fully-connected topology and (b) ring topology (Carvalho and Bastos-Filho, 2008).

topology, which comprises four clans (A, B, C, and D) of five particles each. Another notable mechanism proposed in the Clan PSO is the leaders' conference, which allows information sharing among the clan leaders. A clan leader refers to the best performing particle in each clan (i.e., particles 3, 4, 1, and 2 in clans A, B, C, and D, respectively). This information sharing mechanism allows the clan leaders to acquire new information from other clans and then spread this information to their clan members when they return to their respective clans (Carvalho and Bastos-Filho, 2008). As shown in Figures 2.6(a) and 2.6(b), the


information exchange mechanism between the clan leaders could occur via the fully-connected topology (global conference) and the ring topology (local conference), respectively. Many variants of Clan PSO (Bastos-Filho et al., 2009, Pontes et al., 2011) have been developed since the inception of the clan topology, considering that this topology could improve the algorithm's convergence speed (Carvalho and Bastos-Filho, 2008). Numerous experiments have been performed to investigate the effect of different static topologies on the swarm behavior (Kennedy, 1999, Kennedy and Mendes, 2002). It is observed that the PSO with fully-connected topology performs better in simple unimodal problems due to its rapid convergence speed. Meanwhile, the PSO with ring topology is found to be less susceptible to the local optima and thus is useful in dealing with complex multimodal problems. Realizing the fact that different static topologies could perform better in different types of problems, researchers attempted to combine the advantages of different static topologies into the PSO to enhance its robustness in solving a broader spectrum of problems. The Unified PSO (UPSO) developed by Parsopoulos and Vrahatis (2004) is one of the PSO variants inspired by this idea. The velocity adjustment of particle i in UPSO is performed as follows:

Gi = χ[vi + c1r1(Pi − Xi) + c2r2(Pg − Xi)]                                (2.5)

Li = χ[vi + c1r3(Pi − Xi) + c2r4(Pl − Xi)]                                (2.6)

Ui = (1 − u)Li + uGi                                                      (2.7)

where Gi and Li represent the velocities of particle i computed from the fully-connected and ring topologies, respectively; χ is the constriction factor; r1 to r4 denote random numbers in the range of [0, 1]; Ui is the unified velocity of particle i; and u is the unification factor used to balance the exploration (fully-connected topology) and exploitation (ring topology) searches of UPSO. Parsopoulos and Vrahatis (2004) further enhanced the exploration and exploitation capabilities of UPSO by introducing a random number generated from a normal distribution into Equation (2.7). Kathrada (2009) proposed an acceleration coefficient heuristic to combine the fully-connected and ring topologies in the Flexible PSO (FlexiPSO). Both of the Pg and Pl


1: if rand < 0.333 then
2:     c1 = 2.000, c2 = 0.000, c3 = 0.000;
3: else if rand > 0.666 then
4:     c1 = 0.000, c2 = 2.000, c3 = 0.000;
5: else
6:     c1 = 1.333, c2 = 1.333, c3 = 1.333;
7: end if

Figure 2.7: Acceleration coefficient heuristic proposed in FlexiPSO (Kathrada, 2009).

positions are employed to compute the velocity of particle i in FlexiPSO, as shown in Equation (2.8), where r1 to r3 denote random numbers in the range of [0, 1]. According to the proposed acceleration coefficient heuristic (as depicted in Figure 2.7), each FlexiPSO particle has the probabilistic opportunity (determined by a random number rand in the range of [0, 1]) to be assigned as a hill climber (only Pi is considered), a stochastic hill climber (only Pg is considered), or to use the information obtained from both the fully-connected and ring topologies (all of Pi, Pg, and Pl are considered).

Vi = ωVi + c1r1(Pi − Xi) + c2r2(Pg − Xi) + c3r3(Pl − Xi)                  (2.8)
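The two topology-combination heuristics discussed above can be sketched in a few lines. This is an illustrative minimal implementation, not the authors' code: the constriction factor chi = 0.729 with c1 = c2 = 2.05 (the commonly used constriction settings) and the neutral unification factor u = 0.5 are assumptions, and all function names are hypothetical.

```python
import numpy as np

def upso_velocity(v, x, p_i, p_g, p_l, u=0.5, chi=0.729, c1=2.05, c2=2.05):
    """Unified UPSO velocity: blend the fully-connected (global) and
    ring (local) velocity components with the unification factor u."""
    r1, r2, r3, r4 = np.random.rand(4, x.size)
    g = chi * (v + c1 * r1 * (p_i - x) + c2 * r2 * (p_g - x))  # global component
    l = chi * (v + c1 * r3 * (p_i - x) + c2 * r4 * (p_l - x))  # local component
    return (1.0 - u) * l + u * g                               # unified velocity

def flexipso_coefficients(rng=np.random):
    """FlexiPSO acceleration-coefficient heuristic of Figure 2.7."""
    r = rng.rand()
    if r < 0.333:                       # hill climber: only Pi
        return 2.0, 0.0, 0.0
    if r > 0.666:                       # stochastic hill climber: only Pg
        return 0.0, 2.0, 0.0
    return 1.333, 1.333, 1.333          # learn from Pi, Pg, and Pl
```

Setting u close to 1 biases UPSO towards the rapidly converging global component, while u close to 0 biases it towards the more explorative ring component, which is exactly the balance the unification factor is meant to tune.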

Although the aforementioned static topologies and the combinations of these topologies are relatively easy to implement and able to improve the PSO to a certain degree, two main drawbacks of these approaches need to be highlighted. First, all particles in the PSO variants with static topologies maintain the same exploration/exploitation strengths in the entire optimization process. The inability of these particles to balance their exploration/exploitation searches (by changing the topological structure) in different stages of the search process might lead to poor optimization outcomes. Second, although the initial motivation of combining different static topologies into the PSO is to improve the algorithm's performance by capitalizing on the benefits of these topologies, the intended outcome is not always guaranteed. This is because most existing mechanisms used to combine these different static topologies are probabilistic-based (e.g., FlexiPSO) or rely on certain time-invariant coefficients (e.g., UPSO). The absence of adaptive and systematic mechanisms in combining these static topologies might introduce the disadvantages of the


involved static topologies, instead of their advantages, into the PSO and subsequently lead to the algorithm's performance degradation. Unlike the static topologies, the PSO with dynamic topologies is allowed to vary its topological structure in different stages of the search process. Suganthan (1999) proposed a dynamically adjusted neighborhood structure where each particle begins the search with itself (i.e., a local version of PSO). As the iteration number increases, the neighborhood size of each particle is gradually extended to include all particles (i.e., the global version of PSO). Each particle in this PSO variant can select its neighborhood members in two different ways, i.e., (1) according to the particles' indices, which is less computationally intensive, and (2) according to the particles' distances in the search space, which is more computationally intensive. Marinakis and Marinaki (2013) proposed another expanding neighborhood topology (ENT) that shares a similar topology adjustment mechanism. Unlike Suganthan's (1999) approach, the ENT increases the particle's neighborhood according to its search status. Specifically, the neighborhood of a particle is only expanded if it has not improved for a number of consecutive iterations. Considering that different particles exhibit different search statuses, it is anticipated that each particle in the population could have a different neighborhood size, which subsequently leads to different exploration/exploitation strengths. Montes de Oca et al. (2009) proposed another dynamic topology in the Frankenstein PSO (FPSO). It is noteworthy that the time-varying topology in FPSO behaves in the opposite manner as compared to the ones taken by Marinakis and Marinaki (2013), as well as Suganthan (1999). Specifically, the FPSO particle begins the search with the fully-connected topology.
As the optimization evolves, the neighborhood size of each FPSO particle is gradually decreased until its neighborhood structure ends up as the ring topology. Liang and Suganthan (2005) proposed a Dynamic Multi-Swarm PSO (DMS-PSO) to tackle the deficiencies of static topologies. Specifically, the DMS-PSO population is divided into a number of small-sized swarms and each of these swarms is regarded as a local version of PSO. A random regrouping schedule is introduced in the DMS-PSO to prevent the convergence of these multiple swarms into local optima. Specifically, during the

Figure 2.8: Search mechanism of DMS-PSO (Liang and Suganthan, 2005).

regrouping process, the particles from different swarms are randomly regrouped into new configurations of swarms and then continue their search. The random regrouping schedule establishes a good information exchange mechanism among the multiple swarms and successfully prevents the diversity loss in DMS-PSO. Figure 2.8 illustrates the mechanism of the DMS-PSO, which comprises three swarms with three particles in each swarm. Numerous variants of DMS-PSO (Zhao et al., 2010, Nasir et al., 2012) have been developed since its inception, attributed to the excellent capability of the multi-swarm topology and the random regrouping schedule in tackling complex multimodal problems. Although the dynamic topologies successfully address the drawbacks of static topologies, some deficiencies of the existing dynamic topologies need to be highlighted. First, the complexity and computing time of the PSO with dynamic topologies are much higher than those with static topologies because more resource allocation is needed by the former to vary the topological structures. Second, it is observed that most existing dynamic topologies adjust the topological structures of PSO in non-reversible manners. For example, the dynamic topology proposed by Suganthan (1999) gradually increases the neighborhood size of the particle, whereas FPSO varies it in the opposite manner. Consequently, the search behaviors of the former and latter PSO variants become more exploitative and explorative, respectively, as the search process progresses. As mentioned


earlier, the particles in different locations of a complex search space need to perform the search with different exploration/exploitation strengths. The non-reversible topology changes could assign inappropriate exploration/exploitation strengths to certain particles, which leads to the performance deterioration of the algorithm. Finally, the dynamic topology of DMS-PSO randomly changes the neighborhood structure of all particles, without considering their respective search performance. This stochastic mechanism could disturb the convergence of particles towards the promising region and thus jeopardize the algorithm's convergence speed. Apart from this, it also has a higher risk of mistakenly assigning particles with inappropriate neighborhood structures and subsequently compromising the balance of

Table 2.2 Comparison of the static and dynamic topologies

Static topologies (Kennedy, 1999, Suganthan, 1999, Kennedy and Mendes, 2002, Parsopoulos and Vrahatis, 2004, Carvalho and Bastos-Filho, 2008, Kathrada, 2009, Montes de Oca et al., 2009)
- Description: Topology structures are unchanged with time. Certain topology structures could be combined.
- Advantages: Simpler implementation. Requires lower computing time.
- Disadvantages: The exploration/exploitation searches are unchanged with time. The disadvantages of different static topologies could be introduced into the PSO without proper combination. Lack of an adaptive and systematic mechanism to assign the appropriate neighborhood structure to each particle.

Dynamic topologies (Suganthan, 1999, Liang and Suganthan, 2005, Montes de Oca et al., 2009, Marinakis and Marinaki, 2013)
- Description: Topology structures are changed with time.
- Advantages: More robust performance due to the time-varying exploration/exploitation searches. Preservation of swarm diversity through information sharing mechanisms.
- Disadvantages: Complex implementation. Requires higher computing time. Lack of an adaptive and systematic mechanism to assign the appropriate neighborhood structure to each particle.

algorithm's exploration/exploitation searches. Intuitively, a robust dynamic topology needs to be able to vary the topological structure in a reversible manner. It is also desirable to adaptively and systematically vary the neighborhood structure of each particle based on its respective search status and location in the search space. This adaptive and systematic approach could assign the appropriate exploration/exploitation strengths to each particle and ensure that each performs robustly in different subregions of the problem's search space. The comparison between the static and dynamic topologies is summarized in Table 2.2.
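As a concrete illustration of the simplest dynamic topology discussed above, the random regrouping schedule of DMS-PSO can be sketched as follows (a minimal sketch; the function name and the choice to regroup by shuffling particle indices are assumptions):

```python
import random

def regroup(n_particles, swarm_size, rng=random):
    """Randomly repartition the particle indices into small swarms,
    as in the DMS-PSO regrouping schedule."""
    idx = list(range(n_particles))
    rng.shuffle(idx)                     # random permutation of particle indices
    return [idx[i:i + swarm_size] for i in range(0, n_particles, swarm_size)]
```

In DMS-PSO this routine would be invoked every few generations so that the personal best information gathered in one sub-swarm can migrate into another; every particle keeps its position and personal best, and only its neighborhood membership changes.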

2.2.2(c) Modified Learning Strategy

So far, most of the PSO variants that are improved through the parameter adaptation and modified population topology approaches rely on the personal and neighborhood best positions to update the particle's velocity. In other words, the learning strategies of these PSO variants do not consider other non-fittest neighborhood members during the search process. Nevertheless, the overemphasis of PSO on the single fittest particle in the neighborhood tends to cause rapid diversity loss of the swarm (van den Bergh and Engelbrecht, 2004, Liang et al., 2006). Moreover, there is no convincing evidence to indicate that the fittest particle in the neighborhood can actually find a better region than the second or third fittest particles in the swarm (Mendes et al., 2004). In light of these facts, researchers started to develop a new class of PSO variants, namely the PSO with modified learning strategies. Specifically, the learning strategies of these PSO variants take the information contributed by other non-fittest particles in the population into account during the search process. Being one of the pioneers, Mendes et al. (2004) advocated that each particle's movement should be influenced by all of its topological neighbors, instead of solely depending on the personal and neighborhood best positions. A Fully-Informed PSO (FIPS) was then proposed by Mendes et al. (2004), where each FIPS particle updates its velocity by employing a stochastic average of the personal best positions of all of its neighbors. Nine topology conditions were tested on FIPS and it was reported that the FIPS with URing topology exhibits the best performance, where the prefix "U"

means that the particle's own index is removed from the neighborhood. Beheshti et al. (2013) incorporated the median position of the particle, as well as the worst and median fitness values of the swarm, into the learning strategy of their proposed Median-Oriented PSO (MoPSO). Experimental studies show that the newly included information in the learning strategy of MoPSO successfully improves the algorithm's convergence speed. Zhou et al. (2011) proposed a Random Position PSO (RPPSO), where a random position is used to replace a different best position in different stages of the optimization to guide the search. Specifically, a probability P(f) inspired by the SA is used to decide if a random position needs to replace the Pg and Pi positions of particle i in the first and second half of the search process, respectively. According to Zhou et al. (2011), the presence of the random position preserves the swarm diversity and improves the global search ability of RPPSO. The studies performed by Jin et al. (2013) suggested that the presence and absence of randomness in certain dimension components of PSO guarantees a good balance between the exploration/exploitation searches towards the optimal solution. They subsequently proposed three dimension selection techniques (i.e., random, heuristic, and distance-based selection techniques) to determine which dimensional components of a particle need to learn from the Pg (i.e., exploitation) and which of those are kept unchanged to maintain swarm diversity (i.e., exploration). It was reported that the PSO with Distance-based Dimension Selection (PSODDS) has the most superior performance among the three variants. Chen et al. (2013) developed the PSO with Aging Leader and Challengers (ALC-PSO) by transplanting the biological aging concept into the learning strategy of PSO. The Pg particle is considered the leader of the ALC-PSO population, whereas other non-Pg particles represent the potential challengers of the population leader.
The idea of ALC-PSO is that, when the leader is no longer effective in improving the population's fitness, its leading power gradually deteriorates and it is eventually replaced by a new emerging particle that challenges and claims the leadership. Both the aging and challenging mechanisms in ALC-PSO ensure that the non-fittest particles are offered the opportunity to guide the particle search. Fu et al. (2012) proposed the Phase Angle-Encoded QPSO (PAE-QPSO) by

incorporating two modifications on its learning strategy. First, each PAE-QPSO particle is expressed as a phase angle vector instead of a position vector. Second, unlike the BPSO which needs to update both the velocity and position vectors, the PAE-QPSO particle only needs to update its phase angle vector due to its intrinsic quantum nature. According to the Uncertainty Principle of Quantum Mechanics, both the phase angle increment and the phase angle of the PAE-QPSO particle, which are equivalent to the particle's velocity and position respectively, cannot be measured simultaneously (Sun et al., 2004a, Sun et al., 2004b). Another representative PSO variant with a modified learning strategy is the Comprehensive Learning PSO (CLPSO) developed by Liang et al. (2006). Specifically, Liang et al. (2006) suggested that each dimension of particle i is allowed to learn either from its Pi or from another particle's historical best position, based on the learning probability assigned to each particle. In other words, each particle i in the CLPSO is assigned one unique exemplar Pe,i to guide its search during the optimization process. It is reported that the modified learning strategy proposed in CLPSO provides sufficient diversity to the swarm and hence is effective in preventing swarm stagnation, especially in complex multimodal problems. Due to the competitive performance of CLPSO, the underlying idea of this algorithm has now become the cornerstone for developing more state-of-the-art PSO variants. For instance, the FLPSO-QIW proposed by Tang et al. (2011) is an improved variant of CLPSO, where the former generates the potential exemplars from the first 50% of the fitter particles. Nasir et al. (2012) proposed the Dynamic Neighborhood Learning-based PSO (DNLPSO) by combining the merits of the previously proposed DMS-PSO and CLPSO.
In DNLPSO, each particle selects its exemplar from a neighborhood that is dynamic in nature, instead of seeking it from the entire population. Another notable variant inspired by CLPSO is the Example-based PSO (ELPSO) proposed by Huang et al. (2012). Unlike the CLPSO and most of its descendants that discard the Pg particle, ELPSO employs an example set of multiple global best particles to update the particle's velocity. Experimental study shows that the ELPSO with multiple different global best particles outperforms those with a


single Pg particle and those without the Pg particle, in terms of diversity preservation and search efficiency. It is noteworthy that most of the PSO variants with modified learning strategies achieve the preservation of swarm diversity by reducing the influence of the Pg particle during the search process. This strategy, however, tends to compromise the rapid convergence characteristic of the PSO. For example, although CLPSO exhibits excellent capability in avoiding the local optima in complex multimodal problems, the convergence speed of this algorithm in solving the unimodal and simple multimodal problems is significantly jeopardized (Wang et al., 2011). Another drawback of these PSO variants is that most of them consist of only one type of learning strategy and thus have limited choices of exploration/exploitation strengths. Considering that different classes of global optimization problems have differently shaped fitness landscapes, it can be anticipated that the PSO variants with a single modified learning strategy can only perform well in certain classes of problems. Intuitively, each particle in the population needs to be assigned different exploration/exploitation strengths to make the algorithm robust enough to deal with different situations independently. Following this line of thinking, the development of PSO variants with multiple learning strategies emerges as a plausible line of research to further enhance the PSO's universality and robustness in tackling a diverse set of global optimization problems. Li et al. (2012) proposed a Self-Learning PSO (SLPSO), where each SLPSO particle is equipped with four types of learning strategies to cope with different types of fitness landscapes in the problem search spaces. During the search process, each SLPSO particle selects an appropriate learning strategy for itself based on the selection probabilities derived from a self-adaptively improved probability model.
Specifically, the learning strategy with a higher selection probability tends to be selected, considering that it has a more successful search history. Wang et al. (2011) and Hu et al. (2012) developed the Self-Adaptive Learning-based PSO (SALPSO) and the PSO with Multiple Adaptive Method (PSO-MAM), respectively. Both SALPSO and PSO-MAM share similar working mechanisms with the

SLPSO but employ different learning strategies and adaptive selection techniques. Wang et al. (2013b) introduced the Diversity Enhanced PSO with Neighborhood Search (DNSPSO), where a total of three new learning strategies were developed to enhance the algorithm's diversity, as well as its local and global search abilities. To verify the generality of their proposed strategies, Wang et al. (2013b) further integrated the proposed diversity enhancing mechanism and the neighborhood search strategy into the CLPSO to produce a new PSO variant (DNSCLPSO). Although the frameworks of multiple learning strategies successfully enhance the PSO's robustness in dealing with different classes of problems, some design concerns of these PSO variants need to be addressed. First, the complexity and computing time of these PSO variants are much higher than those with a single learning strategy, considering that extra computation resources are required by the former algorithms to perform multiple learning strategies and the strategy selection process. Second, so far no systematic studies have been performed to investigate the optimal number of learning strategies required in developing the PSO variants with multiple learning strategies. Although the recently proposed SLPSO, SALPSO, PSO-MAM, and DNSPSO consist of three to four learning strategies under one algorithmic framework, no clear justification has been made regarding this issue. Another drawback of the PSO with multiple learning strategies is that more additional parameters are introduced to adaptively select the appropriate learning strategy for each particle. For example, a total of six new parameters are introduced in the SLPSO to facilitate the particle search with multiple learning strategies. Laborious tuning work is needed to obtain the optimal settings of these newly introduced parameters, which makes the PSO variants with multiple learning strategies less attractive in real-world applications.
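To make the single-strategy baseline of this comparison concrete, the exemplar construction that underlies CLPSO and its descendants can be sketched as follows. This is a minimal sketch assuming minimization; the two-particle tournament follows the CLPSO description, while the function name and the exact form of the fallback rule are illustrative assumptions:

```python
import random

def build_exemplar(i, pbest, pbest_fit, pc, rng=random):
    """CLPSO-style exemplar for particle i: each dimension learns either
    from particle i's own personal best or from the personal best of the
    fitter of two randomly drawn particles (minimization assumed)."""
    n, dims = len(pbest), len(pbest[i])
    exemplar = list(pbest[i])           # default: learn from own pbest
    learned_from_other = False
    for d in range(dims):
        if rng.random() < pc:           # this dimension learns from the swarm
            a, b = rng.randrange(n), rng.randrange(n)
            winner = a if pbest_fit[a] < pbest_fit[b] else b
            exemplar[d] = pbest[winner][d]
            learned_from_other = learned_from_other or winner != i
    if not learned_from_other:          # never learn purely from oneself:
        d = rng.randrange(dims)         # force one random dimension to learn
        j = rng.choice([k for k in range(n) if k != i])
        exemplar[d] = pbest[j][d]
    return exemplar
```

The learning probability pc controls how often a dimension learns from the swarm rather than from the particle's own history; CLPSO assigns a different pc to each particle, which is one reason different particles end up with different exploration strengths.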
The comparisons between the frameworks of single learning strategy and multiple learning strategies are summarized in Table 2.3.


Table 2.3 Comparison of the frameworks of single learning strategy and multiple learning strategies

Single learning strategy (Mendes et al., 2004, Liang et al., 2006, Tang et al., 2011, Zhou et al., 2011, Fu et al., 2012, Huang et al., 2012, Nasir et al., 2012, Beheshti et al., 2013, Chen et al., 2013, Jin et al., 2013)
- Description: Only one learning strategy is used by the particle to perform the search. The information of non-fittest particles is considered in the learning strategy.
- Advantages: Simpler implementation. Requires lower computing time. Very few or no additional parameters are introduced.
- Disadvantages: Suffers from compromised convergence speed. Only performs well in certain classes of problems. Each particle has limited choices of exploration/exploitation strengths.

Multiple learning strategies (Wang et al., 2011, Hu et al., 2012, Li et al., 2012, Wang et al., 2013b)
- Description: More than one learning strategy is used by the particle to perform the search. The information of non-fittest particles is considered in the learning strategy.
- Advantages: More robust performance in different classes of optimization problems. Each particle has the intelligence to decide the most appropriate learning strategy based on its search history.
- Disadvantages: Complex implementation. Requires higher computing time. More additional parameters are introduced. The optimal number of learning strategies required by each particle is unknown.

2.2.2(d) Hybridization with Orthogonal Experimental Design Technique

Hybridization is another widely used strategy to improve the PSO performance. This strategy intends to capitalize on the desired capabilities of the auxiliary operators to mitigate the drawbacks of PSO, such as premature convergence and diversity loss (Banks et al., 2008). In the literature, there are two types of auxiliary operators widely used in the hybridization process. The first type of auxiliary operator originates from the EAs' search operators. These EA-based auxiliary operators include the selection, crossover, mutation, local search, and reinitialization operators (Angeline, 1998, Miranda and Fonseca, 2002a, Miranda


and Fonseca, 2002b, Juang, 2004, Liu et al., 2007, Chen et al., 2007, Shi et al., 2003, Wu, 2009, Wu et al., 2008, Epitropakis et al., 2012, Xin et al., 2012, Nguyen et al., 2014). Meanwhile, the second type of auxiliary operator used for the hybridization of PSO is inspired by non-EA techniques such as clustering (Cheng et al., 2012, Kennedy, 2000), signal-to-noise ratio (Lin et al., 2011), entropy (Xie et al., 2002), Kalman filter (Monson and Seppi, 2004), chaos maps (Song et al., 2007, Chuang et al., 2011, Mariani et al., 2012, Yang et al., 2012), etc.

Orthogonal Experimental Design

OED is a mathematical tool that is very effective in obtaining the best combination of factor levels in design problems. This technique works on an orthogonal array (OA) (Montgomery, 1991, Hedayat, 1999) denoted as LM(Q^(M−1)), where L represents the OA; Q is the number of levels per factor; and M = Q^⌈logQ(N + 1)⌉ represents the total number of test case combinations. For an experiment with N factors, only the first N columns (or arbitrary N columns) of LM(Q^(M−1)) are considered by the OED. For an experiment which consists of three factors (i.e., N = 3), with each factor consisting of two levels (Q = 2), the corresponding OA is constructed as shown in Equation (2.9). Meanwhile, the procedure used to construct the two-level OA for N factors is presented in Figure 2.9.

            | 1  1  1 |
L4(2^3) =   | 1  2  2 |                                                   (2.9)
            | 2  1  2 |
            | 2  2  1 |

As shown in Equation (2.9), a total of M test case combinations are generated by the LM(Q^(M−1)) OA. A factor analysis (FA) is then performed based on the experimental results of all the M combinations of the LM(Q^(M−1)) OA to identify the best factor-level combination. Specifically, FA identifies the best level of each factor by independently evaluating the effect of each individual factor on the overall experimental results (Hedayat, 1999).


Generate_OA(OA, N)
Input: Number of experimental factors, N
1:  n = 2^⌈log2(N + 1)⌉;  // n is the number of OA combinations required
                          // to analyze all individual factors
2:  for i = 1 to n do
3:      for j = 1 to N do
4:          level = 0;
5:          k = j;
6:          mask = n/2;
7:          while k > 0 do
8:              if (k mod 2 = 1) and (bitwise_AND(i − 1, mask) ≠ 0) then
9:                  level = (level + 1) mod 2;
10:             end if
11:             k = ⌊k/2⌋;
12:             mask = mask/2;
13:         end while
14:         OA[i][j] = level + 1;
15:     end for
16: end for
/* bitwise_AND(x, mask) returns the m-th least significant bit of x, where mask = 2^(m−1) */
Output: Orthogonal array (OA)

Figure 2.9: The algorithm used to construct a two-level OA for N factors.
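The procedure of Figure 2.9 translates directly into Python. The sketch below (the function name is an assumption) reproduces the array of Equation (2.9) when called with N = 3:

```python
import math

def generate_oa(n_factors):
    """Two-level orthogonal array construction following Figure 2.9.
    Rows are the test-case combinations; entries are the levels 1 or 2."""
    n = 2 ** math.ceil(math.log2(n_factors + 1))   # number of combinations
    oa = [[0] * n_factors for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n_factors + 1):
            level, k, mask = 0, j, n // 2
            while k > 0:
                # toggle the level whenever the current bit of the column
                # index j is odd and bit 'mask' of (i - 1) is set
                if (k % 2 == 1) and ((i - 1) & mask):
                    level = (level + 1) % 2
                k //= 2
                mask //= 2
            oa[i - 1][j - 1] = level + 1
    return oa

# generate_oa(3) reproduces the L4(2^3) array of Equation (2.9):
# [[1, 1, 1], [1, 2, 2], [2, 1, 2], [2, 2, 1]]
```

Note that each column contains every level equally often, which is the balance property the subsequent factor analysis relies on.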

To study which level has a significant effect on the corresponding factor, the main effect of factor j (1 ≤ j ≤ N) with level k (1 ≤ k ≤ Q), which is denoted as Sjk, is calculated as follows:

Sjk = [ Σ(m=1..M) fm · zmjk ] / [ Σ(m=1..M) zmjk ]                        (2.10)

where zmjk is set to 1 if the m-th combination is assigned the k-th level for the j-th factor; otherwise, zmjk is set to 0. Once the values of all Sjk are computed, the best combination of levels can be obtained by identifying the level of each factor that provides the output with the highest quality. For a maximization problem, a larger Sjk indicates better quality for the k-th level on factor j. In contrast, a smaller value of Sjk denotes better quality for the k-th level on factor j in a minimization problem. A case study is presented as follows to illustrate the mechanism of OED in determining the best experimental combinations.


Case Study

In order to illustrate the mechanism of OED, an experiment is designed based on previously reported works (Zhan et al., 2011, Gao et al., 2013) to investigate the best level combination of factors that produces the maximum vegetable yield. As shown in Table 2.4, the three factors that affect the vegetable yield are (1) temperature, (2) fertilizer amount, and (3) pH value of soil, denoted as factors A, B, and C, respectively. In addition, each factor comprises two possible levels of choice, according to Table 2.4. For instance, the pH value can be either 6 or 7, represented by the factor levels of 1 or 2, respectively. Considering that the vegetable yield experiment consists of three factors (i.e., N = 3) and each factor comprises two possible levels of choice (i.e., Q = 2), the L4(2^3) OA obtained from Equation (2.9) suffices to identify the best combination of this experiment. As illustrated in Equation (2.9), the L4(2^3) OA consists of four rows (i.e., M = 4), implying that a total of four combinations of test cases are generated in the L4(2^3) OA. The numbers 1 and 2 in each column of L4(2^3) denote the levels of each factor. For example, the first row of the L4(2^3) OA is (1, 1, 1), meaning that in this test case, factors A (temperature), B (fertilizer amount), and C (pH value) are all assigned as level 1, i.e., 20 °C, 100 g/m2, and pH 6, as shown in Table 2.5, where fm denotes the experimental result of the m-th (1 ≤ m ≤ M) combination of the test cases as observed from the OA. The FA is subsequently performed to identify the best combination of test cases in the

L4(2^3) OA. Specifically, the main effect of each factor j (1 ≤ j ≤ N) with level k (1 ≤ k ≤ Q), i.e., Sjk, is computed using Equation (2.10). For example, to calculate the effect of level 1 on

Table 2.4 Vegetable yield experiment with three factors and two levels per factor

Levels   Temperature (A)   Fertilizer amount (B)   pH value (C)
1        20 °C             100 g/m2                6
2        25 °C             150 g/m2                7

Table 2.5 Deciding the best combination levels of the vegetable yield experimental factors using an OED technique

Combinations   Temperature (A)   Fertilizer amount (B)   pH value (C)   Results
C1             20 °C (1)         100 g/m2 (1)            6 (1)          f1 = 28
C2             20 °C (1)         150 g/m2 (2)            7 (2)          f2 = 12
C3             25 °C (2)         100 g/m2 (1)            7 (2)          f3 = 64
C4             25 °C (2)         150 g/m2 (2)            6 (1)          f4 = 56

Factor analysis:
Level 1        SA1 = (f1 + f2)/2 = 20   SB1 = (f1 + f3)/2 = 46   SC1 = (f1 + f4)/2 = 42
Level 2        SA2 = (f3 + f4)/2 = 60   SB2 = (f2 + f4)/2 = 34   SC2 = (f2 + f3)/2 = 38
Best level     2                        1                        1

the pH value, i.e., SC1, only the experimental results of C1 and C4 (i.e., f1 = 28 and f4 = 56) are considered in Equation (2.10). This is because only these two test cases contain the pH factor with level 1. According to Equation (2.10), the sum of f1 and f4 is subsequently divided by the sum of the corresponding z values (i.e., 2 in this case) to yield Sjk (i.e., SC1 = 42 in this case). The FA results of this case study are summarized in Table 2.5. Since the main objective of this case study is to determine the best level combination of factors that produces the maximum vegetable yield, it can be considered a maximization problem. In other words, for each factor, the factor level that gives the larger value of Sjk has a more significant effect and thus is more desirable. For example, as shown in Table 2.5, the fertilizer amount (i.e., factor B) of 100 g/m2 (i.e., level 1) is identified to have a more significant effect than 150 g/m2 (i.e., level 2) on factor B because SB1 > SB2. From Table 2.5, it can be concluded that factors A, B, and C have the best levels of 2, 1, and 1, respectively. In other words, to produce the maximum vegetable yield, the temperature, fertilizer amount, and pH value of the soil need to be set to 25 °C, 100 g/m2, and 6, respectively. One notable observation from Table 2.5 is that, although the combination of (2, 1, 1) is absent in Equation (2.9), it is discovered by the FA. This reveals the excellent prediction capability of OED in discovering the best experimental combinations.
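The factor analysis of this case study is easy to verify in code. The sketch below (the function name is an assumption) computes every Sjk from Equation (2.10) and recovers the best level combination of Table 2.5:

```python
def factor_analysis(oa, results):
    """Main effect S_jk (Eq. 2.10) of each factor-level pair, plus the
    best level per factor for a maximization problem."""
    n_factors = len(oa[0])
    levels = sorted({row[j] for row in oa for j in range(n_factors)})
    s = []
    for j in range(n_factors):
        s_j = []
        for k in levels:
            # average the results of the test cases that use level k of factor j
            rows = [m for m, row in enumerate(oa) if row[j] == k]
            s_j.append(sum(results[m] for m in rows) / len(rows))
        s.append(s_j)
    best = [s_j.index(max(s_j)) + 1 for s_j in s]   # maximization: larger is better
    return s, best

# Vegetable-yield case study (Table 2.5)
oa = [[1, 1, 1], [1, 2, 2], [2, 1, 2], [2, 2, 1]]
yields = [28, 12, 64, 56]
s, best = factor_analysis(oa, yields)
# s    -> [[20.0, 60.0], [46.0, 34.0], [42.0, 38.0]]
# best -> [2, 1, 1], i.e., 25 °C, 100 g/m2, and pH 6
```

Note that the predicted winner (2, 1, 1) is not among the four evaluated rows, which is exactly the prediction capability of OED described above.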


OED-based PSO Variants

Due to its excellent prediction capability, the OED technique has recently been hybridized with PSO for different purposes. Specifically, the OED-based auxiliary operator is mainly used (1) to initialize the population of PSO, (2) to determine the optimal parameter settings of PSO, and (3) to derive new learning strategies for PSO. Zhao et al. (2006) proposed the Improved PSO (IPSO) by hybridizing OED with PSO. The main purpose of OED in IPSO is to generate an initial population that is uniformly scattered over the feasible search space. It was reported that the OED-based population initialization is able to enhance the convergence speed of IPSO. Ko et al. (2007), on the other hand, employed OED to perform parameter tuning on PSO. Specifically, a total of three parameters are adjusted in their proposed work, and these parameters indirectly influence the values of ω, c1, and c2. Yang et al. (2010) introduced the PSO based on Orthogonal Design (ODPSO) by developing an OED-based multi-parent crossover operator. This OED-based operator performs a local search among m randomly selected particles. Moreover, it establishes a good information exchange mechanism among different particles to preserve the swarm diversity. Wang and Chen (2009) also developed an OED-based local search operator in their proposed Orthogonal Test-based PSO (OT-PSO). Unlike Yang et al. (2010), Wang and Chen (2009) employed the OED-based local search operator to exploit the neighborhood area around the global best solution and then used this information to guide the particles towards the promising region of the search space. OED has a twofold function in the Extrapolated PSO based on the Orthogonal Design (ODEPSO) proposed by Feng et al. (2012). On the one hand, OED is used to initialize the ODEPSO population. On the other hand, OED is used to design an orthogonal crossover operator. Unlike those of Yang et al. (2010) and Wang and Chen (2009), the crossover operator developed by Feng et al. (2012) is self-adaptive, considering that it is able to self-adjust the number of OA factors and the segmentation locations of the solution vector based on the similarity bounds.


In the Orthogonal PSO (OPSO), Ho et al. (2008) developed an Intelligent Move Mechanism (IMM) with OED to predict the next position of a particle. Unlike the conventional generate-and-go strategy, the IMM module in OPSO initially generates two temporary move vectors, i.e., the H and R vectors, which correspond to the particle's cognitive and social components, respectively. These two vectors are then decomposed into partial vector forms, and the OED technique is used to evaluate which candidate (i.e., H or R) is better for each partial vector, according to the computed main effect values. Finally, the new position of a particle is obtained by combining all promising partial vectors. It was reported that the IMM module is effective in alleviating the "curse of dimensionality" issue (van den Bergh and Engelbrecht, 2004). Zhan et al. (2011) employed OED to construct an effective exemplar to guide the particle search in the Orthogonal Learning PSO (OLPSO). It is noteworthy that the search mechanism of OLPSO is similar to that of CLPSO, except for the procedures used to generate the particle's exemplar. According to Zhan et al. (2011), the orthogonal learning strategy adopted in OLPSO is generic and applicable to any topological structure. It was reported that the OLPSO with ring topology (OLPSO-L) has better search performance than its variant with fully-connected topology.

2.2.2(e) Remarks

As presented in the previous subsections, various approaches have been proposed to improve the search performance of PSO. While showing promising results, the improvements of these PSO variants are accompanied by certain tradeoffs, as summarized in Tables 2.1 to 2.3. It remains a challenge to develop a PSO variant with excellent search accuracy that neither significantly impairs the convergence speed nor introduces excessive parameters to the algorithm. It is also observed that most of these PSO variants do not provide an alternative learning phase to a particle when the latter fails to improve its fitness during the search process. As explained in the following subsection, an alternative learning phase could offer a new search direction to a particle when its previous learning phase no longer helps it to seek promising solutions. Apart from that, the neighborhood best particles in most PSO variants are found to share the same learning strategy with the other neighborhood members during the search process. Intuitively, some unique learning strategies should be developed for these neighborhood best particles in order to provide them better guidance towards the promising regions of the search space. Based on the aforementioned statements, it can be anticipated that PSO can be further improved by including an alternative learning phase in the population and assigning unique learning strategies to the neighborhood best particles. Furthermore, this research also aims to improve the search accuracy of PSO without substantially jeopardizing the algorithm's convergence speed or introducing excessive parameters. The details of all enhanced PSO variants with robust learning strategies proposed in this research work are presented in the following chapters of this thesis.

2.3 Teaching and Learning Based Optimization

This section presents the working mechanism of a recently proposed SI-based optimization technique known as Teaching and Learning Based Optimization (TLBO). The similarities and dissimilarities between BPSO and TLBO are also presented in this section. Finally, the merits and demerits of the TLBO search mechanism are discussed. Motivated by the classical school teaching and learning process, Rao et al. (2011) proposed TLBO in 2011. Similar to BPSO, TLBO comprises a group of individuals known as learners, where each learner Xi represents a possible solution of a given optimization problem. Meanwhile, the quality (i.e., fitness) of the solution carried by each learner reflects the knowledge level of the respective learner. Unlike BPSO and most of its variants, which update the particles via a single learning phase during the search process, TLBO attempts to improve the knowledge of each learner through two learning phases, namely the teacher phase and the learner phase (Rao et al., 2011, Črepinšek et al., 2012, Rao et al., 2012).


For the teacher phase, each learner learns from the best individual in the population, that is, teacher Xteacher. More specifically, the position of each learner is adjusted toward Xteacher by taking into account the current mean value of learners (i.e., Xmean). Xmean represents the mainstream knowledge in the classroom and it is computed as the average over all S learners in the TLBO population. Mathematically, the new position of each learner Xnew,i is updated via the teacher phase as follows (Rao et al., 2011, Črepinšek et al., 2012, Rao et al., 2012):

X_new,i = X_i + r × (X_teacher − (TF × X_mean))    (2.11)

where r is a random number ranging from 0 to 1, and TF is a teaching factor that is used to emphasize the importance of the learners' average quality (Xmean); this factor can be either 1 or 2. The knowledge of the updated individuals is evaluated via the objective function of the given problem. An existing learner is replaced if the new solution produced during the teacher phase has better knowledge. The TLBO population proceeds to the learner phase once the teacher phase is completed. Unlike in the teacher phase, each learner attempts to improve its knowledge by interacting with its peer learners during the learner phase. More specifically, each learner Xi first randomly selects a peer learner Xj (where i ≠ j). If Xj is fitter than Xi, the latter is attracted towards the former, as shown in Equation (2.12). By contrast, Xi is repelled from Xj if the latter has inferior fitness to the former, as shown in Equation (2.13). The knowledge of an individual updated via the learner phase is also evaluated and compared with that of the existing individual. The new solution replaces the existing individual if it has a better fitness value.

X_new,i = X_i + r × (X_j − X_i)    (2.12)

X_new,i = X_i + r × (X_i − X_j)    (2.13)

The implementation of TLBO is illustrated in Figure 2.10. As shown in the TLBO framework, the search process via the teacher and learner phases is iterated until the termination criteria are met. Specifically, the maximum number of iterations is used as the


TLBO
Input: Population size (S), dimensionality of problem space (D), objective function (F), the initialization domain (RG), problem's accuracy level (ε)
1:  Initialize the population and evaluate the fitness of each learner Xi;
2:  while termination criterion is not satisfied do
3:      for each learner i do
4:          /*Teacher Phase*/
5:          Select the best learner as teacher Xteacher and calculate Xmean;
6:          Calculate Xnew,i for learner i using Equation (2.11);
7:          Perform fitness evaluation on Xnew,i;
8:          Update Xi and f(Xi) if Xnew,i has better fitness;
9:          /*Learner Phase*/
10:         Randomly select a learner Xj from the population, where i ≠ j;
11:         Calculate Xnew,i for learner i using Equation (2.12) or (2.13);
12:         Perform fitness evaluation on Xnew,i;
13:         Update Xi and f(Xi) if Xnew,i has better fitness;
14:     end for
15: end while
Output: The best found solution, i.e., the teacher (Xteacher)

Figure 2.10: TLBO algorithm.

termination criterion of the original TLBO. By observing the framework of TLBO, it is notable that one appealing feature of this algorithm is the presence of an alternative learning phase, known as the learner phase, to guide the learner's search process. As shown in Equation (2.11), each learner stochastically updates its knowledge according to Xteacher and Xmean, and thus there is a probabilistic chance for the learner to produce a solution with poor fitness. The learner phase plays an important role in offering a new search direction to the learner whenever the preceding teacher phase does not benefit the learner in seeking the global optimum solution. The alternative learning phase employed by TLBO successfully enhances the algorithm's search accuracy and convergence speed. Despite exhibiting promising performance, the development of the original TLBO model is still incomplete, considering that specific mechanisms used by the original TLBO framework do not accurately reflect the actual scenario of the classical teaching-learning process. For example, in the learner phase of the original TLBO, a learner randomly selects a peer to learn from without considering the knowledge level of the selected peer. This behavior contradicts real-world experience because humans tend to learn from someone better


than them. According to the hypothesis developed from earlier studies (Back et al., 1997, Akhtar et al., 2013), deviations between the model and the actual scenario of the teaching and learning process may restrict TLBO's search performance, considering that different understandings and perspectives in theorizing a natural process can lead to different consequential effects. As explained in the next chapter, the advantages and disadvantages of the original TLBO have inspired the development of the first enhanced PSO variant in this research work.
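The teacher and learner phases summarized in Figure 2.10 can be sketched in Python as follows. This is a minimal illustration only: the population size, iteration budget, and the boundary-clamping rule are illustrative choices and not part of the original TLBO specification.

```python
import random

def tlbo(objective, dim, bounds, pop_size=20, max_iter=100):
    """Minimal TLBO sketch: teacher phase (Eq. 2.11) and learner phase (Eqs. 2.12-2.13)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    for _ in range(max_iter):
        for i in range(pop_size):
            # Teacher phase: move towards the best learner, accounting for the mean.
            teacher = pop[fit.index(min(fit))]
            mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
            TF = random.randint(1, 2)        # teaching factor, either 1 or 2
            r = random.random()
            new = [pop[i][d] + r * (teacher[d] - TF * mean[d]) for d in range(dim)]
            new = [min(max(v, lo), hi) for v in new]   # clamping is an illustrative choice
            nf = objective(new)
            if nf < fit[i]:                  # greedy replacement
                pop[i], fit[i] = new, nf
            # Learner phase: interact with a randomly selected peer.
            j = random.choice([k for k in range(pop_size) if k != i])
            r = random.random()
            if fit[j] < fit[i]:              # peer is fitter: attract (Eq. 2.12)
                new = [pop[i][d] + r * (pop[j][d] - pop[i][d]) for d in range(dim)]
            else:                            # peer is worse: repel (Eq. 2.13)
                new = [pop[i][d] + r * (pop[i][d] - pop[j][d]) for d in range(dim)]
            new = [min(max(v, lo), hi) for v in new]
            nf = objective(new)
            if nf < fit[i]:
                pop[i], fit[i] = new, nf
    best = fit.index(min(fit))
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = tlbo(sphere, dim=10, bounds=(-100, 100))
```

Note how each phase performs a greedy replacement (lines 8 and 13 of Figure 2.10): a learner only accepts the new position when it improves fitness, so the learner phase can rescue a learner whose teacher-phase move was unproductive.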

2.4 Test Functions

As mentioned in the previous chapter (Section 1.6), the optimization capability of an optimization algorithm can be evaluated via experimental studies on benchmark problems. In the literature, a number of benchmark problems have been designed to specifically evaluate certain properties of an algorithm. In the following subsection, the fitness landscape characteristics of the benchmark problems employed for the algorithms' performance evaluations are presented in detail. Subsequently, the engineering design problems that are used to investigate the feasibility of an optimization algorithm in tackling real-world problems are described.

2.4.1 Benchmark Problems

In this research work, a total of 30 benchmark problems (Suganthan et al., 2005, Yao et al., 1999, Liang et al., 2006, Zhan et al., 2009) are chosen to evaluate the optimization capability of the tested algorithms. The formulae, experimental details, and features of these benchmark problems are provided in Tables 2.6 and 2.7. Specifically, in Table 2.7, the values of RG, ObjVmin, and ε denote the feasible search range, the ObjV value of the global optimum, and the accuracy level required to solve the benchmark, respectively. In general, the 30 benchmark problems employed in this research work are divided into five categories based on their characteristics, namely (1) conventional problems (F1 to F8), (2) rotated problems


Table 2.6: Benchmark functions used. (Note: M denotes the orthogonal matrix; o denotes the shifted global optimum; f_bias_j, j ∈ [1, 16], denotes the shifted fitness value applied to the corresponding function)

Category I: Conventional Problems
F1  Sphere:        F1(X_i) = Σ_{d=1}^{D} X_{i,d}^2
F2  Schwefel 1.2:  F2(X_i) = Σ_{d=1}^{D} ( Σ_{j=1}^{d} X_{i,j} )^2
F3  Rosenbrock:    F3(X_i) = Σ_{d=1}^{D−1} [ 100(X_{i,d}^2 − X_{i,d+1})^2 + (X_{i,d} − 1)^2 ]
F4  Rastrigin:     F4(X_i) = Σ_{d=1}^{D} [ X_{i,d}^2 − 10 cos(2π X_{i,d}) + 10 ]
F5  Noncontinuous Rastrigin:  F5(X_i) = Σ_{d=1}^{D} [ Y_{i,d}^2 − 10 cos(2π Y_{i,d}) + 10 ],
    where Y_{i,d} = X_{i,d} if |X_{i,d}| < 0.5, and Y_{i,d} = round(2 X_{i,d})/2 if |X_{i,d}| ≥ 0.5
F6  Griewank:      F6(X_i) = Σ_{d=1}^{D} X_{i,d}^2 / 4000 − Π_{d=1}^{D} cos(X_{i,d} / √d) + 1
F7  Ackley:        F7(X_i) = −20 exp(−0.2 √(Σ_{d=1}^{D} X_{i,d}^2 / D)) − exp(Σ_{d=1}^{D} cos(2π X_{i,d}) / D) + 20 + e
F8  Weierstrass:   F8(X_i) = Σ_{d=1}^{D} Σ_{k=0}^{kmax} [ a^k cos(2π b^k (X_{i,d} + 0.5)) ] − D Σ_{k=0}^{kmax} [ a^k cos(π b^k) ],
    a = 0.5, b = 3, kmax = 20

Category II: Rotated Problems
F9   Rotated Schwefel 1.2:             F9(X_i) = F2(Z_i),  Z_i = M · X_i
F10  Rotated Rosenbrock:               F10(X_i) = F3(Z_i),  Z_i = M · X_i
F11  Rotated Rastrigin:                F11(X_i) = F4(Z_i),  Z_i = M · X_i
F12  Rotated Noncontinuous Rastrigin:  F12(X_i) = F5(Z_i),  Z_i = M · X_i
F13  Rotated Griewank:                 F13(X_i) = F6(Z_i),  Z_i = M · X_i
F14  Rotated Weierstrass:              F14(X_i) = F8(Z_i),  Z_i = M · X_i

Category III: Shifted Problems
F15  Shifted Sphere:                   F15(X_i) = F1(Z_i) + f_bias1,  Z_i = X_i − o,  f_bias1 = −450
F16  Shifted Schwefel 1.2:             F16(X_i) = F2(Z_i) + f_bias2,  Z_i = X_i − o,  f_bias2 = −450
F17  Shifted Rosenbrock:               F17(X_i) = F3(Z_i) + f_bias3,  Z_i = X_i − o,  f_bias3 = 390
F18  Shifted Rastrigin:                F18(X_i) = F4(Z_i) + f_bias4,  Z_i = X_i − o,  f_bias4 = −330
F19  Shifted Noncontinuous Rastrigin:  F19(X_i) = F5(Z_i) + f_bias5,  Z_i = X_i − o,  f_bias5 = −330
F20  Shifted Griewank:                 F20(X_i) = F6(Z_i) + f_bias6,  Z_i = X_i − o,  f_bias6 = −180
F21  Shifted Ackley:                   F21(X_i) = F7(Z_i) + f_bias7,  Z_i = X_i − o,  f_bias7 = −140
F22  Shifted Weierstrass:              F22(X_i) = F8(Z_i) + f_bias8,  Z_i = X_i − o,  f_bias8 = 90

Category IV: Complex Problems
F23  Shifted Rotated Griewank:  F23(X_i) = F6(Z_i) + f_bias9,  Z_i = (X_i − o) · M,  f_bias9 = −180
F24  Shifted Rotated Ackley:    F24(X_i) = F7(Z_i) + f_bias10,  Z_i = (X_i − o) · M,  f_bias10 = −140
F25  Shifted Rotated High Conditioned Elliptic:  F25(X_i) = Σ_{d=1}^{D} (10^6)^{(d−1)/(D−1)} Z_{i,d}^2 + f_bias11,  Z_i = (X_i − o) · M,  f_bias11 = −450
F26  Shifted Expanded Griewank's plus Rosenbrock:  F26(X_i) = F6(F3(Z_{i,1}, Z_{i,2})) + F6(F3(Z_{i,2}, Z_{i,3})) + … + F6(F3(Z_{i,D−1}, Z_{i,D})) + F6(F3(Z_{i,D}, Z_{i,1})) + f_bias12,  Z_i = X_i − o,  f_bias12 = −130

Category V: Hybrid Composition Problems
F27  Rotated Hybrid Composition Function 1 (CF1)
F28  Rotated Hybrid CF1 with Noise
F29  Rotated Hybrid Composition Function 2 (CF2)
F30  Rotated Hybrid CF2 with Narrow Basin Global Optimum
(Details to construct functions F27 to F30 are explained in Section 2.4.1(e) and in Liang and Suganthan (2005).)

(F9 to F14), (3) shifted problems (F15 to F22), (4) complex problems (F23 to F26), and (5) hybrid composition problems (F27 to F30).

2.4.1(a) Conventional Problems

For the conventional problems, different features can be observed in each tested function. These features enable researchers to investigate the capability of the tested PSO algorithm in different aspects. For example, the Sphere function (F1) is considered easy to solve and is therefore used to test the PSO's convergence speed. The Rosenbrock function (F3) has a global optimum located in a long, narrow, parabolic-shaped valley. The fitness landscape of this function is used to test the algorithm's ability to navigate flat regions with small gradients. The Rastrigin function (F4) is a multimodal problem, where the number of its local

Table 2.7: Experimental details and features of the 30 benchmark functions (Note: "Md" denotes modality; "U" denotes unimodal; "M" denotes multimodal; "Sp" denotes separable; "Rt" denotes rotated; "Sf" denotes shifted; "Y" denotes yes; "N" denotes no)

No.   RG                   ObjVmin           ε         Md   Sp   Rt   Sf
Category I: Conventional Problems
F1    [−100, 100]^D        0                 1.0e−6    U    Y    N    N
F2    [−100, 100]^D        0                 1.0e−6    U    N    N    N
F3    [−2.048, 2.048]^D    0                 1.0e−2    M    N    N    N
F4    [−5.12, 5.12]^D      0                 1.0e−2    M    Y    N    N
F5    [−5.12, 5.12]^D      0                 1.0e−2    M    Y    N    N
F6    [−600, 600]^D        0                 1.0e−2    M    N    N    N
F7    [−32, 32]^D          0                 1.0e−2    M    N    N    N
F8    [−0.5, 0.5]^D        0                 1.0e−2    M    Y    N    N
Category II: Rotated Problems
F9    [−100, 100]^D        0                 1.0e−6    U    N    Y    N
F10   [−2.048, 2.048]^D    0                 1.0e−2    M    N    Y    N
F11   [−5.12, 5.12]^D      0                 1.0e−2    M    N    Y    N
F12   [−5.12, 5.12]^D      0                 1.0e−2    M    N    Y    N
F13   [−600, 600]^D        0                 1.0e−2    M    N    Y    N
F14   [−0.5, 0.5]^D        0                 1.0e−2    M    N    Y    N
Category III: Shifted Problems
F15   [−100, 100]^D        f_bias1 = −450    1.0e−6    U    Y    N    Y
F16   [−100, 100]^D        f_bias2 = −450    1.0e−6    U    N    N    Y
F17   [−100, 100]^D        f_bias3 = 390     1.0e−2    M    N    N    Y
F18   [−5.12, 5.12]^D      f_bias4 = −330    1.0e−2    M    Y    N    Y
F19   [−5.12, 5.12]^D      f_bias5 = −330    1.0e−2    M    Y    N    Y
F20   [−600, 600]^D        f_bias6 = −180    1.0e−2    M    N    N    Y
F21   [−32, 32]^D          f_bias7 = −140    1.0e−2    M    N    N    Y
F22   [−0.5, 0.5]^D        f_bias8 = 90      1.0e−2    M    Y    N    Y
Category IV: Complex Problems
F23   [−600, 600]^D        f_bias9 = −180    1.0e−2    M    N    Y    Y
F24   [−32, 32]^D          f_bias10 = −140   1.0e−2    M    N    Y    Y
F25   [−100, 100]^D        f_bias11 = −450   1.0e−6    M    N    Y    Y
F26   [−5, 5]^D            f_bias12 = −130   1.0e−2    M    N    N    Y
Category V: Hybrid Composition Problems
F27   [−5, 5]^D            f_bias13 = 120    1.0e−2    M    N    Y    Y
F28   [−5, 5]^D            f_bias14 = 120    1.0e−1    M    N    Y    Y
F29   [−5, 5]^D            f_bias15 = 10     1.0e−1    M    N    Y    Y
F30   [−5, 5]^D            f_bias16 = 10     1.0e−1    M    N    Y    Y

optima increases exponentially with the number of dimensions in the search space. In general, this function may easily trap the tested algorithm in a local optimum, and a large amount of diversity is required by the algorithm to escape from the local optimum. Hence, the Rastrigin function is useful in evaluating the ability of PSO to maintain its diversity during the search process. The Noncontinuous Rastrigin function (F5) is similar to the Rastrigin function, except that the former is modelled as a noncontinuous problem. For the Griewank function (F6), linkages exist among the variables of this function, and this incurs certain difficulty for the tested algorithms in locating the global optimum, especially in the lower-dimensional case. Nonetheless, the difficulty of this function decreases as the dimensionality of the search space increases (Whitley et al., 1996). The Ackley function (F7) has one narrow global optimum basin and many shallow local optima. The presence of these shallow local optima makes the Ackley function relatively easier to solve. Finally, the Weierstrass function (F8) is a continuous multimodal function, but it is differentiable only on a certain set of points.
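Some of the conventional benchmarks in Table 2.6 can be implemented directly; a minimal sketch of three of them (Sphere, Rastrigin, and Ackley) is shown below.

```python
import math

def sphere(x):       # F1: unimodal, separable; used to test convergence speed
    return sum(v * v for v in x)

def rastrigin(x):    # F4: multimodal; local optima grow exponentially with D
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def ackley(x):       # F7: one narrow global basin, many shallow local optima
    D = len(x)
    return (-20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / D))
            - math.exp(sum(math.cos(2 * math.pi * v) for v in x) / D)
            + 20 + math.e)

# All three have their global minimum value 0 at the origin.
origin = [0.0] * 30
print(sphere(origin), rastrigin(origin))  # 0.0 0.0
print(abs(ackley(origin)) < 1e-9)         # True
```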

2.4.1(b) Rotated Problems

From Table 2.7, it is observed that some conventional problems, such as the Sphere (F1), Rastrigin (F4), Noncontinuous Rastrigin (F5), and Weierstrass (F8) functions, are separable. This separable characteristic enables the mentioned functions to be solved by using D one-dimensional search methods, like the one used in some co-evolutionary algorithms (Li,


2010). To avoid this type of bias, the rotated problems (F9 to F14) are therefore developed. Specifically, in the rotated problems, the original vector X is multiplied by an orthogonal matrix M (Salomon, 1996) to produce a rotated variable Z as follows:

Z = M × X    (2.14)

As shown in the rotation operation described in Equation (2.14), all dimensional components of the rotated vector Z are affected if any dimensional component of X is changed. This explains the origin of the non-separable characteristic of the rotated problems. Notably, the non-separable characteristic prevents a rotated problem from being solved by using D one-dimensional search methods, and therefore this problem category is considered more challenging than the conventional problems.
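The effect of Equation (2.14) can be sketched as follows. The orthogonal matrix here is obtained from a QR decomposition of a random Gaussian matrix, which is one common construction and an assumption of this sketch, not necessarily the procedure of Salomon (1996).

```python
import numpy as np

def random_orthogonal(D, seed=0):
    """Orthogonal matrix M via QR decomposition of a random Gaussian matrix
    (a common construction, assumed here for illustration)."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((D, D)))
    return Q

def rotated(f, M):
    """Wrap a benchmark f so that it is evaluated on Z = M X (Equation 2.14)."""
    return lambda X: f(M @ X)

rastrigin = lambda X: float(np.sum(X**2 - 10 * np.cos(2 * np.pi * X) + 10))

D = 10
M = random_orthogonal(D)
f_rot = rotated(rastrigin, M)

# Changing a single component of X perturbs every component of Z = M X,
# which is exactly what makes the rotated problem non-separable.
X2 = np.zeros(D)
X2[0] = 1.0
print(np.count_nonzero(M @ X2))  # 10 -- every component of Z changes
```

Since M is orthogonal, the rotation preserves distances, so the global optimum value is unchanged (here f_rot is still 0 at the origin); only the coordinate alignment of the landscape is destroyed.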

2.4.1(c) Shifted Problems

In some conventional problems, the global optimum is located at the origin of the search space or has the same parameter value in all tested dimensions. For example, the Rastrigin function (F4) has its global optimum at the origin of the search space, i.e., [0, 0, …, 0]. Meanwhile, the Rosenbrock function (F3) has a global optimum of [1, 1, …, 1]. Some researchers might attempt to capitalize on the simple properties observed from these global optima and engage in unethical practice during the development of CI-based optimization algorithms (Li, 2010). For instance, when the global optimal value of one dimension is found by the algorithm, the values of all remaining dimensions can easily be obtained by copying the obtained value into these dimensions. To prevent such practice, another class of problems, namely the shifted problems, is developed. In this problem category, a vector o = [o1, o2, …, oD] is defined to randomly displace the entire fitness landscape of a conventional problem, including the location of its global optimum. Mathematically, the shifting operation is defined as:

Z = X − o    (2.15)

where Z, X, and o are defined as the shifted position, the original position, and the random displacement of the fitness landscape in the search space, respectively. It is noteworthy that the

vector o is randomly initialized once at the beginning of the optimization, and the value of this vector is maintained for all fitness evaluations during the remaining stages of the search process. Similar to the rotation operation, the shifting operation also increases the complexity of the conventional problems.

2.4.1(d) Complex Problems

In this subsection, the fourth problem category, known as the complex problems, is presented. As shown in Tables 2.6 and 2.7, this problem category comprises two types of functions, namely (1) the shifted rotated functions (F23 to F25) and (2) the expanded function (F26). Specifically, a shifted rotated function is formulated by integrating both the rotating and shifting characteristics into a conventional problem as follows:

Z = (X − o) × M    (2.16)

The mathematical operation defined in Equation (2.16) simultaneously introduces the non-separable characteristic and the randomly displaced fitness landscape into the conventional problems. Thus, it can be anticipated that the complexity of the shifted rotated functions is higher than that of the two previously mentioned problem categories (i.e., the rotated problems and the shifted problems). Apart from the shifted rotated functions, another member of the complex problems is known as the expanded function. As shown in Table 2.6, this function is generated by taking the two-dimensional Rosenbrock function (F3) as the input argument of the Griewank function (F6) (Suganthan et al., 2005).

2.4.1(e) Hybrid Composition Problems

A set of hybrid composition problems was proposed by Suganthan et al. (2005) in the CEC 2005 Special Session on Real-Parameter Optimization to further investigate the optimization capability of an algorithm on complicated benchmark problems. Specifically, the hybrid composition problems are constructed from several conventional benchmarks to create more challenging problems with a randomly displaced global optimum and some randomly displaced deep local optima (Suganthan et al., 2005).

In general, the mathematical formulation of a hybrid composition function is presented as follows (Suganthan et al., 2005):

F(X) = Σ_{i=1}^{n} { w_i × [ f_i'((X − o_i)/λ_i × M_i) + bias_i ] } + f_bias    (2.17)

where F(X) is the new composition function; f_i'(X) is the i-th conventional function used to construct the composition function; n denotes the total number of conventional functions; w_i is the weight value for each conventional function; λ_i and M_i are the stretch/compress factor and the linear transformation matrix for each conventional function, respectively. In this thesis, the selected hybrid composition functions (F27 to F30) are created from 10 conventional problems, namely two Ackley functions (F7), two Rastrigin functions (F4), two Sphere functions (F1), two Weierstrass functions (F8), and two Griewank functions (F6). Functions F27 and F28 share a similar fitness landscape, but the latter's landscape is contaminated with Gaussian noise. Functions F29 and F30 are essentially the same, except that different weights and stretch/compress factors are assigned to each contributing conventional problem of these two functions.
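The shape of Equation (2.17) can be sketched as follows. This is a heavily simplified illustration: it uses only two hypothetical component functions, takes each M_i to be the identity matrix, and fixes the weights w_i as constants, whereas the actual CEC 2005 construction computes the weights adaptively from the distance between X and each optimum o_i.

```python
import numpy as np

def composition(X, components, optima, lambdas, biases, weights, f_bias):
    """Simplified form of Eq. (2.17):
    F(X) = sum_i w_i * [f_i'((X - o_i)/lambda_i * M_i) + bias_i] + f_bias,
    with every M_i taken as the identity matrix for brevity."""
    total = 0.0
    for f, o, lam, b, w in zip(components, optima, lambdas, biases, weights):
        Z = (X - o) / lam          # shift, then stretch/compress (M_i = I assumed)
        total += w * (f(Z) + b)
    return total + f_bias

sphere = lambda Z: float(np.sum(Z**2))
rastrigin = lambda Z: float(np.sum(Z**2 - 10 * np.cos(2 * np.pi * Z) + 10))

# Hypothetical configuration: two components with displaced optima o1 and o2.
D = 5
o1, o2 = np.ones(D), -np.ones(D)
F = lambda X: composition(X, [sphere, rastrigin], [o1, o2],
                          lambdas=[1.0, 1.0], biases=[0.0, 100.0],
                          weights=[0.7, 0.3], f_bias=120.0)
print(round(F(o1), 6))  # 156.0 -- sphere term vanishes at its own optimum o1
```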

2.4.2 Real-World Problems

In this research work, a total of three engineering design problems are employed to investigate the applicability and feasibility of an optimization algorithm in tackling real-world problems. The employed problems are (1) the gear train design problem (Sandgren, 1990), (2) the frequency-modulated (FM) sound synthesis problem (Das and Suganthan, 2010), and (3) the spread spectrum radar polyphase code design problem (Das and Suganthan, 2010). The general descriptions and mathematical models of these three engineering design problems are presented in the following subsections.


2.4.2(a) Gear Train Design Problem

The gear train design problem attempts to obtain the optimal gear ratio for a compound gear train that consists of four gears. The objective function of this mechanical design problem is defined as follows (Sandgren, 1990):

f(x) = ( 1/6.931 − (x1 x2)/(x3 x4) )^2    (2.18)

where the number of teeth of each gear is represented by x_i ∈ [12, 60], i = 1, 2, 3, 4. The bound constraint defined in this problem restricts the number of teeth of each gear to the range of 12 to 60. As shown in Equation (2.18), the gear ratio must be as close as possible to 1/6.931 in order to minimize the cost function of the gear train.
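Equation (2.18) can be implemented directly. The evaluation point (19, 16, 43, 49) used below is a frequently reported best-known solution for this problem; it is used here only as an illustrative assumption, not as a result of this thesis.

```python
def gear_train(x):
    """Equation (2.18): squared error between the gear ratio x1*x2/(x3*x4) and 1/6.931."""
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

# Teeth counts are integers in [12, 60]. The vector (19, 16, 43, 49) is a
# frequently reported best solution (an assumption, for illustration only).
print(gear_train([19, 16, 43, 49]))  # a very small error, on the order of 1e-12
```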

2.4.2(b) Frequency Modulated Sound Synthesis Problem

FM sound synthesis plays an important role in several modern music systems because it provides a simple and efficient method to create complex sound timbres. Specifically, this problem attempts to generate an estimated sound [as shown in Equation (2.19)] similar to the target sound [as shown in Equation (2.20)] by optimizing the parameters of an FM synthesizer, which are defined as a six-dimensional vector X = (a1, w1, a2, w2, a3, w3).

y(t) = a1 sin(w1 t θ + a2 sin(w2 t θ + a3 sin(w3 t θ)))    (2.19)

y0(t) = 1.0 sin(5.0 t θ − 1.5 sin(4.8 t θ + 2.0 sin(4.9 t θ)))    (2.20)

where θ = 2π/100 and the parameters are in the range [−6.4, 6.35]. The objective function of the FM sound synthesis problem is given as follows (Das and Suganthan, 2010):

f(X) = Σ_{t=0}^{100} ( y(t) − y0(t) )^2    (2.21)

As shown in Equation (2.21), the objective of this problem is to minimize the sum of squared errors between the estimated sound [Equation (2.19)] and the target sound [Equation (2.20)]. It is also noteworthy that this problem is a highly complex multimodal problem with strong epistasis.
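The objective in Equation (2.21) can be sketched as follows. The target parameter vector is the one implied by Equation (2.20) under the Das and Suganthan (2010) definition and is treated here as an assumption.

```python
import math

THETA = 2 * math.pi / 100  # theta in Equations (2.19)-(2.20)

def fm_wave(params, t):
    """Estimated sound y(t) of Equation (2.19) for X = (a1, w1, a2, w2, a3, w3)."""
    a1, w1, a2, w2, a3, w3 = params
    return a1 * math.sin(w1 * t * THETA
                         + a2 * math.sin(w2 * t * THETA
                                         + a3 * math.sin(w3 * t * THETA)))

# Target parameters implied by Equation (2.20) (an assumption in this sketch).
TARGET = (1.0, 5.0, -1.5, 4.8, 2.0, 4.9)

def fm_objective(params):
    """Equation (2.21): sum of squared errors over t = 0, 1, ..., 100."""
    return sum((fm_wave(params, t) - fm_wave(TARGET, t)) ** 2 for t in range(101))

print(fm_objective(TARGET))  # 0.0 at the target parameter vector
```

Because the parameters appear inside nested sinusoids, small parameter changes interact strongly, which is the source of the epistasis noted above.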


2.4.2(c) Spread Spectrum Radar Polyphase Code Design Problem

One crucial factor that needs to be considered in the design of a radar system that uses pulse compression is the selection of an appropriate waveform. Various radar pulse modulation techniques have been developed to attain proper pulse compression. Polyphase coding is one of the widely used compression techniques, attributed to its ability to produce lower side-lobes in the compressed signal. Apart from that, this technique is also easier to implement and thus more feasible for practical applications. Dukic and Dobrosavljevic (1990) proposed a compression technique to synthesize the polyphase pulse code by capitalizing on the properties of the aperiodic autocorrelation function and the assumption of coherent radar pulse processing in the receiver. This compression technique can be modeled as a continuous, min-max, nonlinear, and non-convex problem which comprises numerous local optima in the problem search space. The formal statement of this technique is defined as follows (Das and Suganthan, 2010):

min f(X) = max{φ1(X), …, φ2m(X)}    (2.22)

where X = {(x1, …, xD) ∈ R^D | 0 ≤ xj ≤ 2π} and m = 2D − 1, with

φ_{2i−1}(X) = Σ_{j=i}^{D} cos( Σ_{k=|2i−j−1|+1}^{j} x_k ),   i = 1, 2, …, D

φ_{2i}(X) = 0.5 + Σ_{j=i+1}^{D} cos( Σ_{k=|2i−j−1|+1}^{j} x_k ),   i = 1, 2, …, D − 1    (2.23)

φ_{m+i}(X) = −φ_i(X),   i = 1, 2, …, m

As shown in Equations (2.22) and (2.23), the objective of this problem is to minimize the magnitude of the biggest among the samples of the autocorrelation function φ. This function is related to the complex envelope of the compressed radar pulse at the optimal receiver output. The variable xk, on the other hand, represents the symmetrized phase differences observed during the compression process.
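The min-max objective of Equations (2.22) and (2.23) can be sketched as follows, following the index limits as printed in Equation (2.23); this is an illustrative transcription, not a verified reference implementation.

```python
import math

def radar_polyphase(x):
    """Equations (2.22)-(2.23): min-max objective for the polyphase code design.
    x is a phase vector with 0 <= x_j <= 2*pi; m = 2D - 1."""
    D = len(x)
    phi = []
    for i in range(1, D + 1):
        s = 0.0
        for j in range(i, D + 1):
            lo = abs(2 * i - j - 1) + 1
            s += math.cos(sum(x[k - 1] for k in range(lo, j + 1)))
        phi.append(s)                    # phi_{2i-1}, i = 1..D
        if i <= D - 1:
            s = 0.5
            for j in range(i + 1, D + 1):
                lo = abs(2 * i - j - 1) + 1
                s += math.cos(sum(x[k - 1] for k in range(lo, j + 1)))
            phi.append(s)                # phi_{2i}, i = 1..D-1
    phi += [-p for p in phi]             # phi_{m+i} = -phi_i, i = 1..m
    return max(phi)                      # f(X) = max over all 2m samples

print(radar_polyphase([0.0] * 5))  # 5.0 -- all cosines equal 1 at x = 0
```

Including the negated samples φ_{m+i} = −φ_i makes the maximum equal to the largest absolute sample value, which matches the interpretation of minimizing the biggest autocorrelation magnitude.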


2.5 Performance Metrics

The search performance of an algorithm can be tested over a fixed number of independent runs on a specific benchmark problem, and the results can then be aggregated via different performance metrics. In this research work, several performance metrics are employed to evaluate the overall performance of an optimization algorithm. These metrics are the mean error (Emean), success rate (SR), success performance (SP), and algorithm complexity (AC) (Suganthan et al., 2005). Apart from these performance metrics, a set of non-parametric statistical analyses (García et al., 2009, Derrac et al., 2011) is also employed to perform thorough comparisons between the tested algorithm and its compared peers. The following subsections provide detailed descriptions of each performance metric and the non-parametric statistical analyses used in this research work.

2.5.1 Mean Error

The mean error (Emean) is used to evaluate the search accuracy of an algorithm. Specifically, it is defined as the mean difference between the best (i.e., lowest) ObjV value found by a particular algorithm and the ObjV value of the actual global optimum (Emin) of a tested benchmark problem. To calculate the Emean value of an algorithm in solving a particular benchmark, the ObjV value of the global best solution found by the algorithm in each independent run is first recorded. The difference between each recorded ObjV value and the global optimum's Emin value is subsequently computed. Finally, the value of Emean is obtained by averaging the computed differences over the Q independent runs as follows (Suganthan et al., 2005):

Emean = Σ_{q=1}^{Q} [ObjV(Pg,q) − Emin] / Q    (2.24)

where Pg,q and ObjV(Pg,q) represent the global best solution and its corresponding ObjV value in the q-th independent run, and Q represents the number of independent runs performed in the experiment. For a minimization problem, an algorithm with a smaller Emean value has more promising search accuracy, considering that the best solution achieved by this algorithm is closer to the problem's global optimum.
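As a minimal illustration (not the thesis implementation), the Emean computation of Equation (2.24) can be sketched in Python; the list of per-run best ObjV values and the Emin constant are assumed inputs:

```python
# Hypothetical sketch of Equation (2.24): average the gap between each run's
# best ObjV value and the global optimum's ObjV value (Emin).

def mean_error(best_objvs, e_min):
    """E_mean over Q = len(best_objvs) independent runs."""
    return sum(objv - e_min for objv in best_objvs) / len(best_objvs)

# Five illustrative runs on a benchmark whose global optimum has ObjV = 0.
print(mean_error([1e-8, 3e-9, 2e-7, 0.0, 5e-8], 0.0))
```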

2.5.2 Success Rate

The success rate (SR) is the second performance metric used in this research work, employed to evaluate the search reliability of a tested algorithm. Specifically, it denotes the consistency of a tested algorithm in achieving successful runs when it is used to solve a particular benchmark. A successful run means that the tested algorithm solves a given problem with a solution that falls within the predefined accuracy level ε before the algorithm's termination criterion, i.e., the maximum number of fitness evaluations (FEmax), is met. Mathematically, the SR value is defined as the ratio of the number of successful runs to the total number of runs as follows (Suganthan et al., 2005):

SR = (Number of successful runs) / Q     (2.25)

As shown in Equation (2.25), an algorithm with a higher SR value is more reliable, considering that this algorithm is able to consistently solve the given problems within the predefined ε.

2.5.3 Success Performance

The third performance metric, namely the success performance (SP), is used to evaluate the search efficiency of a tested algorithm. More particularly, this performance metric computes the computational cost required by a tested algorithm to solve a given problem within the predefined ε. Alternatively, the SP value can be considered a quantitative representation of the algorithm's convergence speed in solving a particular benchmark. Mathematically, the SP value is defined as follows (Suganthan et al., 2005):

SP = mean(FEs for successful runs) × Q / (Number of successful runs)     (2.26)

In general, an algorithm with a smaller SP value is more desirable, as it implies that the tested algorithm is able to solve a given problem at the predefined ε with less computational cost and faster speed.
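The SR and SP metrics of Equations (2.25) and (2.26) can be sketched together as follows; the per-run records used here are illustrative assumptions:

```python
# Hypothetical sketch of Equations (2.25) and (2.26). Each run record is a
# (solved, fes_used) pair: solved marks a successful run (accuracy level met
# before FEmax), fes_used is the number of FEs consumed by that run.

def success_rate(runs):
    """SR = number of successful runs / Q."""
    return sum(1 for solved, _ in runs if solved) / len(runs)

def success_performance(runs):
    """SP = mean(FEs of successful runs) * Q / number of successful runs."""
    successes = [fes for solved, fes in runs if solved]
    if not successes:
        return float("inf")  # SP is undefined when no run succeeds
    return (sum(successes) / len(successes)) * len(runs) / len(successes)

runs = [(True, 12000), (True, 15000), (False, 30000), (True, 9000)]
print(success_rate(runs))         # 0.75
print(success_performance(runs))  # 16000.0
```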

2.5.4 Algorithm Complexity

The final performance metric used in this research work is the algorithm complexity (AC). Specifically, the AC value reveals the computational complexity of a tested algorithm at a given dimensionality. The procedure for computing the AC value of a tested algorithm is illustrated in Figure 2.11. Once the values of T0, T1, and T̂2 are obtained from Steps 1 to 3 in Figure 2.11, the AC value of the tested algorithm is computed as follows (Suganthan et al., 2005):

AC = (T̂2 − T1) / T0     (2.27)

In general, an algorithm with a smaller AC value is more desirable, considering that a smaller AC value implies that the algorithm has lower complexity.
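A minimal sketch of the AC computation, assuming the timings T0, T1, and the five complete-run timings T2 (Figure 2.11) have already been measured; all numeric values below are made up for illustration:

```python
# Hypothetical sketch of Equation (2.27): AC = (mean(T2) - T1) / T0.

def algorithm_complexity(t0, t1, t2_samples):
    t2_hat = sum(t2_samples) / len(t2_samples)  # mean of the five T2 timings
    return (t2_hat - t1) / t0

print(algorithm_complexity(0.5, 10.0, [14.0, 15.0, 16.0, 15.5, 14.5]))  # 10.0
```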

2.5.5 Non-Parametric Statistical Analyses

Apart from the previously mentioned performance metrics, this research work also employs a set of non-parametric statistical procedures (García et al., 2009, Derrac et al., 2011) to

Step 1: Run the following test program:
    for i = 1 to 1.00E+06
        x = (double) 5.55;
        x = x + x; x = x / x; x = x * x;
        x = sqrt(x); x = ln(x); x = exp(x);
        y = x / x;
    end for
    Record the computing time of the above procedure as T0;
Step 2: Evaluate the computing time for function F25 with 2.00E+06 evaluations as T1;
Step 3: Compute the complete computing time required by the tested algorithm to solve function F25 with 2.00E+06 evaluations as T2. Step 3 is executed five times to obtain the mean value T̂2 = Mean(T2);
Figure 2.11: Procedures to calculate the AC value of an algorithm (Suganthan et al., 2005).


perform rigorous comparisons between a tested algorithm and its peers. Unlike parametric tests, non-parametric tests can be used to analyze the performance of stochastic algorithms based on computational intelligence even when the assumptions on the data (in terms of independence, normality, and homoscedasticity) are violated (García et al., 2009, Derrac et al., 2011). In this thesis, the first non-parametric statistical analysis employed is the Wilcoxon test (García et al., 2009, Derrac et al., 2011). Specifically, the Wilcoxon test performs a pairwise comparison between the tested algorithm and each of its peers. This test is conducted at a significance level of 5% (i.e., α = 0.05) and the values of h, R+, R−, and p are reported. The h value indicates whether the performance of the tested algorithm is better (h = “+”), insignificantly different (h = “=”), or worse (h = “−”) than that of the other peer algorithm at the statistical level. R+ and R− denote the sums of the ranks for which the tested algorithm outperforms and underperforms the compared method, respectively. Meanwhile, the p-value represents the minimal level of significance for detecting differences. If the p-value is less than α, there is strong evidence that the better result achieved by the best algorithm in each case is statistically significant and was not obtained by chance.

The second non-parametric statistical analysis employed in this research work serves another purpose, i.e., to perform multiple comparisons between the tested algorithm and its peers (García et al., 2009, Derrac et al., 2011). To undertake the multiple comparison, the Friedman test (García et al., 2009, Derrac et al., 2011) is first performed to (1) calculate the average ranking of each algorithm and (2) detect significant differences between the behaviors of two or more algorithms through p-values. If a global difference is found (i.e., the reported p-value is less than α), a set of post-hoc procedures (Derrac et al., 2011) is then applied to characterize the concrete differences among the algorithms. Unlike the Friedman test, the post-hoc procedures are able to establish proper comparisons between a control method (i.e., the tested algorithm) and a set of algorithms (i.e., the peer algorithms) (Derrac et al., 2011) by defining a family of hypotheses, all related to the control method. The application of a post-hoc test can lead

to obtaining an adjusted p-value, which determines the degree of rejection of each hypothesis. This research work reports the adjusted p-values obtained using the following post-hoc procedures: Bonferroni-Dunn, Holm, and Hochberg (Derrac et al., 2011). Among the selected post-hoc procedures, the Bonferroni-Dunn procedure is known to be very conservative and might fail to detect many performance differences (Derrac et al., 2011). The Holm and Hochberg procedures, on the other hand, are more powerful than the Bonferroni-Dunn procedure in detecting performance differences between the tested algorithm and the other peer algorithms.
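To make the post-hoc step concrete, the sketch below implements the Holm step-down adjustment in plain Python. The input p-values are invented for illustration; in practice they would come from the pairwise comparisons against the control method:

```python
# Hypothetical sketch of the Holm adjustment: sort the unadjusted p-values,
# scale the i-th smallest by the number of remaining hypotheses, and enforce
# monotonicity so adjusted p-values never decrease (capped at 1).

def holm_adjust(p_values):
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        candidate = (m - rank) * p_values[i]   # (m - rank) hypotheses remain
        running_max = max(running_max, candidate)
        adjusted[i] = min(1.0, running_max)
    return adjusted

print(holm_adjust([0.01, 0.04, 0.03]))  # approximately [0.03, 0.06, 0.06]
```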

2.6 Summary

The basic PSO suffers from various drawbacks that restrict its applicability and feasibility in solving real-world problems. These drawbacks include premature convergence, diversity loss of the swarm, and the challenge of balancing the exploration/exploitation searches. This chapter has presented a thorough review of the diverse ideas used to improve the performance of PSO. Generally, these research works are categorized into four major classes, namely parameter adaptation, modified population topology, modified learning strategy, and hybridization with the OED technique. Despite producing PSO variants with competitive performance, most of these variants suffer from different tradeoffs. For instance, the CLPSO has an excellent capability of avoiding the local optima of complex multimodal problems, but it has inferior convergence speed in solving the unimodal and simple multimodal problems. Apart from that, it is also revealed that most of these variants are equipped with neither an alternative learning phase for particles that fail to improve their fitness nor a unique learning strategy for the neighborhood best particles. Intuitively, the inclusion of these two strategies could further improve the PSO's performance. To introduce the concept of an alternative learning phase, one of the recently proposed SI algorithms, namely the TLBO, is described in this chapter. It is noteworthy that the advantages and disadvantages of the TLBO have inspired the research works described in the next chapter.


As revealed by the literature, the search performance of an optimization algorithm can be empirically evaluated via a set of benchmark problems with different characteristics. The 30 benchmark problems employed in this research work are adequately described in this chapter. Three engineering design problems are also employed in this thesis to investigate the feasibility and applicability of the proposed works in solving real-world problems. Finally, the performance metrics and the non-parametric statistical analyses that are essential for the performance comparison between the proposed works and the peer algorithms are provided in the final part of this chapter.


CHAPTER 3
TEACHING AND PEER-LEARNING PARTICLE SWARM OPTIMIZATION

3.1 Introduction

As mentioned in the earlier chapters, most existing PSO variants do not provide an alternative learning phase when particles fail to improve their fitness during the search process. The lack of an alternative learning phase in the PSO framework prohibits a particle from exploring new search directions when the earlier phase does not benefit the particle's search. This might slow down the PSO's convergence towards the global optimum or, even worse, jeopardize its ability to locate the global optimum, considering that the algorithm wastes some of its resources navigating less promising trajectories. Another noteworthy drawback of PSOs is that, in most of them, the neighborhood best particle shares the same learning strategy with the non-fittest population members (Kiranyaz et al., 2010). For example, for the global best particle Pg in the BPSO with fully-connected topology, the self-cognitive and social components coincide, and this similarity could easily nullify its velocity. The consequence of this nullifying effect is that the Pg particle has a higher risk of stagnating at a local optimum or any arbitrary point in the search space, which subsequently attracts other population members towards it and eventually leads to premature convergence. In light of these facts, this chapter attempts to provide more promising search directions to the PSO swarm by devising an alternative learning phase for the particles and introducing a unique learning strategy for the global best particle. To this end, an enhanced PSO variant with a modified learning strategy, called the Teaching and Peer-Learning PSO (TPLPSO), is introduced in this chapter. It is noteworthy that the working mechanisms of the TPLPSO are inspired by and improved from the framework of the recently proposed TLBO (Rao et al., 2011, Rao et al., 2012). The remainder of this chapter is organized as follows. The next section provides the detailed framework description of the proposed TPLPSO.
Simulation results and comparisons are subsequently provided in the following section to


demonstrate the effectiveness of the proposed TPLPSO. Finally, this chapter is concluded in the last section.

3.2 Teaching and Peer-Learning PSO

This section systematically describes the working mechanism of the TPLPSO in the following aspects. First, the research idea that motivated the development of the TPLPSO is explained. Next, a general description of the TPLPSO is provided, followed by the implementation details of each module used by the TPLPSO. The differences between the TLBO and the TPLPSO are also summarized in order to highlight and justify the modifications proposed in the TPLPSO.

3.2.1 Research Ideas of TPLPSO

As mentioned earlier, the development of the TPLPSO is inspired by and modified from the TLBO. Previous studies reveal that the TLBO exhibits competitive search performance in solving various types of optimization problems. This implies that the search philosophy employed by the TLBO, especially the alternative learning phase called the learner phase, is indeed viable in seeking promising solutions in the search space. Despite the TLBO's promising performance, specific mechanisms used by the original TLBO framework do not accurately reflect the actual scenario of the classical teaching-learning process. For example, in the learner phase of the original TLBO, a learner randomly selects a peer to learn from without considering the knowledge level (i.e., fitness value) of the selected peer. This behavior contradicts real-world experience because learners tend to learn from someone with better knowledge than themselves. Another notable observation is that, in the original framework of the TLBO, all learners enter the learner phase regardless of whether they have successfully improved their knowledge level in the previous teacher phase. However, in actual scenarios, learners tend to seek help from their peers only if they fail to improve the knowledge they obtained from their teacher in the classroom.


Based on the aforementioned statements, it can be concluded that the development of the original TLBO model could be incomplete because of the inaccurate mapping between the search mechanisms of the TLBO framework and the actual teaching and learning process. On the other hand, previous studies performed by Back et al. (1997) and Akhtar et al. (2013) have confirmed that a slightly different understanding and perspective in theorizing a natural process can lead to different consequential effects on the resulting nature-inspired optimization algorithm. These studies concluded that a more accurate modeling of the real-world scenario can lead to a better nature-inspired optimization algorithm. Hans-Paul Schwefel, one of the ES founders, emphasized this necessity in his article entitled Challenges to and Future Development of EA in The Handbook of Evolutionary Computation (Back et al., 1997, Akhtar et al., 2013). Accordingly:

“Current evolutionary algorithms are certainly better models of organic evolution. Nevertheless, they are still far from being isomorphic mappings of what happens in nature. In order to perform better, an appropriate model of evolution would have to comprise the full temporal and spatial development on the earth (a real global model) if not within the whole universe. We must be more modest in order to understand at least a little of what really happens – as always within natural science”

The studies performed by Akhtar et al. (2013) have provided sound computational evidence to support Schwefel's hypothesis. Specifically, Akhtar et al. (2013) developed a framework for EA based on Peirce's theory of evolution (Ochs and Peirce, 1993), which offers more adequate evolutionary principles than Darwin's theory. According to Akhtar et al. (2013), Peirce's theory was not introduced as an antithesis of Darwin's but as a more universal theory of evolution, one that even meets Schwefel's conditions for modeling an EA. Experimental results have revealed that the Peircean EA outperforms the classical Darwinian EA because the former can address several major problems in contemporary EAs, such as diversity loss, stagnation, and premature convergence.


Motivated by the aforementioned statements, it can be deduced that the incomplete modeling of the original TLBO offers the possibility of enhancing the algorithm's optimization capability by refining its mathematical model. It must be emphasized that this chapter does not attempt to introduce a new metaphor/algorithm just for the sake of being close to reality. Rather, the research work proposed in this chapter mainly focuses on identifying existing behaviors of the original TLBO that do not accurately represent real-world teaching-learning scenarios. Based on the observed findings, specific appropriate innovations are proposed for the original TLBO to refine its framework and further improve its optimization capability. This enhanced TLBO framework is then adapted into the BPSO to develop the TPLPSO.

3.2.2 General Description of TPLPSO

This subsection provides a general description of the proposed TPLPSO. As explained in the previous subsection, the learning strategies of the TPLPSO were developed by adapting the enhanced TLBO framework into the BPSO. Similar to the TLBO, the proposed TPLPSO evolves the particles through two phases, namely, the teaching phase and the peer-learning phase. For the TPLPSO with fully-connected topology, the global best particle (Pg) with the best knowledge (i.e., fitness value) is assigned as the teacher particle, whereas the remaining particles are assigned as the student particles. During the teaching phase, each student particle seeks knowledge improvement through its self-cognition (i.e., Pi) and the knowledge of the teacher particle (i.e., Pg). However, it must be emphasized that this process does not guarantee that the teacher particle can always improve the knowledge of every student particle. Some student particles can improve their knowledge through the information imparted by the teacher, whereas others may fail to do so. In the latter case, the student particles enter the peer-learning phase, where each particle learns from its classmates or peers. To ensure better knowledge improvement, the student particles tend to select peers with better knowledge.


Furthermore, a unique learning strategy called the stochastic perturbation-based learning strategy (SPLS) is introduced to tackle the premature convergence issue, considering that the teacher particle of the TPLPSO could be stuck at a local optimum or any arbitrary point in the search space. Specifically, the SPLS provides the teacher particle with extra momentum when the fitness of this particle is not improved for a long time during the search process. In the context of the classroom teaching-learning process, the SPLS can be considered a brainstorming session offered to the teacher particle when its knowledge is stuck at a certain point. This strategy allows the teacher particle to occasionally acquire some new and useful knowledge by performing some wild guesses or thinking out of the box (i.e., via the random perturbation process of the SPLS). In the following subsections, the interaction and implementation of the TPLPSO particles during the teaching phase and the peer-learning phase are presented in detail. Additionally, the mechanism employed by the SPLS module to alleviate swarm stagnation is also described.

3.2.3 Teaching Phase

This subsection provides the detailed description of the teaching phase of the TPLPSO. In the teaching phase, each student particle i updates its velocity Vi and position Xi via Equations (2.1) and (2.2). In other words, the learning strategy employed by the teaching phase of the TPLPSO is the same as that used by the BPSO. In contrast to the teacher phase of the original TLBO in Equation (2.11), where each learner improves its knowledge from the teacher (Xteacher) and from the average or mainstream knowledge in the class (Xmean), each student particle in the proposed TPLPSO learns from the teacher (i.e., the global best experience Pg) and from its self-cognition (i.e., the personal best experience Pi), as shown in Equation (2.1). The objective function value (ObjV) of the updated student particle i, i.e., ObjV(Xi), is then evaluated and compared with the ObjV of its personal best position Pi, i.e., ObjV(Pi). If the former has a better fitness than the latter, i.e., ObjV(Xi) < ObjV(Pi), it implies that the

Teaching_Phase (Vi, Xi, Pi, Pg, ObjV(Pi), ObjV(Pg), fes, fc)
Input: Particle i's old velocity (Vi), position (Xi), and personal best position (Pi); old global best position (Pg); old ObjV of particle i's personal best position [ObjV(Pi)]; old ObjV of the global best particle [ObjV(Pg)]; number of fitness evaluations consumed (fes); failure counter (fc)
1: Update the velocity Vi and position Xi of student particle i via Equations (2.1) and (2.2), respectively;
2: Evaluate the ObjV of the updated Xi [i.e., ObjV(Xi)];
3: fes = fes + 1; //update the FEs count each time a fitness evaluation is performed
4: if ObjV(Xi) < ObjV(Pi) then
5:     Pi = Xi, ObjV(Pi) = ObjV(Xi);
6:     if ObjV(Xi) < ObjV(Pg) then
7:         Pg = Xi, ObjV(Pg) = ObjV(Xi);
8:         fc = 0;
9:     else
10:        fc = fc + 1;
11:    end if
12: else
13:    fc = fc + 1;
14: end if
Output: Updated Vi, Xi, Pi, Pg, ObjV(Pi), ObjV(Pg), fes, and fc
Figure 3.1: Teaching phase of the proposed TPLPSO.

student particle i has successfully improved its knowledge, and the updated Xi then replaces Pi. Similarly, the updated Xi replaces Pg if the value of ObjV(Xi) is smaller than that of ObjV(Pg), where ObjV(Pg) denotes the ObjV of the teacher particle. In this scenario, the improved student particle i becomes more knowledgeable than the teacher particle, thereby leading to the promotion of the former to teacher particle and the demotion of the latter to student particle. Figure 3.1 illustrates the procedure of the teaching phase in the TPLPSO. As explained in Section 3.2.5, a perturbation mechanism (i.e., the SPLS) is performed on the teacher particle when ObjV(Pg) is not improved for a long time during the search process. Note that the number of FEs consumed by an algorithm is updated each time a particle performs the fitness evaluation process. A failure counter fc is therefore used to record the number of FEs during which Pg fails to improve its fitness. Specifically, as illustrated in Figure 3.1, it is increased by one when a student particle fails to replace the teacher particle (see lines 10 and 13), and it is reset to zero if any student particle is promoted to teacher particle (see line 8). As explained in Section 3.2.5, when the value stored in fc exceeds the

predefined threshold Z (i.e., fc > Z), the SPLS is triggered to perform a perturbation on the teacher particle Pg and assist it in escaping from inferior regions of the search space.
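The teaching-phase step above can be sketched in Python as follows; the inertia weight, acceleration coefficients, and the toy sphere objective are illustrative assumptions, not values prescribed by the thesis:

```python
import random

# Hypothetical sketch of one teaching-phase step (cf. Figure 3.1): a standard
# BPSO velocity/position update followed by the failure-counter bookkeeping.

W, C1, C2 = 0.729, 1.49445, 1.49445   # assumed BPSO coefficients

def sphere(x):                         # toy objective for illustration
    return sum(xd * xd for xd in x)

def teaching_phase(v, x, p_i, p_g, fc, objective=sphere):
    new_v = [W * v[d]
             + C1 * random.random() * (p_i[d] - x[d])
             + C2 * random.random() * (p_g[d] - x[d])
             for d in range(len(x))]                  # velocity update
    new_x = [x[d] + new_v[d] for d in range(len(x))]  # position update
    if objective(new_x) < objective(p_i):
        p_i = list(new_x)
        if objective(new_x) < objective(p_g):
            p_g, fc = list(new_x), 0   # student promoted to teacher
        else:
            fc += 1
    else:
        fc += 1                        # no improvement: count the failure
    return new_v, new_x, p_i, p_g, fc

v, x, p_i, p_g, fc = [0.0, 0.0], [2.0, -1.0], [2.0, -1.0], [0.5, 0.5], 0
v, x, p_i, p_g, fc = teaching_phase(v, x, p_i, p_g, fc)
```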

3.2.4 Peer-Learning Phase

In this subsection, the mechanism of the peer-learning phase of the TPLPSO is described. As mentioned earlier, the peer-learning phase is only offered to the student particles that fail to improve their knowledge in the previous teaching phase. During the peer-learning phase, these student particles are allowed to select an exemplar Pe among their peers as their guide. Specifically, for each student particle i, the personal best positions of all peers in the population are eligible as exemplar candidates, except for the Pi of student particle i itself and the teacher particle Pg. Both the Pi and Pg particles are excluded because the main purpose of the peer-learning phase is to allow the student particle i to seek knowledge improvement through interaction with its peers, instead of through its own exploration or through the teacher particle. During the peer-learning process, the student particle i tends to select a peer particle j with better fitness, i.e., ObjV(Pj) < ObjV(Pi), because a more knowledgeable exemplar has a better chance of improving the fitness of student particle i. Thus, instead of using the random selection technique proposed in the original TLBO, this research work uses the roulette wheel selection technique to select the student particle i's exemplar, i.e., Pei, based on the personal best fitness criterion. Prior to the selection of the exemplar Pei for particle i, each exemplar candidate k is assigned a weightage value Wk computed as follows:

Wk = [ObjVmax − ObjV(Pk)] / (ObjVmax − ObjVmin),     k ∈ [1, K]     (3.1)

where ObjVmax and ObjVmin are the worst and best personal best fitness values of the exemplar candidates, respectively, and K is the number of exemplar candidates available for the roulette wheel selection. As shown in Equation (3.1), an exemplar candidate k with better fitness has a larger Wk value, which implies that this exemplar candidate has a greater probability of being selected as the exemplar. It is noteworthy that the use of the roulette


Exemplar_Selection (Pall, ObjV(Pall), Pg, particle i)
Input: Personal best positions (Pall = [P1, P2, …, PS]) and the corresponding ObjV values (ObjV(Pall) = [ObjV(P1), ObjV(P2), …, ObjV(PS)]) of all particles in the population; global best particle's position (Pg); particle i's index
1: Identify the indices of the target particle and the global best particle as i and g, respectively;
2: Excluding Pi and Pg, construct an array to store the exemplar candidates, i.e., ECi = [P1, P2, …, PK];
3: Identify ObjVmax and ObjVmin from ECi;
4: for each exemplar candidate k do
5:     Calculate Wk for exemplar candidate k via Equation (3.1);
6: end for
7: Construct an array WCi = [W1, W2, …, WK] to store the weight contributions of the exemplar candidates;
8: Perform the roulette wheel selection technique based on WCi to select the exemplar particle Pei;
9: Return Pei;
Output: Exemplar particle for student particle i (Pei)
Figure 3.2: Exemplar selection in the peer-learning phase of the TPLPSO.

selection technique in selecting the exemplar could introduce a high selection pressure and therefore lead to rapid diversity loss of the swarm. To compensate for this drawback, the SPLS module is introduced in Section 3.2.5 to provide extra diversity to the TPLPSO. The procedure of selecting the exemplar particle Pei for the student particle i is described in Figure 3.2. Considering that the Pei exemplar of student particle i is probabilistically selected, two outcomes are possible: (1) the Pei exemplar has a better fitness than the student particle i [i.e., ObjV(Pei) < ObjV(Pi)], or (2) the Pei exemplar has a worse fitness than the student particle i [i.e., ObjV(Pei) > ObjV(Pi)]. In scenario 1, the student particle i is attracted toward the Pei exemplar because the latter is more knowledgeable and is more likely to improve the former's fitness. In scenario 2, the selected Pei exemplar is inferior to the student particle i, and the former is unlikely to contribute to the latter's fitness improvement. Thus, the student particle i is repelled from the Pei exemplar. This repelling strategy aims to preserve the swarm diversity by preventing the student particle i from converging towards a Pei exemplar with inferior performance. The learning strategies for the student particle i in scenarios 1 and 2 of the peer-learning phase are expressed as follows:


Peer_Learning_Phase (Vi, Xi, Pi, Pg, ObjV(Pi), ObjV(Pg), fes, fc, Pall, ObjV(Pall), particle i)
Input: Particle i's old velocity (Vi), position (Xi), and personal best position (Pi); old global best position (Pg); old ObjV of particle i's personal best position [ObjV(Pi)] and of the global best position [ObjV(Pg)]; number of fitness evaluations consumed (fes); failure counter (fc); personal best positions (Pall = [P1, P2, …, PS]) and the corresponding ObjV values (ObjV(Pall) = [ObjV(P1), ObjV(P2), …, ObjV(PS)]) of all particles in the population; particle i's index
1: Pei = Exemplar_Selection (Pall, ObjV(Pall), Pg, particle i);
2: Update the velocity Vi and position Xi of student particle i via Equations (3.2) and (2.2), respectively;
3: Evaluate the ObjV of the updated Xi [i.e., ObjV(Xi)];
4: fes = fes + 1;
5: if ObjV(Xi) < ObjV(Pi) then
6:     Pi = Xi, ObjV(Pi) = ObjV(Xi);
7:     if ObjV(Xi) < ObjV(Pg) then
8:         Pg = Xi, ObjV(Pg) = ObjV(Xi);
9:         fc = 0;
10:    else
11:        fc = fc + 1;
12:    end if
13: else
14:    fc = fc + 1;
15: end if
Output: Updated Vi, Xi, Pi, Pg, ObjV(Pi), ObjV(Pg), fes, and fc
Figure 3.3: Peer-learning phase in the TPLPSO.

V  cr1 ( Pei  X i ), Vi   i Vi  cr2 ( Pei  X i ),

ObjV ( Pei )  ObjV ( Pi ) otherwise

(3.2)

where c = 2.0 is the acceleration coefficient, and r1 and r2 are random numbers in the range [0, 1]. Similar to the teaching phase, the ObjV of the updated position Xi is evaluated. If the updated Xi has a better fitness than Pi [i.e., ObjV(Xi) < ObjV(Pi)], Xi replaces Pi. Similarly, Xi replaces Pg if ObjV(Xi) < ObjV(Pg). The content of fc is reset to zero or increased by one, depending on the fitness of the updated Xi. The procedure of the peer-learning phase is illustrated in Figure 3.3. Note that the peer-learning phase of the TPLPSO shares several similarities with the search mechanisms of the CLPSO, FLPSO-QIW, FIPS, and OLPSO, given that these PSO variants use exemplars derived from non-global best solutions to guide the search. Nevertheless, the working mechanism used in the peer-learning phase to obtain the exemplar


is different from those of these PSO variants. For example, the exemplar of a FIPS particle is derived from all its neighborhood members, whereas the CLPSO and FLPSO-QIW randomly select two particles from the population to construct each dimensional component of an exemplar via tournament selection. The OLPSO, on the other hand, employs the orthogonal experimental design (OED) technique to construct the exemplars. Unlike these PSO variants, the student particle in the TPLPSO probabilistically selects another peer particle in the population (based on the personal best fitness criterion) as the exemplar to guide its search during the peer-learning phase.
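A hedged sketch of the peer-learning machinery described above, combining the roulette wheel selection over Equation (3.1) weights with the attract/repel velocity rule of Equation (3.2); the data structure `pbest_objvs` and the particle indices are illustrative assumptions, not the thesis implementation:

```python
import random

C = 2.0  # acceleration coefficient, as stated for Equation (3.2)

def select_exemplar(pbest_objvs, i, g):
    """Roulette wheel selection over Equation (3.1) weights, excluding Pi and Pg."""
    candidates = [k for k in pbest_objvs if k not in (i, g)]
    objs = [pbest_objvs[k] for k in candidates]
    obj_max, obj_min = max(objs), min(objs)
    if obj_max == obj_min:
        return random.choice(candidates)     # all weights equal
    weights = [(obj_max - pbest_objvs[k]) / (obj_max - obj_min) for k in candidates]
    pick, acc = random.uniform(0.0, sum(weights)), 0.0
    for k, w in zip(candidates, weights):
        acc += w
        if pick <= acc:
            return k
    return candidates[-1]                    # guard against float round-off

def peer_learning_velocity(v, x, p_e, objv_pe, objv_pi):
    """Equation (3.2): attract toward a fitter exemplar, repel from a worse one."""
    r = random.random()
    sign = 1.0 if objv_pe < objv_pi else -1.0
    return [v[d] + sign * C * r * (p_e[d] - x[d]) for d in range(len(x))]

pbest_objvs = {0: 3.2, 1: 0.9, 2: 1.4, 3: 2.7, 4: 0.1}
e = select_exemplar(pbest_objvs, i=0, g=4)   # one of particles 1, 2, or 3
```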

3.2.5 Stochastic Perturbation-Based Learning Strategy

As mentioned earlier, the learning strategies shown in Equations (2.1) and (3.2) involve random movements, and thus there is a probabilistic chance that the ObjV of the teacher particle is not improved. Generally, there are two possible reasons why the value of ObjV(Pg) is not improved for a long time during the search process. First, the teacher particle has successfully located the global optimum of the given problem. Second, the teacher particle stagnates at a local optimum or an arbitrary point in the search space and has no extra momentum to escape from these inferior regions. While the first possibility is desirable for the search process, the second could lead to poor optimization outcomes. This is because the student particles tend to be attracted towards the teacher particle, as shown in Equation (2.1). If the latter is stuck at some inferior point in the search space, premature convergence is likely to occur due to the clustering behavior of the student particles around the teacher particle with inferior fitness. To mitigate the premature convergence issue described, a stochastic perturbation-based learning strategy (SPLS) is performed on the teacher particle when ObjV(Pg) is not improved for Z successive fitness evaluations (FEs). As mentioned in the earlier subsection, a failure counter fc is used to record the number of FEs during which Pg fails to improve its fitness. When fc exceeds the threshold Z (i.e., fc > Z), the SPLS is triggered to perform a perturbation on the teacher particle Pg. The parameter Z should not be set too large or too

small, considering that the former tends to consume excessive computational resources, whereas the latter is likely to jeopardize the algorithm's convergence speed. In the SPLS, one of the d-th dimensions of the Pg particle (i.e., Pg,d) is first randomly selected and then perturbed via a normal distribution as follows:

Pgper,d = Pg,d + sgn(r3)·r4·(Xmax,d − Xmin,d)     (3.3)

where Pgper,d is the perturbed Pg,d; sgn(·) is the sign function; r3 ≠ 0 is a random number in the range [−1, 1] with uniform distribution; and r4 is a random number generated from the normal distribution N(μ, σ²) with mean μ = 0 and standard deviation σ = R. R denotes the perturbation range, which linearly decreases with the number of FEs as follows:

R = Rmax − (Rmax − Rmin) · fes / FEmax     (3.4)

where Rmax = 1 and Rmin = 0.1 are the maximum and minimum perturbation ranges, respectively; fes is the number of FEs consumed so far by the algorithm; and FEmax is the maximum number of FEs defined as the termination criterion of the TPLPSO. Figure 3.4 presents the implementation of the SPLS. Once the perturbation process [Equation (3.3)] is completed, the ObjV of the perturbed Pg particle (i.e., Pgper) is evaluated to obtain ObjV(Pgper). The Pgper particle replaces the existing Pg particle if

SPLS (Pgold, ObjV(Pgold), Rmax, Rmin, fes)
Input: Previous global best position (Pgold) and the corresponding ObjV [ObjV(Pgold)], maximum perturbation range (Rmax), minimum perturbation range (Rmin), number of fitness evaluations (fes)
1: Compute the perturbation range R using Equation (3.4);
2: Randomly select a dimension d of the Pgold particle (i.e., Pgold,d) for the perturbation;
3: Perform the perturbation on Pgold,d using Equation (3.3) to produce Pgper,d;
4: Perform fitness evaluation on the Pgper particle;
5: if ObjV(Pgper) < ObjV(Pgold) then Pg = Pgper, ObjV(Pg) = ObjV(Pgper);
6: else
7:   Pg = Pgold, ObjV(Pg) = ObjV(Pgold);
Output: Updated Pg and the corresponding ObjV(Pg)

Figure 3.4: SPLS in the TPLPSO.


the former has a better fitness than the latter [i.e., ObjV(Pgper) < ObjV(Pg)]. Otherwise, the Pgper particle is discarded because no fitness improvement is achieved by the SPLS.
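The SPLS procedure above can be summarized in code. The following is a minimal Python sketch under stated assumptions: particles are plain lists, the objective is minimised, and the per-dimension bounds Xmin,d/Xmax,d are passed in as lists; all function and variable names are illustrative, not from the thesis.

```python
import random

R_MAX, R_MIN = 1.0, 0.1  # maximum/minimum perturbation ranges (Rmax, Rmin)

def perturbation_range(fes, fe_max):
    """Equation (3.4): R decreases linearly from R_MAX to R_MIN over the run."""
    return R_MAX - (R_MAX - R_MIN) * fes / fe_max

def spls(p_g, obj_g, objective, x_min, x_max, fes, fe_max, rng=random):
    """Perturb one randomly chosen dimension of the teacher particle
    (Equation (3.3)) and keep the result only if its fitness improves."""
    r = perturbation_range(fes, fe_max)
    d = rng.randrange(len(p_g))          # randomly selected dimension
    r3 = rng.uniform(-1.0, 1.0)          # uniform in [-1, 1]
    r4 = rng.gauss(0.0, r)               # drawn from N(0, R^2)
    candidate = list(p_g)
    candidate[d] = p_g[d] + (1.0 if r3 >= 0 else -1.0) * r4 * (x_max[d] - x_min[d])
    obj_cand = objective(candidate)      # costs one extra fitness evaluation
    if obj_cand < obj_g:                 # greedy replacement, as in Figure 3.4
        return candidate, obj_cand
    return p_g, obj_g
```

For example, `spls([1.0, 1.0], 2.0, sphere, [-5, -5], [5, 5], 0, 1000)` either returns an improved teacher position or the unchanged one; the extra fitness evaluation it consumes is counted against FEmax in the full algorithm.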

3.2.6 Complete Framework of TPLPSO

The complete framework of the proposed TPLPSO is implemented by integrating the previously explained teaching phase, peer-learning phase, and SPLS, as illustrated in Figure 3.5. Contrary to the original TLBO, where all learners enter the learner phase, the proposed TPLPSO only offers the peer-learning phase to student particles that fail to improve their knowledge during the preceding teaching phase. Additionally, Figure 3.5 shows that the stagnation check is performed as soon as the teaching or peer-learning phase is completed (see lines 6 to 9 and lines 14 to 17). The main purpose of this procedure is to detect whether the threshold Z is exceeded, each time after

TPLPSO
Input: Population size (S), dimensionality of problem space (D), objective function (F), initialization domain (RG), problem's accuracy level (ε), maximum number of fitness evaluations (FEmax)
1:  Generate initial swarm and initialize fes = fc = 0;
2:  while fes < FEmax do
3:    for each student particle i do
4:      previous_ObjV(i) = ObjV(Pi);
5:      Perform Teaching_Phase (Vi, Xi, Pi, Pg, ObjV(Pi), ObjV(Pg), fes, fc);
6:      /*check for stagnation*/
7:      if fc > Z then
8:        Perform SPLS (Pgold, ObjV(Pgold), Rmax, Rmin, fes);
9:      end if
10:     /*no fitness improvement is achieved from teaching phase*/
11:     if previous_ObjV(i) < ObjV(Xi) then
12:       Perform Peer_Learning_Phase (Vi, Xi, Pi, Pg, ObjV(Pi), ObjV(Pg), fes, fc, Pall, ObjV(Pall), particle i);
13:     end if
14:     /*check for stagnation*/
15:     if fc > Z then
16:       Perform SPLS (Pgold, ObjV(Pgold), Rmax, Rmin, fes);
17:     end if
18:   end for
19: end while
Output: The best found solution, i.e., the teacher particle or the global best particle (Pg)

Figure 3.5: Complete framework of the TPLPSO.


a fitness evaluation is consumed by a student particle during the teaching or peer-learning phase. As soon as the threshold Z is exceeded, the SPLS (as illustrated in Figure 3.4) is invoked immediately to help the teacher particle Pg escape from the inferior regions of the search space. This strategy prevents the entrapment of the Pg particle in a local optimum or an arbitrary point of the search space for an excessive number of fitness evaluations, which could subsequently degrade the search accuracy and the convergence speed of the algorithm.
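To make the control flow concrete, the framework can be condensed into a runnable Python sketch. This is a simplified approximation rather than the thesis implementation: the teaching phase uses the standard inertia-weight PSO update referred to as Equation (2.1), the peer-learning phase is omitted for brevity, the SPLS step is inlined, and resetting the failure counter after each perturbation is an assumption; all names are illustrative.

```python
import random

def tplpso_sketch(objective, dim, x_min, x_max, swarm=30, fe_max=3000,
                  z=5, rng=random):
    """Minimal sketch of the TPLPSO control flow on a box [x_min, x_max]^dim."""
    w_max, w_min, c = 0.9, 0.4, 2.0
    x = [[rng.uniform(x_min, x_max) for _ in range(dim)] for _ in range(swarm)]
    v = [[0.0] * dim for _ in range(swarm)]
    p = [xi[:] for xi in x]                       # personal best positions
    fp = [objective(xi) for xi in x]              # personal best fitness
    g = min(range(swarm), key=lambda i: fp[i])
    pg, fg = p[g][:], fp[g]                       # teacher (global best)
    fes, fc = swarm, 0                            # FEs consumed, failure counter
    while fes < fe_max:
        w = w_max - (w_max - w_min) * fes / fe_max
        for i in range(swarm):
            for d in range(dim):                  # teaching phase, Equation (2.1)
                v[i][d] = (w * v[i][d]
                           + c * rng.random() * (p[i][d] - x[i][d])
                           + c * rng.random() * (pg[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], x_min), x_max)
            f = objective(x[i]); fes += 1
            if f < fp[i]:
                p[i], fp[i] = x[i][:], f
            if f < fg:
                pg, fg, fc = x[i][:], f, 0        # teacher improved: reset counter
            else:
                fc += 1
            if fc > z:                            # stagnation detected: run SPLS
                r = 1.0 - 0.9 * fes / fe_max      # Equation (3.4), R: 1.0 -> 0.1
                d = rng.randrange(dim)
                cand = pg[:]
                cand[d] += ((1.0 if rng.uniform(-1, 1) >= 0 else -1.0)
                            * rng.gauss(0.0, r) * (x_max - x_min))
                f_cand = objective(cand); fes += 1
                if f_cand < fg:                   # greedy replacement
                    pg, fg = cand, f_cand
                fc = 0
    return pg, fg
```

On a 2-D sphere function the sketch converges quickly toward the origin, illustrating how the SPLS branch only fires when the teacher stalls for more than Z evaluations.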

3.2.7 Comparison between TPLPSO and TLBO

As mentioned earlier, the learning strategies of TPLPSO were developed by adapting the enhanced TLBO framework into BPSO. In this section, the differences between the TLBO and TPLPSO are summarized in order to highlight and justify the modifications proposed in the TPLPSO. First, as shown in the teacher phase of the original TLBO in Equation (2.11), each learner searches for knowledge improvement via the knowledge of the teacher (Xteacher) and the average or mainstream knowledge (Xmean) (Sun et al., 2004b). In the proposed TPLPSO, the learning strategy adopted by the student particle is the same as the one used by BPSO, that is, Equation (2.1). One reason that this research work discards Equation (2.11) in the teaching phase of TPLPSO is that this learning strategy shows contradictory behaviors that do not reflect the actual scenario of the classroom teaching-learning process. For example, the value of Xmean in Equation (2.11) can be near zero because positive and negative values may cancel each other out. These positive and negative values could be treated as two types of extreme knowledge that exist in the classroom, and they have conflicting effects on the mainstream knowledge of the classroom. It is unlikely for the classroom to achieve a common mainstream knowledge (i.e., Xmean = 0) with the presence of these two types of extreme knowledge. Another justification for using Equation (2.1) instead of Equation (2.11) in the teaching phase of TPLPSO is based on the observation of students' behavior when they learn in the classroom. Specifically, when a student attempts to

improve his/her knowledge during the teaching process, he/she learns based on the knowledge imparted by the teacher (Pg) as well as his/her self-cognition (Pi). This behavior is well represented by Equation (2.1). Meanwhile, in the learner phase of the original TLBO, the learner randomly selects a peer to learn from without considering the knowledge level of that peer, as shown in Equations (2.12) and (2.13). This behavior contradicts real-world scenarios because learners tend to look for someone with better knowledge to learn from. To represent the real-world scenario more accurately, the peer-learning phase of TPLPSO is modified so that the student particle selects its exemplar via the roulette wheel-selection technique, as shown in Figure 3.2. This technique ensures that a more knowledgeable peer has a higher probability of being selected as the exemplar, thereby providing a more promising search direction. In the original framework of TLBO (Figure 2.10), all learners enter the learner phase regardless of whether they successfully improved their knowledge in the previous phase. In the proposed TPLPSO (Figure 3.5), only the student particles that fail to improve their knowledge during the teaching phase proceed to the peer-learning phase. This research work considers that when a student particle successfully updates its fitness during the teaching phase, this particle is on the right track in locating the global optimum. Thus, the peer-learning phase is omitted to prevent the intervention of peers on the trajectory of this particle. Finally, in the original TLBO, no unique learning strategy has been developed to guide the teacher during the search process. In contrast, this research work attempts to emulate a brainstorming session for the teacher particle Pg to prevent the entrapment of this particle in the inferior regions of the search space for too long.
To this end, the SPLS is developed to provide extra momentum to the teacher particle when its fitness is not improved for Z successive fitness evaluations (FEs). This strategy aims to increase the exploratory moves of the teacher particle during the search process and is expected to enhance the algorithm's capability in tackling the premature convergence issue.
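The roulette wheel-selection of an exemplar described above can be sketched as follows. The way a minimisation objective is inverted into selection weights is an assumed (but common) implementation detail, since Figure 3.2 is not reproduced here; the thesis only requires that more knowledgeable peers receive higher selection probabilities.

```python
import random

def select_exemplar(obj_values, exclude, rng=random):
    """Roulette-wheel selection of a peer index for a minimisation problem.
    Peers with smaller objective values get proportionally larger weights;
    the learning particle itself (`exclude`) can never be chosen."""
    worst = max(obj_values)
    # Invert the objective so the best peer gets the largest slice; a tiny
    # epsilon keeps even the worst peer selectable with non-zero probability.
    weights = [0.0 if i == exclude else (worst - f) + 1e-12
               for i, f in enumerate(obj_values)]
    pick = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for i, wgt in enumerate(weights):
        acc += wgt
        if pick <= acc:
            return i
    return len(obj_values) - 1            # numerical fallback
```

Over many draws, a peer whose objective value is far below the worst one is selected roughly in proportion to its weight, which is exactly the bias toward "more knowledgeable" peers the peer-learning phase relies on.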


3.3 Simulation Results and Discussions

In this section, the experimental study of the proposed TPLPSO is performed based on the benchmark problems introduced in Section 2.4.1. Specifically, this section begins with the experimental setup used in this study, followed by the parameter sensitivity analyses of TPLPSO. Next, the comparative studies between the proposed TPLPSO and the peer algorithms in solving the employed benchmark problems and the real-world problems are conducted to investigate the effectiveness of the proposed work.

3.3.1 Experimental Setup

Experiments were conducted to compare the proposed TPLPSO with six PSO variants on the 30 benchmark problems in 50 dimensions (50-D). CLPSO, FLPSO-QIW, FIPS, and RPPSO were chosen for comparison because their learning strategies share specific similarities with the peer-learning phase of TPLPSO, i.e., the non-global best particles are employed to guide the search processes of these algorithms. To investigate the effectiveness of the proposed modifications, TPLPSO is also compared with APSO and UPSO because the latter two are well-established PSO variants developed from the parameter adaptation and modified population topology approaches, respectively. The parameter settings for all involved algorithms were extracted from their corresponding literature and are summarized in Table 3.1. Considering that the control parameters of APSO, CLPSO, and FLPSO-QIW were tuned in smaller dimensions (i.e., D = 10 or 30) and in limited problem categories (i.e., conventional and rotated problems) in their original papers, it is worth investigating whether these parameter settings remain optimal in another dimension (i.e., D = 50) and in other problem categories (i.e., shifted, complex, and composition problems). A series of parameter sensitivity analyses was thus performed on APSO, CLPSO, and FLPSO-QIW by using ten 50-D benchmarks with different characteristics, i.e., two conventional problems (F1 and F4), two rotated problems (F11 and F14), two shifted problems (F19 and F21), two complex problems (F23 and F26), and two composition problems (F28 and F30). The procedures used to perform the parameter

Table 3.1 Parameter settings of the involved PSO variants

Algorithm | Population topology | Parameter settings
APSO (Zhan et al., 2009) | Fully-connected | ω: 0.9–0.4; c1 + c2 ∈ [3.0, 4.0]; σ: 1.0–0.1; δ ∈ [0.05, 0.1]
CLPSO (Liang et al., 2006) | Comprehensive learning | ω: 0.9–0.4; c = 2.0; m = 7
FLPSO-QIW (Tang et al., 2011) | Comprehensive learning | ω1 = 0.9; ω2 = 0.2; č1 = č2 = 1.5; ĉ1 = 2.0; ĉ2 = 1.0; m = 1; Pi ∈ [0.1, 1]; K1 = 0.1; K2 = 0.001; σ1 = 1; σ2 = 0
FIPS (Mendes et al., 2004) | Local URing | χ = 0.729; Σci = 4.1
RPPSO (Zhou et al., 2011) | Random | ω: 0.9–0.4; clarge = 6; csmall = 3
UPSO (Parsopoulos and Vrahatis, 2004) | Fully-connected and local ring | χ = 0.729; c1 = c2 = 1.49445; u ∈ [0, 1]
TPLPSO | Fully-connected | ω: 0.9–0.4; c1 = c2 = c = 2.0; Z = 5; R: 1.0–0.1

sensitivity analyses on each of these PSO variants are the same as those provided in their respective original papers. The experimental results are presented in Tables A1 to A5 in Appendix A. From these tables, it is observed that APSO, CLPSO, and FLPSO-QIW exhibit the best search performance in the majority of the tested benchmarks with the parameter settings recommended by their respective authors. In other words, the parameter settings of these PSO variants, as presented in Tables A1 to A5, remain optimal in 50-D and in the new problem categories. Thus, they were used in the following performance comparisons. For TPLPSO, the values of ω, c1, c2, and c are set based on the recommendations in previous studies (Mendes et al., 2004, Liang et al., 2006). Two parameter sensitivity analyses were also conducted in the following subsection to investigate the effects of the parameters Z and R on the search performance of TPLPSO. All PSO variants were independently run 30 times to reduce random discrepancy, and the average results are recorded. The same maximum number of fitness evaluations, FEmax = 3.00E+05, was used to terminate all algorithms to ensure a fair comparison. On the


other hand, the optimal population sizes of the compared algorithms in solving the employed benchmark problems were undetermined because none of these variants has been reported in the corresponding literature to solve 50-D problems. To resolve this issue, the population size (S) used by these PSO variants to solve the 50-D problems was set based on the recommendation of Li et al. (2012), i.e., S = 30.
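The evaluation protocol described above (30 independent runs per algorithm, with the mean error Emean and the standard deviation SD of the final errors recorded) can be expressed as a small harness. The callable interface and per-run seeding below are illustrative assumptions, not the thesis's actual code.

```python
import math

def evaluate(algorithm, runs=30, f_optimum=0.0):
    """Run `algorithm` independently `runs` times and summarize the results.
    `algorithm(seed)` is any callable returning the best objective value found
    within the FE budget; the error of a run is its distance to the known
    optimum `f_optimum`. Returns (Emean, SD) with the sample standard
    deviation (n - 1 denominator)."""
    errors = [abs(algorithm(seed) - f_optimum) for seed in range(runs)]
    e_mean = sum(errors) / runs
    sd = math.sqrt(sum((e - e_mean) ** 2 for e in errors) / (runs - 1))
    return e_mean, sd
```

Plugging any of the compared PSO variants into `algorithm` then yields directly the Emean/SD pairs reported in the result tables.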

3.3.2 Parameter Sensitivity Analysis

As explained earlier, the parameters Z and R of the proposed TPLPSO determine how frequently the perturbation mechanism is triggered and the perturbation range, respectively. In this subsection, a series of parameter sensitivity analyses is performed on ten selected benchmark problems, namely the functions F4, F7, F9, F13, F15, F22, F23, F26, F28, and F30, to answer the following two questions: (1) how do the parameters Z and R influence the search performance of TPLPSO, and (2) how are the parameters Z and R of TPLPSO best set?

3.3.2(a) Effect of the Parameter Z

The effect of the parameter Z on the search performance of TPLPSO was investigated. Specifically, the ten selected benchmarks mentioned earlier were solved by TPLPSO using integer values of Z from 1 to 10. Each Z value was run 30 times, where the R value of TPLPSO was set to decrease linearly with FEs from 1.0 to 0.1. The simulations were performed on three different dimensions, namely 10-D, 30-D, and 50-D, in order to investigate whether the optimal value of the parameter Z varies with the dimensionality of the search space. Tables 3.2, 3.3, and 3.4 illustrate the search accuracy (Emean) exhibited by TPLPSO with different values of Z in 10-D, 30-D, and 50-D, respectively. The best result for each benchmark is indicated in boldface text. The experimental results of functions F4, F7, F9, and F13 are omitted from Tables 3.2 to 3.4 because the search accuracy of TPLPSO is not sensitive to the parameter Z in these tested problems. On the other hand, Tables 3.2 to 3.4 show that the parameter Z can

Table 3.2 Effects of the parameter Z on TPLPSO in 10-D

Value of Z | F15 | F22 | F23 | F26 | F28 | F30
1  | 8.66E-04 | 1.93E-01 | 5.04E-01 | 8.59E-01 | 1.76E+02 | 8.57E+02
2  | 7.43E-04 | 1.53E-01 | 4.89E-01 | 9.03E-01 | 1.48E+02 | 8.48E+02
3  | 6.69E-04 | 1.29E-01 | 4.56E-01 | 6.89E-01 | 1.39E+02 | 8.39E+02
4  | 7.61E-04 | 1.04E-01 | 4.24E-01 | 5.03E-01 | 1.15E+02 | 8.33E+02
5  | 6.23E-04 | 8.24E-02 | 3.97E-01 | 3.34E-01 | 1.21E+02 | 8.28E+02
6  | 8.32E-04 | 9.75E-02 | 3.73E-01 | 4.71E-01 | 1.34E+02 | 8.36E+02
7  | 9.41E-04 | 1.31E-01 | 4.11E-01 | 5.64E-01 | 1.56E+02 | 8.32E+02
8  | 7.59E-04 | 1.26E-01 | 4.53E-01 | 8.36E-01 | 1.74E+02 | 8.41E+02
9  | 9.78E-04 | 1.43E-01 | 4.38E-01 | 9.42E-01 | 1.91E+02 | 8.47E+02
10 | 1.21E-03 | 1.67E-01 | 4.73E-01 | 1.04E+00 | 1.84E+02 | 8.52E+02

Table 3.3 Effects of the parameter Z on TPLPSO in 30-D

Value of Z | F15 | F22 | F23 | F26 | F28 | F30
1  | 4.87E-03 | 9.32E-01 | 1.89E+00 | 6.06E+00 | 2.75E+02 | 9.37E+02
2  | 4.32E-03 | 8.06E-01 | 1.56E+00 | 5.21E+00 | 2.52E+02 | 9.28E+02
3  | 3.76E-03 | 6.28E-01 | 1.23E+00 | 6.73E+00 | 2.29E+02 | 9.23E+02
4  | 3.59E-03 | 5.34E-01 | 9.33E-01 | 3.54E+00 | 2.46E+02 | 9.10E+02
5  | 2.52E-03 | 4.10E-01 | 9.46E-01 | 1.29E+00 | 2.19E+02 | 9.03E+02
6  | 3.65E-03 | 4.21E-01 | 9.97E-01 | 2.67E+00 | 2.35E+02 | 9.17E+02
7  | 4.53E-03 | 6.78E-01 | 1.14E+00 | 5.89E+00 | 2.57E+02 | 9.20E+02
8  | 3.89E-03 | 7.22E-01 | 1.43E+00 | 7.53E+00 | 2.53E+02 | 9.26E+02
9  | 4.45E-03 | 8.53E-01 | 1.35E+00 | 6.92E+00 | 2.69E+02 | 9.31E+02
10 | 5.01E-03 | 7.96E-01 | 1.78E+00 | 9.10E+00 | 2.85E+02 | 9.38E+02

Table 3.4 Effects of the parameter Z on TPLPSO in 50-D

Value of Z | F15 | F22 | F23 | F26 | F28 | F30
1  | 1.93E-02 | 6.08E-01 | 7.06E-03 | 6.78E+00 | 4.41E+02 | 9.56E+02
2  | 1.68E-02 | 6.32E-01 | 7.38E-03 | 5.43E+00 | 4.13E+02 | 9.61E+02
3  | 1.32E-02 | 5.67E-01 | 6.74E-03 | 2.89E+00 | 3.97E+02 | 9.43E+02
4  | 1.43E-02 | 5.01E-01 | 5.09E-03 | 9.53E-01 | 3.82E+02 | 9.38E+02
5  | 1.17E-02 | 3.97E-01 | 4.24E-03 | 1.68E+00 | 3.51E+02 | 9.29E+02
6  | 1.25E-02 | 4.23E-01 | 4.89E-03 | 2.73E+00 | 3.73E+02 | 9.35E+02
7  | 1.46E-02 | 6.69E-01 | 5.61E-03 | 4.87E+00 | 3.96E+02 | 9.45E+02
8  | 1.83E-02 | 5.43E-01 | 7.07E-03 | 6.98E+00 | 4.21E+02 | 9.37E+02
9  | 1.74E-02 | 6.94E-01 | 8.23E-03 | 5.43E+00 | 4.06E+02 | 9.44E+02
10 | 2.01E-02 | 7.23E-01 | 8.89E-03 | 7.32E+00 | 4.38E+02 | 9.59E+02

influence the search accuracy of TPLPSO in functions F15, F22, F23, F26, F28, and F30. Specifically, in these three tested problem categories (i.e., the shifted, complex, and composition problems), the search accuracy of TPLPSO tends to deteriorate when Z is set too high (i.e., Z = 8, 9, 10) or too low (i.e., Z = 1, 2, 3). When Z is set too high, the perturbation mechanism of TPLPSO is not triggered frequently enough. Consequently, the teacher particle Pg does not obtain sufficient diversity and tends to be trapped in a local optimum or an arbitrary point in the search space, which subsequently leads to poor solutions found by the TPLPSO. Conversely, the perturbation mechanism of TPLPSO could be overemphasized

when Z is set too low. In this extreme scenario, the TPLPSO population is oversupplied with diversity, and this oversupply potentially degrades the convergence rate of TPLPSO toward the problem's global optimum. Finally, the results of the parameter sensitivity analysis reveal that TPLPSO solves the tested benchmarks with the best search accuracy at Z = 5 in 10-D, 30-D, and 50-D. Based on these experimental findings, the optimal setting of the parameter Z is invariant with the change in dimensionality and can be used in solving benchmark problems with various types of fitness landscape. This observation suggests that the parameter Z of TPLPSO can be set to 5 in the following performance evaluations.

3.3.2(b) Effect of the Parameter R

To assess the effect of the parameter R on the search accuracy of TPLPSO, six strategies for setting the value of R were tested using three fixed values (0.1, 0.5, and 1.0) and three time-varying values (from 1.0 to 0.5, from 0.5 to 0.1, and from 1.0 to 0.1) (Zhan et al., 2009). All other parameters of TPLPSO were set according to the values in Table 3.1. Tables 3.5, 3.6, and 3.7 present the Emean values of TPLPSO with different values of R in 10-D, 30-D, and 50-D, respectively. Similar to the parameter Z, the search accuracy of TPLPSO is not sensitive to the parameter R in functions F4, F7, F9, and F13; therefore, their simulation results were omitted from Tables 3.5 to 3.7. On the other hand, Tables 3.5 to 3.7 reveal that TPLPSO with time-varying R has superior search accuracy to TPLPSO with fixed R values in solving the shifted (F15 and F22), complex (F23 and F26), and composition (F28 and F30) problems. This result implies that TPLPSO requires a large perturbation range to avoid premature convergence at an early phase and a small perturbation range at a later phase to refine the found optimal solution. Among the three tested time-varying strategies, decreasing R from 1.0 to 0.1 offers the most promising search accuracy to TPLPSO in all tested dimensions. Hence, this parameter setting was used in the following performance evaluation.

Table 3.5 Effects of the parameter R on TPLPSO in 10-D

Value of R      | F15      | F22      | F23      | F26      | F28      | F30
Fixed at 0.1    | 1.15E-03 | 1.10E-01 | 7.45E-01 | 6.16E-01 | 1.73E+02 | 8.44E+02
Fixed at 0.5    | 1.19E-03 | 1.21E-01 | 7.12E-01 | 5.83E-01 | 1.62E+02 | 8.46E+02
Fixed at 1.0    | 1.04E-03 | 1.15E-01 | 6.78E-01 | 6.09E-01 | 1.57E+02 | 8.43E+02
From 1.0 to 0.5 | 7.78E-04 | 9.56E-02 | 4.10E-01 | 3.27E-01 | 1.29E+02 | 8.32E+02
From 0.5 to 0.1 | 7.35E-04 | 9.43E-02 | 4.03E-01 | 3.56E-01 | 1.32E+02 | 8.36E+02
From 1.0 to 0.1 | 6.23E-04 | 8.24E-02 | 3.97E-01 | 3.34E-01 | 1.21E+02 | 8.28E+02

Table 3.6 Effects of the parameter R on TPLPSO in 30-D

Value of R      | F15      | F22      | F23      | F26      | F28      | F30
Fixed at 0.1    | 3.94E-03 | 8.20E-01 | 1.25E+00 | 3.33E+00 | 2.81E+02 | 9.20E+00
Fixed at 0.5    | 4.65E-03 | 8.43E-01 | 1.15E+00 | 3.27E+00 | 2.75E+02 | 9.24E+02
Fixed at 1.0    | 4.01E-03 | 7.67E-01 | 1.04E+00 | 3.54E+00 | 2.63E+02 | 9.19E+02
From 1.0 to 0.5 | 2.69E-03 | 4.75E-01 | 9.83E-01 | 1.42E+00 | 2.21E+02 | 9.07E+02
From 0.5 to 0.1 | 2.86E-03 | 4.53E-01 | 9.37E-01 | 1.38E+00 | 2.28E+02 | 9.10E+02
From 1.0 to 0.1 | 2.52E-03 | 4.10E-01 | 9.46E-01 | 1.29E+00 | 2.19E+02 | 9.03E+02

Table 3.7 Effects of the parameter R on TPLPSO in 50-D

Value of R      | F15      | F22      | F23      | F26      | F28      | F30
Fixed at 0.1    | 2.82E-02 | 9.26E-01 | 7.13E-03 | 3.31E+00 | 4.17E+02 | 9.51E+02
Fixed at 0.5    | 2.96E-02 | 9.33E-01 | 6.79E-03 | 3.16E+00 | 4.13E+02 | 9.58E+02
Fixed at 1.0    | 2.54E-02 | 8.94E-01 | 6.43E-03 | 3.45E+00 | 4.01E+02 | 9.47E+02
From 1.0 to 0.5 | 1.29E-02 | 4.14E-01 | 4.67E-03 | 1.52E+00 | 3.58E+02 | 9.31E+02
From 0.5 to 0.1 | 1.23E-02 | 4.32E-01 | 4.45E-03 | 1.45E+00 | 3.67E+02 | 9.35E+02
From 1.0 to 0.1 | 1.17E-02 | 3.97E-01 | 4.24E-03 | 1.68E+00 | 3.51E+02 | 9.29E+02

3.3.3 Comparison of TPLPSO with Other Well-Established PSO Variants

The experimental results obtained by all involved PSO variants are reported in this subsection. Table 3.8 presents the mean error (Emean), standard deviation (SD), and Wilcoxon test results achieved by the seven algorithms in each tested problem. The SD value denotes the amount of dispersion of the error values in each simulation run from Emean. Meanwhile, the success rate (SR) and success performance (SP) values produced by all involved algorithms are shown in Table 3.12 to compare their reliability and efficiency. Boldface text in the tables indicates the best results. Specifically, the comparison of the Emean values of TPLPSO and its peers is summarized in Table 3.8 as w/t/l and #BME. w/t/l means that TPLPSO outperforms a particular peer in w functions, ties in t functions, and loses in l functions. #BME represents the number of best (i.e., lowest) Emean values achieved by each PSO variant. The Wilcoxon


test result (h) is summarized as +/=/− to indicate the number of functions in which TPLPSO performs significantly better, almost the same, and significantly worse than its contender, respectively. Meanwhile, the SR and SP results presented in Table 3.12 are summarized as #S/#PS/#NS and #BSP, respectively. The former indicates the number of functions that are completely (i.e., SR = 100%), partially (i.e., 0% < SR < 100%), and never (i.e., SR = 0%) solved by a particular PSO variant. A benchmark function is considered solved if the objective function value (ObjV) obtained by a tested algorithm is smaller than the predefined accuracy level ε. The value of #BSP, on the other hand, represents the number of best (i.e., lowest) SP values achieved by the algorithm. The SR and SP values of functions F3, F10, F16, and F24 to F30 are omitted from Table 3.12 because none of the involved algorithms is able to solve these functions within the predefined ε in at least one run.
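The reliability metrics defined above can be computed from raw run results as sketched below. The SR computation and the #S/#PS/#NS labels follow the definitions in the text; the SP formula (mean FEs of successful runs scaled by the inverse success rate) follows the common CEC 2005 benchmark convention and is an assumption here, as the excerpt does not spell it out.

```python
def success_metrics(run_results, eps):
    """Classify an algorithm's runs on one benchmark function.
    `run_results` holds, per run, a (final ObjV, FEs at first success) pair,
    with FEs = None when the run never reached the accuracy level `eps`."""
    runs = len(run_results)
    success_fes = [fes for obj, fes in run_results if obj < eps]
    sr = len(success_fes) / runs              # success rate
    if sr == 1.0:
        label = "S"                           # completely solved
    elif sr > 0.0:
        label = "PS"                          # partially solved
    else:
        label = "NS"                          # never solved
    if success_fes:                           # SP: mean FEs scaled by 1/SR
        sp = (sum(success_fes) / len(success_fes)) * (runs / len(success_fes))
    else:
        sp = float("inf")                     # SP undefined when SR = 0
    return sr, sp, label
```

Counting the "S", "PS", and "NS" labels over all benchmark functions yields the #S/#PS/#NS summary, and counting the lowest SP per function across algorithms yields #BSP.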

3.3.3(a) Comparison of the Mean Error Results

From Table 3.8, the proposed TPLPSO has the best search accuracy because this algorithm outperforms its peers by a large margin in most of the problems. Specifically, TPLPSO achieves 22 best mean error (Emean) values out of the 30 benchmarks used, i.e., 5.5 times more than the second-ranked FLPSO-QIW. For the conventional problems (F1 to F8), the proposed TPLPSO successfully locates the global optima of all tested problems except function F3. More particularly, the TPLPSO is the only algorithm to successfully solve the conventional functions F1, F2, F4, F5, and F7 with mean error values of Emean = 0. The CLPSO and FLPSO-QIW also exhibit promising search accuracy in solving the conventional problems, considering that these PSO variants are able to find the global optima or near-global optima of some tested conventional functions such as F1, F6, F7, and F8. Meanwhile, it is notable that all involved algorithms, including the proposed TPLPSO, experience different levels of performance degradation in solving the rotated


Table 3.8 The Emean, SD, and Wilcoxon test results of TPLPSO and six compared PSO variants for the 50-D benchmark problems

Function | Metric | APSO | CLPSO | FLPSO-QIW | FIPS | RPPSO | UPSO | TPLPSO
F1 | Emean | 2.50E-01 | 3.29E-47 | 2.90E-81 | 2.96E-01 | 1.28E-02 | 8.80E+03 | 0.00E+00
F1 | SD | 1.81E-01 | 1.28E-46 | 5.97E-81 | 8.06E-01 | 2.98E-02 | 1.69E+03 | 0.00E+00
F1 | h | + | + | + | + | + | + | n/a
F2 | Emean | 1.46E+03 | 5.13E+03 | 2.62E+02 | 8.13E+00 | 9.12E+01 | 1.58E+04 | 0.00E+00
F2 | SD | 4.82E+02 | 1.00E+03 | 8.90E+01 | 2.47E+01 | 4.21E+01 | 4.79E+03 | 0.00E+00
F2 | h | + | + | + | + | + | + | n/a
F3 | Emean | 4.62E+01 | 4.35E+01 | 4.22E+01 | 4.77E+01 | 4.76E+01 | 4.30E+02 | 4.35E+01
F3 | SD | 1.53E+00 | 1.83E-01 | 2.39E-01 | 8.44E-01 | 4.30E-01 | 1.29E+02 | 1.23E+00
F3 | h | + | = | - | + | + | + | n/a
F4 | Emean | 5.80E-01 | 9.10E+01 | 2.60E+00 | 1.57E+00 | 9.25E+00 | 2.94E+02 | 0.00E+00
F4 | SD | 6.29E-01 | 1.08E+01 | 1.52E+00 | 3.71E+00 | 1.55E+01 | 3.99E+01 | 0.00E+00
F4 | h | + | + | + | + | + | + | n/a
F5 | Emean | 3.60E-02 | 8.10E+01 | 5.58E+00 | 5.70E-01 | 1.25E+01 | 2.26E+02 | 0.00E+00
F5 | SD | 3.22E-02 | 9.76E+00 | 2.36E+00 | 8.65E-01 | 1.94E+01 | 3.86E+01 | 0.00E+00
F5 | h | + | + | + | + | + | + | n/a
F6 | Emean | 1.70E-01 | 3.39E-11 | 5.75E-04 | 1.93E-01 | 7.08E-03 | 7.49E+01 | 0.00E+00
F6 | SD | 8.21E-02 | 1.73E-10 | 2.21E-03 | 3.47E-01 | 1.85E-02 | 1.93E+01 | 0.00E+00
F6 | h | + | + | + | + | + | + | n/a
F7 | Emean | 6.60E-02 | 1.15E-14 | 3.43E-14 | 1.70E-01 | 7.47E-01 | 1.28E+01 | 0.00E+00
F7 | SD | 2.57E-02 | 2.59E-15 | 1.07E-14 | 3.38E-01 | 9.17E-01 | 1.04E+00 | 0.00E+00
F7 | h | + | + | + | + | + | + | n/a
F8 | Emean | 5.44E-01 | 0.00E+00 | 1.88E-05 | 9.80E-01 | 4.69E-01 | 3.96E+01 | 0.00E+00
F8 | SD | 1.88E-01 | 0.00E+00 | 8.29E-05 | 9.53E-01 | 1.25E+00 | 4.33E+00 | 0.00E+00
F8 | h | + | = | + | + | + | + | n/a
F9 | Emean | 1.26E+03 | 5.77E+03 | 2.62E+02 | 8.45E+00 | 9.09E+01 | 1.82E+04 | 0.00E+00
F9 | SD | 3.22E+02 | 9.90E+02 | 7.62E+01 | 2.24E+01 | 3.77E+01 | 5.66E+03 | 0.00E+00
F9 | h | + | + | + | + | + | + | n/a
F10 | Emean | 5.15E+01 | 4.48E+01 | 4.55E+01 | 4.85E+01 | 5.03E+01 | 4.47E+02 | 3.40E+01
F10 | SD | 1.39E+01 | 2.39E+00 | 3.16E+00 | 5.72E-02 | 8.66E+00 | 1.26E+02 | 2.31E+00
F10 | h | + | + | + | + | + | + | n/a
F11 | Emean | 1.83E+02 | 3.33E+02 | 1.26E+02 | 2.65E+01 | 4.25E+01 | 3.80E+02 | 2.47E+01
F11 | SD | 5.61E+01 | 2.34E+01 | 1.76E+01 | 3.39E+01 | 4.64E+01 | 2.78E+01 | 5.69E+01
F11 | h | + | + | + | = | + | + | n/a
F12 | Emean | 2.59E+02 | 3.21E+02 | 1.28E+02 | 4.15E+01 | 8.00E+01 | 2.90E+02 | 2.26E+01
F12 | SD | 6.15E+01 | 2.52E+01 | 2.13E+01 | 5.13E+01 | 5.30E+01 | 3.51E+01 | 4.36E+01
F12 | h | + | + | + | + | + | + | n/a
F13 | Emean | 2.10E+02 | 1.45E+00 | 1.52E+00 | 1.94E-01 | 3.03E+01 | 2.56E+02 | 0.00E+00
F13 | SD | 1.01E+02 | 4.50E-01 | 5.39E-01 | 4.08E-01 | 5.60E+01 | 7.78E+01 | 0.00E+00
F13 | h | + | + | + | + | + | + | n/a
F14 | Emean | 6.32E+01 | 5.65E+01 | 4.86E+01 | 5.35E+01 | 2.22E+01 | 6.41E+01 | 4.43E+01
F14 | SD | 4.24E+00 | 2.42E+00 | 3.40E+00 | 4.38E+00 | 1.80E+01 | 3.71E+00 | 1.38E+01
F14 | h | + | + | = | + | - | + | n/a
F15 | Emean | 2.27E-01 | 5.68E-14 | 1.44E-13 | 6.20E+00 | 1.44E-01 | 7.18E+04 | 1.17E-02
F15 | SD | 9.70E-02 | 0.00E+00 | 4.15E-14 | 3.90E+00 | 5.22E-01 | 1.38E+04 | 4.69E-03
F15 | h | + | - | - | + | = | + | n/a
F16 | Emean | 1.08E+03 | 6.23E+03 | 6.97E+02 | 1.90E+03 | 1.01E+03 | 1.06E+05 | 3.09E+02
F16 | SD | 5.18E+02 | 1.26E+03 | 1.66E+02 | 5.54E+02 | 1.45E+03 | 2.13E+04 | 1.24E+02
F16 | h | + | + | + | + | + | + | n/a
F17 | Emean | 1.97E+03 | 7.80E+01 | 1.05E+02 | 9.08E+03 | 3.58E+03 | 2.12E+10 | 6.99E+01
F17 | SD | 3.83E+03 | 4.23E+01 | 4.86E+01 | 4.88E+03 | 4.93E+03 | 5.20E+09 | 7.76E+01
F17 | h | + | + | + | + | + | + | n/a
F18 | Emean | 5.92E-01 | 6.85E+01 | 5.88E+00 | 1.31E+02 | 1.62E+02 | 5.13E+02 | 6.27E-03
F18 | SD | 7.76E-01 | 1.01E+01 | 2.51E+00 | 2.93E+01 | 4.08E+01 | 5.41E+01 | 3.05E-03
F18 | h | + | + | + | + | + | + | n/a
F19 | Emean | 7.20E-03 | 6.99E+01 | 1.20E+01 | 1.48E+02 | 2.09E+02 | 4.32E+02 | 5.23E-03
F19 | SD | 1.06E-02 | 7.32E+00 | 3.16E+00 | 3.94E+01 | 5.21E+01 | 6.15E+01 | 2.36E-03
F19 | h | = | + | + | + | + | + | n/a
F20 | Emean | 0.00E+00 | 4.24E-09 | 2.05E-03 | 0.00E+00 | 0.00E+00 | 7.21E+01 | 0.00E+00
F20 | SD | 0.00E+00 | 1.35E-08 | 3.49E-03 | 0.00E+00 | 0.00E+00 | 1.75E+02 | 0.00E+00
F20 | h | = | + | + | = | = | + | n/a

Table 3.8 (Continued)

Function | Metric | APSO | CLPSO | FLPSO-QIW | FIPS | RPPSO | UPSO | TPLPSO
F21 | Emean | 6.04E-02 | 4.52E-08 | 2.19E-13 | 4.47E+00 | 1.50E+01 | 1.98E+01 | 1.99E-02
F21 | SD | 1.71E-02 | 2.47E-07 | 5.74E-14 | 8.26E-01 | 6.17E-01 | 5.01E-01 | 4.63E-03
F21 | h | + | - | - | + | + | + | n/a
F22 | Emean | 8.79E-01 | 2.96E-02 | 3.87E-02 | 2.00E+01 | 4.49E+01 | 5.99E+01 | 3.97E-01
F22 | SD | 5.86E-01 | 1.06E-01 | 7.38E-02 | 3.86E+00 | 4.16E+00 | 3.93E+00 | 1.45E-01
F22 | h | + | - | - | + | + | + | n/a
F23 | Emean | 1.49E+00 | 2.95E-02 | 6.02E-03 | 4.81E+00 | 6.60E-01 | 2.61E+03 | 4.24E-03
F23 | SD | 1.47E-01 | 5.28E-02 | 1.04E-02 | 1.78E+00 | 2.61E-01 | 5.99E+02 | 7.86E-03
F23 | h | + | + | + | + | + | + | n/a
F24 | Emean | 2.07E+01 | 2.11E+01 | 2.11E+01 | 2.12E+01 | 2.11E+01 | 2.11E+01 | 2.09E+01
F24 | SD | 1.65E-01 | 4.40E-02 | 4.16E-02 | 5.03E-02 | 3.53E-02 | 4.98E-02 | 7.86E-02
F24 | h | - | + | + | + | + | + | n/a
F25 | Emean | 1.32E+07 | 5.19E+07 | 1.89E+07 | 1.02E+07 | 1.49E+07 | 3.89E+08 | 3.55E+06
F25 | SD | 4.09E+06 | 8.32E+06 | 4.92E+06 | 3.44E+06 | 1.13E+07 | 1.18E+08 | 1.15E+06
F25 | h | + | + | + | + | + | + | n/a
F26 | Emean | 4.13E+00 | 2.03E+01 | 4.01E+00 | 2.76E+01 | 4.79E+01 | 2.23E+03 | 1.68E+00
F26 | SD | 1.19E+00 | 1.65E+00 | 1.46E+00 | 5.79E+00 | 2.19E+01 | 1.60E+03 | 3.94E-01
F26 | h | + | + | + | + | + | + | n/a
F27 | Emean | 4.60E+02 | 2.62E+02 | 1.78E+02 | 1.88E+02 | 2.19E+02 | 5.10E+02 | 2.91E+02
F27 | SD | 7.79E+01 | 6.70E+01 | 1.39E+02 | 6.55E+01 | 1.16E+02 | 8.08E+01 | 1.67E+02
F27 | h | + | = | - | - | - | + | n/a
F28 | Emean | 5.13E+02 | 2.91E+02 | 1.79E+02 | 2.31E+02 | 2.47E+02 | 6.49E+02 | 3.51E+02
F28 | SD | 8.95E+01 | 6.57E+01 | 8.40E+01 | 8.36E+01 | 1.08E+02 | 1.19E+02 | 1.80E+02
F28 | h | + | = | - | - | = | + | n/a
F29 | Emean | 1.10E+03 | 9.31E+02 | 9.22E+02 | 9.53E+02 | 9.60E+02 | 1.19E+03 | 9.20E+02
F29 | SD | 9.90E+01 | 1.84E+01 | 3.40E+01 | 1.26E+01 | 2.92E+01 | 5.01E+01 | 1.13E+01
F29 | h | + | + | + | + | + | + | n/a
F30 | Emean | 1.08E+03 | 9.34E+02 | 9.34E+02 | 9.63E+02 | 9.56E+02 | 1.18E+03 | 9.29E+02
F30 | SD | 9.29E+01 | 5.53E+00 | 1.05E+01 | 2.54E+01 | 2.74E+01 | 6.42E+01 | 8.58E+00
F30 | h | + | + | + | + | + | + | n/a
#BME | | 2 | 2 | 4 | 1 | 2 | 0 | 22
w/t/l | | 28/1/1 | 23/2/5 | 24/0/6 | 27/1/2 | 27/1/3 | 30/0/0 | n/a
+/=/- | | 27/2/1 | 23/4/3 | 23/1/6 | 26/2/2 | 25/3/2 | 30/0/0 | n/a

problems (F9 to F14) than in the conventional counterparts. For example, although the CLPSO and TPLPSO successfully find the global optimum of the conventional Weierstrass function (F8), neither of these algorithms is able to solve the rotated Weierstrass function (F14) with the same accuracy (i.e., Emean = 0). This implies that the rotated fitness landscapes are indeed more challenging and thus impose greater difficulties on the algorithms in locating their global optima. Among the tested algorithms, the proposed TPLPSO is the least susceptible to the rotation operation, considering that it is the only algorithm able to find the global optima of some rotated functions such as F9 and F13. Although the proposed TPLPSO is unable to locate the global optima of functions F10, F11, and F12, the Emean values produced by TPLPSO in these functions are better than those of the peer algorithms. These observations further


justify the better capability of the TPLPSO in dealing with the rotated search spaces as compared to its peers. Similar performance degradations of the tested algorithms could be observed in the shifted problems (F15 to F22), considering that none of them are able to find the global optima of most shifted problems. It is also observed that most tested algorithms have better capability in dealing with the shifted fitness landscapes than the rotated ones, given that the Emean values obtained by these algorithms in the former problems are generally lower (i.e., better search accuracy) than those in the latter problems. Among the eight shifted problems, the shifted Griewank function (F20) is identified as the easiest shifted function because many tested algorithms such as the APSO, CLPSO, FIPS, RPPSO, and TPLPSO produce promising Emean values in solving this function. It is also noteworthy that the proposed TPLPSO exhibits the best robustness towards the shifting operation, considering that it produces five best Emean and three third best Emean values in eight shifted problems. Particularly, the TPLPSO is the only algorithm to successfully obtain the Emean values with an accuracy of 10-3 in the functions F18 and F19. Apart from the proposed TPLPSO, the search accuracies of CLPSO and FLPSO-QIW in the shifted problems are relatively promising because these algorithms are able to find the near-global optima of some shifted problems such as functions F15, F20, and F21. In the following complex (F23 to F26) and composition (F27 to F30) problems, all PSO variants suffer from further performance deterioration, considering that all involved algorithms neither find the global optima nor the near-global optima of these functions. 
The inferior search accuracies of the tested algorithms in solving these problems could be attributed to the fact that the inclusion of both rotating and shifting operations (F23 to F25), the expanded operation (F26), and the composition mechanism (F27 to F30) significantly increases the problems' complexities. Consequently, these additional mechanisms impose even greater challenges on the algorithms in searching for the global optima of these modified problems. Among the seven tested algorithms, TPLPSO is identified as the least vulnerable to these additional mechanisms because it produces five best Emean values in eight

of these problems. Although the search accuracy of TPLPSO in function F24 is slightly inferior to that of APSO, the margin by which the latter outperforms the former is relatively insignificant. Another notable observation is that the Emean values produced by TPLPSO in solving the composition functions F27 and F28 are inferior to those of several peers. This observation suggests that there is room for improvement for the proposed TPLPSO in solving problems with complicated fitness landscapes. Based on the experimental results reported in Table 3.8, it can be concluded that the proposed TPLPSO in general exhibits better search accuracy than the compared peer algorithms. Moreover, the promising #BME and w/t/l values obtained by the TPLPSO against its peers in each problem category imply that the proposed algorithm is relatively robust towards modifications made to the fitness landscapes of the problems. Therefore, the proposed TPLPSO has a better capability of tackling problems with various types of fitness landscape than its peer algorithms.

3.3.3(b) Comparison of the Non-Parametric Statistical Test Results

In this subsection, non-parametric statistical tests are performed to thoroughly investigate whether the search accuracy of the proposed TPLPSO is better than, statistically indistinguishable from, or worse than that of the other six compared peers. Specifically, the pairwise comparison results between TPLPSO and its peers via the Wilcoxon test are summarized in Tables 3.8 and 3.9. Table 3.8 describes the pairwise comparison results between the proposed TPLPSO and its peers in each benchmark by using h values, whereas Table 3.9 reports the sums of ranks over which TPLPSO outperforms (R+) and underperforms (R−) the compared methods, as well as the associated p-value. To recap, the p-value represents the minimum level of significance for detecting the performance differences

Table 3.9 Wilcoxon test for the comparison of TPLPSO and six other PSO variants

| TPLPSO vs. | R+    | R−   | p-value  |
|------------|-------|------|----------|
| APSO       | 429.0 | 6.0  | 5.22E-08 |
| CLPSO      | 399.0 | 66.0 | 3.13E-04 |
| FLPSO-QIW  | 378.0 | 87.0 | 2.02E-03 |
| FIPS       | 388.0 | 47.0 | 7.73E-05 |
| RPPSO      | 376.0 | 59.0 | 2.99E-04 |
| UPSO       | 465.0 | 0.0  | 1.86E-09 |
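The R+ and R− columns of Table 3.9 are rank sums from the Wilcoxon signed-rank test. A minimal sketch of how such sums can be obtained is given below, using fabricated error vectors (the per-run data behind the thesis tables are not reproduced here); zero differences are handled by splitting their ranks evenly between both sums, one common convention, whereas statistical packages may instead discard them.

```python
# Sketch of the R+ / R- rank sums for a pairwise Wilcoxon signed-rank test.
# The error vectors in the usage example are fabricated for illustration.

def wilcoxon_rank_sums(errors_a, errors_b):
    """Rank sums (R_plus, R_minus) over problems where A beats B and vice versa."""
    diffs = [b - a for a, b in zip(errors_a, errors_b)]  # > 0 means A has lower error
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1                                  # group tied absolute differences
        avg_rank = (i + j) / 2 + 1                  # average 1-based rank of the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    r_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    zero_share = sum(r for r, d in zip(ranks, diffs) if d == 0) / 2
    return r_plus + zero_share, r_minus + zero_share

# Toy example: algorithm A attains lower error on three of four problems.
a = [0.1, 0.0, 2.0, 0.5]
b = [0.4, 1.0, 1.5, 3.0]
r_plus, r_minus = wilcoxon_rank_sums(a, b)          # (8.0, 2.0)
```

With 30 problems and no discarded ties, R+ + R− = 30 × 31 / 2 = 465, which is consistent with several rows of Table 3.9; the rows summing to 435 suggest that ties were discarded there instead.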

Table 3.10 Average rankings and the associated p-value obtained by the TPLPSO and six other PSO variants via the Friedman test

| Algorithm  | Average ranking |
|------------|-----------------|
| TPLPSO     | 1.75            |
| FLPSO-QIW  | 2.90            |
| CLPSO      | 3.68            |
| FIPS       | 4.20            |
| RPPSO      | 4.28            |
| APSO       | 4.30            |
| UPSO       | 6.88            |

Chi-square statistic = 95.76; p-value = 0.00E+00

between the TPLPSO and the compared algorithms. From Table 3.8, it is observed that the h values obtained from the Wilcoxon test in each tested problem are generally consistent with the previously reported Emean values. The number of problems in which TPLPSO significantly outperforms its peers is much larger than the number of problems where the former is significantly worse than the latter. Table 3.9 confirms the significant improvement of TPLPSO over the six compared peers in independent pairwise comparisons because all p-values obtained from the Wilcoxon test in Table 3.9 are less than α = 0.05. In addition to the pairwise comparisons, multiple comparisons (García et al., 2009, Derrac et al., 2011) are also employed to evaluate the effectiveness of TPLPSO comprehensively. To perform the multiple comparisons, the average rankings of the compared algorithms and the associated p-value (derived from the chi-square statistic) are first computed via the Friedman test. The computed values are summarized in Table 3.10, where an algorithm with better search accuracy is assigned a better (smaller) rank value. From Table 3.10, the involved PSO variants are ranked by the Friedman test based on their search accuracy as follows: TPLPSO, FLPSO-QIW, CLPSO, FIPS, RPPSO, APSO, and UPSO. The proposed TPLPSO emerges as the best-performing algorithm with an average rank of 1.75. Moreover, the p-value (i.e., 0.00E+00) computed from the Friedman test statistic is smaller than the level of significance considered in this research work (i.e., α = 0.05). This observation strongly suggests that a significant global difference exists among the compared algorithms.
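The Friedman average-ranking step behind Table 3.10 can be sketched as below; the Emean values in the example are fabricated for illustration, and ties are ignored for brevity (the real test assigns averaged ranks to ties).

```python
# Minimal sketch of the Friedman average-ranking step: each algorithm is
# ranked on every problem (1 = best, i.e., lowest Emean) and the ranks are
# averaged over all problems.

def friedman_average_ranks(emeans):
    """emeans: {algorithm: [Emean per problem]} -> {algorithm: average rank}."""
    algos = list(emeans)
    n_problems = len(next(iter(emeans.values())))
    totals = dict.fromkeys(algos, 0.0)
    for p in range(n_problems):
        for pos, algo in enumerate(sorted(algos, key=lambda a: emeans[a][p]), start=1):
            totals[algo] += pos
    return {a: totals[a] / n_problems for a in algos}

# The Friedman chi-square statistic then follows from the rank totals R_j over
# N problems and k algorithms: chi2 = 12/(N*k*(k+1)) * sum(R_j**2) - 3*N*(k+1).
ranks = friedman_average_ranks({
    "ALG-A": [0.0, 1e-3, 2.0],   # best on every problem -> average rank 1.0
    "ALG-B": [0.5, 1e-2, 3.0],
    "ALG-C": [0.9, 1e-1, 9.0],
})
```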

91

Table 3.11 Adjusted p-values obtained by comparing the TPLPSO with six other PSO variants using the Bonferroni-Dunn, Holm, and Hochberg procedures

| TPLPSO vs.        | UPSO     | APSO     | RPPSO    | FIPS     | CLPSO    | FLPSO-QIW |
|-------------------|----------|----------|----------|----------|----------|-----------|
| z                 | 9.20E+00 | 4.57E+00 | 4.54E+00 | 4.39E+00 | 3.47E+00 | 2.06E+00  |
| Unadjusted p      | 0.00E+00 | 5.00E-06 | 6.00E-06 | 1.10E-05 | 5.28E-04 | 3.92E-02  |
| Bonferroni-Dunn p | 0.00E+00 | 2.90E-05 | 3.30E-05 | 6.70E-05 | 3.17E-03 | 2.35E-01  |
| Holm p            | 0.00E+00 | 2.40E-05 | 2.40E-05 | 3.40E-05 | 1.06E-03 | 3.92E-02  |
| Hochberg p        | 0.00E+00 | 2.20E-05 | 2.20E-05 | 3.40E-05 | 1.06E-03 | 3.92E-02  |

Considering that the Friedman test successfully detects a significant global difference among the tested algorithms, a set of post-hoc statistical analyses (García et al., 2009, Derrac et al., 2011), namely the Bonferroni-Dunn, Holm, and Hochberg tests, is subsequently utilized to detect concrete differences for the control algorithm (i.e., TPLPSO). The associated z values, unadjusted p-values, and adjusted p-values (APVs) obtained from the aforementioned post-hoc procedures are listed in Table 3.11. The z value is the standardized test statistic used to decide whether to reject the null hypotheses derived from the multiple comparisons, whereas the p-values (i.e., unadjusted and adjusted) denote the probabilities of falsely rejecting the null hypotheses. Unlike the unadjusted p-values, the computation of the APVs accounts for the accumulated family error, and the APVs are thus more appropriate for conducting multiple comparisons without disregarding the Family-Wise Error Rate (FWER) (García et al., 2009, Derrac et al., 2011). Table 3.11 reveals that, at a significance level of α = 0.05, all post-hoc procedures confirm the significant improvement of TPLPSO over the UPSO, APSO, RPPSO, FIPS, and CLPSO, considering that all APVs produced by the three selected post-hoc tests are smaller than α. Meanwhile, unlike the Bonferroni-Dunn test, both the Holm and Hochberg tests reveal that the search accuracy exhibited by the TPLPSO in solving the 30 tested problems significantly outperforms that of the FLPSO-QIW. These observations also show that the Holm and Hochberg tests are more powerful than the Bonferroni-Dunn test in terms of detecting the performance differences between the compared algorithms.
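The three APV procedures of Table 3.11 can be sketched as follows; each function takes the k unadjusted p-values of the pairwise hypotheses against the control algorithm, and the p-values used in the test example are illustrative only.

```python
# Sketches of the Bonferroni-Dunn, Holm (step-down), and Hochberg (step-up)
# adjusted p-value procedures for k hypotheses against a control algorithm.

def bonferroni_dunn(pvals):
    k = len(pvals)
    return [min(1.0, k * p) for p in pvals]

def holm(pvals):
    """Step-down: running maximum of (k - rank) * p over ascending p-values."""
    k = len(pvals)
    adjusted = [0.0] * k
    running_max = 0.0
    for rank, i in enumerate(sorted(range(k), key=lambda i: pvals[i])):
        running_max = max(running_max, (k - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

def hochberg(pvals):
    """Step-up: running minimum of (rank + 1) * p over descending p-values."""
    k = len(pvals)
    adjusted = [0.0] * k
    running_min = 1.0
    for rank, i in enumerate(sorted(range(k), key=lambda i: pvals[i], reverse=True)):
        running_min = min(running_min, (rank + 1) * pvals[i])
        adjusted[i] = running_min
    return adjusted
```

By construction, the Hochberg APVs never exceed the Holm ones, which in turn never exceed the Bonferroni-Dunn ones; this ordering is visible across the last three rows of Table 3.11 and explains the differing power of the three tests.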

92

Table 3.12 The SR (%) and SP values (each cell: SR / SP) of TPLPSO and six compared PSO variants for the 50-D benchmark problems

| F          | APSO             | CLPSO             | FLPSO-QIW         | FIPS             | RPPSO            | UPSO            | TPLPSO            |
|------------|------------------|-------------------|-------------------|------------------|------------------|-----------------|-------------------|
| F1         | 0.00 / Inf       | 100.00 / 1.25E+05 | 100.00 / 6.04E+04 | 80.00 / 9.86E+04 | 73.33 / 1.52E+04 | 0.00 / Inf      | 100.00 / 6.65E+02 |
| F2         | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 70.00 / 1.62E+05 | 0.00 / Inf       | 0.00 / Inf      | 100.00 / 1.13E+05 |
| F4         | 0.00 / Inf       | 0.00 / Inf        | 6.67 / 3.46E+06   | 40.00 / 3.56E+05 | 70.00 / 4.19E+04 | 0.00 / Inf      | 100.00 / 1.32E+03 |
| F5         | 6.67 / 2.60E+06  | 0.00 / Inf        | 0.00 / Inf        | 33.33 / 2.66E+05 | 60.00 / 2.50E+04 | 0.00 / Inf      | 100.00 / 2.02E+03 |
| F6         | 0.00 / Inf       | 100.00 / 1.11E+05 | 100.00 / 5.00E+04 | 70.00 / 1.05E+05 | 86.67 / 1.34E+04 | 0.00 / Inf      | 100.00 / 7.03E+02 |
| F7         | 0.00 / Inf       | 100.00 / 1.05E+05 | 100.00 / 4.79E+04 | 53.33 / 1.07E+05 | 36.67 / 1.25E+05 | 0.00 / Inf      | 100.00 / 8.34E+03 |
| F8         | 0.00 / Inf       | 100.00 / 1.42E+05 | 100.00 / 6.67E+04 | 16.67 / 4.59E+05 | 83.33 / 3.15E+04 | 0.00 / Inf      | 100.00 / 7.83E+02 |
| F9         | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 73.33 / 1.62E+05 | 3.33 / 1.69E+06  | 0.00 / Inf      | 100.00 / 1.13E+05 |
| F11        | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 53.33 / 2.60E+05 | 40.00 / 3.95E+04 | 0.00 / Inf      | 80.00 / 3.16E+03  |
| F12        | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 43.33 / 2.84E+05 | 20.00 / 9.26E+04 | 0.00 / Inf      | 76.67 / 5.60E+03  |
| F13        | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 80.00 / 8.72E+04 | 53.33 / 2.15E+04 | 0.00 / Inf      | 100.00 / 6.37E+02 |
| F14        | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf       | 13.33 / 7.14E+05 | 0.00 / Inf      | 6.67 / 8.03E+04   |
| F15        | 0.00 / Inf       | 100.00 / 1.16E+05 | 100.00 / 5.85E+04 | 0.00 / Inf       | 0.00 / Inf       | 0.00 / Inf      | 0.00 / Inf        |
| F17        | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf       | 0.00 / Inf       | 0.00 / Inf      | 3.33 / 8.84E+06   |
| F18        | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf       | 0.00 / Inf       | 0.00 / Inf      | 83.33 / 3.41E+05  |
| F19        | 86.67 / 2.51E+05 | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf       | 0.00 / Inf       | 0.00 / Inf      | 93.33 / 3.00E+05  |
| F20        | 100.00 / 1.20E+04 | 100.00 / 7.44E+04 | 100.00 / 4.88E+04 | 100.00 / 2.47E+03 | 100.00 / 1.01E+03 | 80.00 / 3.78E+03 | 100.00 / 1.12E+03 |
| F21        | 0.00 / Inf       | 100.00 / 1.33E+05 | 100.00 / 4.72E+04 | 0.00 / Inf       | 0.00 / Inf       | 0.00 / Inf      | 0.00 / Inf        |
| F22        | 0.00 / Inf       | 83.33 / 2.27E+05  | 70.00 / 1.03E+05  | 0.00 / Inf       | 0.00 / Inf       | 0.00 / Inf      | 0.00 / Inf        |
| F23        | 0.00 / Inf       | 56.67 / 4.50E+05  | 83.33 / 2.04E+05  | 0.00 / Inf       | 0.00 / Inf       | 0.00 / Inf      | 86.67 / 2.94E+05  |
| #S/#PS/#NS | 1/2/27           | 7/2/21            | 7/3/20            | 1/11/18          | 1/11/18          | 0/1/29          | 10/7/13           |
| BSP        | 0                | 0                 | 3                 | 0                | 1                | 0               | 15                |

3.3.3(c) Comparison of the Success Rate Results

The success rate (SR) analysis reported in Table 3.12 reveals that the proposed TPLPSO has the most impressive search reliability among the compared peers. Specifically, the TPLPSO is able to completely solve 10 (out of 30) of the employed benchmarks with SR = 100%, i.e., 1.43 times as many as the second-ranked CLPSO and FLPSO-QIW.
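The SR metric can be expressed directly in code; the run errors and accuracy level used in this sketch are illustrative values, not thesis data.

```python
# The SR metric in executable form: the percentage of independent runs whose
# final error reaches the predefined accuracy level epsilon.

def success_rate(final_errors, epsilon):
    """SR (%) = share of runs whose end-of-run error is within epsilon."""
    hits = sum(1 for e in final_errors if e <= epsilon)
    return 100.0 * hits / len(final_errors)

sr = success_rate([1e-9, 3e-7, 5e-1], epsilon=1e-6)   # two of three runs succeed
```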


From Table 3.12, it is observed that TPLPSO successfully solves seven (out of eight) conventional problems completely (i.e., SR = 100%) at the predefined accuracy level ε. Specifically, the proposed TPLPSO is the only algorithm that is able to completely solve the functions F2, F4, and F5. The conventional functions F1 and F6 to F8 are relatively easier to solve, considering that the majority of the tested algorithms are able to achieve promising SR values in these functions. For example, the CLPSO and FLPSO-QIW are able to solve the functions F1 and F6 to F8 with SR = 100%. On the other hand, the UPSO is identified as the worst optimizer for the conventional problems because this algorithm fails to solve any of the tested problems completely or partially. In other words, the SR values obtained by the UPSO in these conventional problems are 0.00%. Similar to the previously reported Emean values, it is also observed that the search reliabilities of most tested algorithms are jeopardized when they are employed to solve the modified benchmark functions with more complicated fitness landscapes. Table 3.12 reveals that the SR values produced by most compared peers (i.e., APSO, CLPSO, FLPSO-QIW, and UPSO) in the rotated problems are equal to 0.00%, implying that these compared algorithms are unable to completely or partially solve the rotated problems. On the other hand, both the FIPS and RPPSO are reported to have better search reliabilities than the aforementioned four PSO variants because the former two are able to partially solve most rotated problems (i.e., functions F9, F11, F12, and F13) with acceptable SR values (i.e., 0% < SR < 100%). Among the seven tested algorithms, the proposed TPLPSO is identified as the best optimizer for the rotated problems because it is the only algorithm that successfully solves two (out of six) rotated problems (i.e., functions F9 and F13) with SR values of 100.00%.
Additionally, the TPLPSO also exhibits the most robust search reliability in solving the rotated functions F11 and F12, producing the best SR values of 80.00% and 76.67%, respectively. Meanwhile, Table 3.12 reveals similar performance deteriorations of the tested algorithms, in terms of search reliability, in solving the shifted problems. In contrast to the previous experimental findings observed for the rotated problems, both the FIPS and

RPPSO are reported to have worse search reliabilities than the CLPSO and FLPSO-QIW in tackling the shifted problems. Meanwhile, the search reliability of the proposed TPLPSO in the shifted problems is promising because it solves four (out of eight) shifted problems with the best SR values. Specifically, TPLPSO is the only optimizer that partially solves the shifted functions F17 and F18, with SR values of 3.33% and 83.33%, respectively. Finally, the search reliabilities exhibited by the CLPSO and FLPSO-QIW in dealing with the shifted problems are also competitive because these algorithms are able to completely or partially solve four (out of eight) tested problems with good SR values. For the complex and composition problems, most of the algorithms' search reliabilities are further compromised by the challenging fitness landscapes of the tested problems. Specifically, none of the involved algorithms can solve these two problem categories completely or partially, except for function F23, where TPLPSO achieves the best SR value of 86.67%. For the remaining complex and composition problems, the proposed TPLPSO nevertheless proves better than its peers by achieving better Emean values in these problems, as reported in Table 3.8. Based on the SR analysis, it can be concluded that the proposed TPLPSO exhibits excellent search reliability in tackling the simpler problems (e.g., conventional and rotated problems) by producing the expected SR values. However, the performance of TPLPSO in the shifted, complex, and composition problems is not consistent, considering that this algorithm fails to solve some tested benchmarks in at least one simulation run. For instance, the SR values of TPLPSO in functions F15, F21, and F22 are 0.00%, as shown in Table 3.12.

3.3.3(d) Comparison of the Success Performance Results

As explained in Section 2.5.3, the success performance (SP) metric is used to quantitatively evaluate the computation cost required by a tested algorithm to solve a given problem within the predefined ε (Suganthan et al., 2005). Nevertheless, as illustrated in Table 3.12, some tested algorithms are unable to completely or partially solve particular problems (i.e., SR = 0.00%). In such a case, the SP value is set as an infinite value (“Inf”), and only the

Figure 3.6: Convergence curves of 50-D problems: (a) F2, (b) F7, (c) F12, (d) F13, (e) F17, and (f) F18.

convergence curves as shown in Figure 3.6 are used to qualitatively evaluate the algorithm’s convergence speed. In this subsection, a total of ten representative convergence curves, i.e.,


Figure 3.6 (Continued): Convergence curves of 50-D problems: (g) F23, (h) F26, (i) F28, and (j) F30.

two from each of the conventional (F2 and F7), rotated (F12 and F13), shifted (F17 and F18), complex (F23 and F26), and composition (F28 and F30) problems, are presented. As reported in Table 3.12, the proposed TPLPSO exhibits the best search efficiency in all conventional and rotated problems. Specifically, the TPLPSO achieves seven best SP values in eight conventional problems and five best SP values in six rotated problems. This implies that TPLPSO requires the least computation cost to solve most conventional and rotated problems within a predefined ε. The outstanding convergence characteristics of TPLPSO in the conventional and rotated problems are validated by Figures 3.6(a) to 3.6(d). It is notable that, except for functions F3, F10, F11, F12, and F14, the convergence curves of TPLPSO in most conventional and rotated problems exhibit a typical feature, i.e., a curve


that sharply drops at one point, usually at the early stage [functions F7 and F13, as illustrated by Figures 3.6(b) and 3.6(d)] or at the middle stage [function F2, as illustrated by Figure 3.6(a)]. These observations prove the capability of TPLPSO to break out of local optima and its ability to locate the global optima while consuming a significantly small number of fitness evaluations (FEs). Meanwhile, the convergence curves of the remaining problems (i.e., functions F3, F10, F11, F12, and F14) are similar to that illustrated in Figure 3.6(d). This observation further confirms the promising convergence characteristic of TPLPSO over its peers in solving the conventional and rotated problems, despite the absence of SP values in some tested problems (i.e., functions F3 and F10). Table 3.12 reveals that both the FLPSO-QIW and TPLPSO exhibit the best search efficiencies in solving the shifted problems, considering that these two algorithms each achieve three best SP values out of eight tested problems. Specifically, FLPSO-QIW obtains the best SP values in functions F15, F21, and F22, whereas TPLPSO solves the functions F17, F18, and F19 with the best search efficiency. The promising convergence speeds exhibited by the TPLPSO over its peers in solving the shifted problems are illustrated in Figures 3.6(e) and 3.6(f). Another notable finding worth highlighting from Table 3.12 is that TPLPSO consumes a larger number of fitness evaluations (FEs) in solving the shifted problems within the predefined ε than in solving the rotated problems. For instance, the SP values obtained by the TPLPSO in solving the rotated Rastrigin function (F11) and the shifted Rastrigin function (F18) are 3.16E+03 and 3.41E+05, respectively. This observation implies that the task of locating the shifted global optima in the search space is more challenging and time-consuming than locating the non-shifted global optima.
Therefore, more computation costs (i.e., FEs) are incurred when the proposed TPLPSO is employed to solve the shifted problems within the predefined ε. Finally, for most complex and composition problems, no SP values are available for comparison, except for function F23, where the proposed TPLPSO achieves the second-best SP value. The convergence curves of functions F23, F26, and F30 [represented by Figures 3.6(g), 3.6(h), and 3.6(j), respectively] show that the convergence speeds of TPLPSO in

the majority of the complex and composition problems are competitive with those of its peers. As illustrated in these convergence curves, TPLPSO tends to exhibit faster convergence than the other algorithms in the early or middle stages of the search process. This promising convergence characteristic enables the TPLPSO to locate and exploit the optimal regions of the search space earlier than its peers. Thus, the TPLPSO has a better opportunity than the other algorithms to achieve better solutions in solving the complex and composition problems.
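The SP entries of Table 3.12, including the "Inf" cells, can be computed as sketched below, assuming the common CEC2005-style definition SP = mean FEs of successful runs × (total runs / successful runs); the exact formula used in this work is the one given in Section 2.5.3, and the FE counts in the example are illustrative.

```python
# Sketch of the SP metric with the "Inf" convention for unsolved problems.

def success_performance(fes_of_successful_runs, total_runs):
    """SP = mean FEs of successful runs x (total runs / successful runs).

    Returns float('inf') when no run succeeds, matching the "Inf" entries.
    """
    n_success = len(fes_of_successful_runs)
    if n_success == 0:
        return float("inf")
    mean_fes = sum(fes_of_successful_runs) / n_success
    return mean_fes * total_runs / n_success

sp = success_performance([1000, 2000, 3000], total_runs=30)   # 20000.0
```

The scaling by total runs / successful runs penalizes unreliable algorithms: the fewer runs succeed, the larger the effective cost per success.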

3.3.3(e) Comparison of the Algorithm Complexity Results

In this section, the computational complexities of the seven involved algorithms are quantitatively evaluated by using the algorithm complexity (AC) value. The results are presented in Table 3.13. Specifically, T0, T1, and T̂2 denote the computing times required to perform Steps 1 to 3, respectively, of the procedure illustrated in Figure 2.11. These values are measured in CPU seconds. Table 3.13 shows that the proposed TPLPSO has the least computational complexity at D = 50, producing the smallest AC value. Moreover, it is worth mentioning that the AC value recorded by the TPLPSO is in a similar range to those of FIPS, RPPSO, and UPSO. This suggests that the modifications proposed in the TPLPSO are no more complex than these algorithms, even though the search performance (i.e., Emean, SR, and SP) achieved by the former algorithm is much better than that of the latter three. On the other hand, although the CLPSO and FLPSO-QIW exhibit relatively promising search performance in solving the employed benchmarks, the computational complexities of these two compared peers are much higher than that of the proposed TPLPSO. Specifically, the AC values recorded by the CLPSO and FLPSO-QIW are 1.03E+04 and 1.56E+04, respectively, and these values are almost ten times higher than that of TPLPSO (i.e., 1.07E+03). Based on the experimental results in Table 3.13, it can be concluded that the proposed TPLPSO emerges as a better optimizer than its compared peers. This is because the


Table 3.13 AC results of the TPLPSO and six other PSO variants in D = 50

| Algorithm | T0       | T1       | T̂2      | AC       |
|-----------|----------|----------|----------|----------|
| APSO      | 1.88E−01 | 4.19E+00 | 1.37E+03 | 7.27E+03 |
| CLPSO     | 1.88E−01 | 4.19E+00 | 1.93E+03 | 1.03E+04 |
| FLPSO-QIW | 1.88E−01 | 4.19E+00 | 2.95E+03 | 1.56E+04 |
| FIPS      | 1.88E−01 | 4.19E+00 | 5.86E+02 | 3.09E+03 |
| RPPSO     | 1.88E−01 | 4.19E+00 | 2.22E+02 | 1.16E+03 |
| UPSO      | 1.88E−01 | 4.19E+00 | 2.43E+02 | 1.27E+03 |
| TPLPSO    | 1.88E−01 | 4.19E+00 | 2.05E+02 | 1.07E+03 |

TPLPSO exhibits better search performance than its peer algorithms without severely compromising the complexity of its algorithmic framework.
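The AC column of Table 3.13 is consistent with the standard complexity measure of Suganthan et al. (2005), AC = (T̂2 − T1) / T0; the formula itself is an assumption here, since the text only names the three timings, but it reproduces the tabulated values to within rounding.

```python
# Algorithm complexity measure, assuming the CEC2005 definition
# AC = (T2_hat - T1) / T0, with all three timings in CPU seconds.

def algorithm_complexity(t0, t1, t2_hat):
    return (t2_hat - t1) / t0

# Reproduces the TPLPSO row of Table 3.13 to within rounding:
ac_tplpso = algorithm_complexity(1.88e-1, 4.19e0, 2.05e2)   # ~1.07E+03
```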

3.3.4 Effect of Different Proposed Strategies

In this section, the contribution of each strategy proposed in developing the TPLPSO is studied. These strategies include (1) the peer-learning phase, which acts as the alternative learning phase of TPLPSO, and (2) the SPLS, which is employed as the unique learning strategy for the global best particle. To conduct this study, this research work investigates the performance of (1) TPLPSO without the peer-learning phase (TPLPSO1), (2) TPLPSO without the SPLS (TPLPSO2), (3) TPLPSO that adopts the original TLBO framework (TPLPSO3), and (4) the complete version of TPLPSO. The following modifications are introduced into TPLPSO3 to emulate the classical teaching-learning process of the original TLBO framework. First, in the teaching phase of TPLPSO3, the self-cognitive component in Equation (2.1) is excluded, considering that this component does not play any role in the teaching stage of the original TLBO framework. Second, in the teaching phase of TPLPSO3, the neighborhood best position Pn and the particle's current position Xi in the social component of Equation (2.1) are replaced with the global best position Pg and the classroom's mainstream knowledge Xmean, respectively. Third, the random selection technique is employed in the peer-learning stage of TPLPSO3 to select the exemplars. The SPLS module in TPLPSO3 is kept to ensure that any performance deviation observed between TPLPSO3 and TPLPSO is attributed to the type of learning framework adopted.
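As a concrete illustration of the first two modifications, the sketch below assumes Equation (2.1) is the standard inertia-weight PSO velocity update V = w·V + c1·r1·(Pbest − X) + c2·r2·(Pn − X); all parameter values and function names are generic placeholders, not the thesis' actual settings.

```python
import random

# Contrast between an Equation (2.1)-style velocity update and the TPLPSO3
# teaching-phase variant: cognitive term dropped, and Pn -> Pg, X -> Xmean
# in the social term, as described in the text.

def pso_velocity(v, x, p_best, p_n, w=0.7, c1=2.0, c2=2.0):
    """Standard update with cognitive and social components."""
    r1, r2 = random.random(), random.random()
    return [w * vd + c1 * r1 * (pb - xd) + c2 * r2 * (pn - xd)
            for vd, xd, pb, pn in zip(v, x, p_best, p_n)]

def tplpso3_teaching_velocity(v, p_g, x_mean, w=0.7, c2=2.0):
    """Teaching-phase update: only the modified social term remains."""
    r2 = random.random()
    return [w * vd + c2 * r2 * (pg - xm)
            for vd, pg, xm in zip(v, p_g, x_mean)]
```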


Considering that the main contribution of this research work is to improve the search performance of BPSO by adapting it to the enhanced TLBO framework, it is worth investigating the performance gain achieved by the proposed TPLPSO over the original BPSO. To perform this investigation, the Emean values attained by all these TPLPSO variants were compared with those produced by BPSO. The comparison results are expressed in terms of the percentage improvement (%Improve), as shown below (Lam et al., 2012):

%Improve = [Emean(BPSO) − Emean(Ω)] / Emean(BPSO) × 100%        (3.5)

where Ω denotes the TPLPSO variant. If Ω eclipses BPSO, a positive %Improve is obtained; otherwise, the value is negative. The Emean and %Improve values of all TPLPSO variants in each problem are presented in Table 3.14. Meanwhile, the comparison results of all TPLPSO variants in each problem category are summarized as #BME and average %Improve values in Table 3.15. Tables 3.14 and 3.15 show that the performances of all TPLPSO variants are improved compared with that of BPSO. These results imply that the use of any of the aforementioned strategies, i.e., the peer-learning phase or the SPLS, helps to enhance the search performance of BPSO. Among the compared variants, the complete TPLPSO shows the largest average %Improve value of 81.822%, followed by TPLPSO3, TPLPSO1, and TPLPSO2. By comparing the search accuracy and the %Improve value of the TPLPSO variants in each problem category, it is notable that the alternative peer-learning phase plays an important role in helping the TPLPSO to deal with the rotated fitness landscapes. This is revealed by the experimental findings in Tables 3.14 and 3.15, which report that TPLPSO1 attains the smallest %Improve values in solving the rotated problems as compared to the other TPLPSO variants. Meanwhile, being the unique learning strategy of the global best particle, the SPLS is proven to be more effective in tackling the shifted problems and the complex problems. Another important finding worth highlighting is that integrating all proposed strategies, namely the enhanced TLBO framework and the SPLS module, into one algorithmic framework is crucial to improving the overall search performance of TPLPSO. This

Table 3.14 Comparison of TPLPSO variants with BPSO in 50-D problems, with each entry reported as Emean (%Improve)

| F   | BPSO         | TPLPSO1             | TPLPSO2             | TPLPSO3             | TPLPSO              |
|-----|--------------|---------------------|---------------------|---------------------|---------------------|
| F1  | 4.67E+03 (−) | 0.00E+00 (100.000)  | 1.67E+03 (64.214)   | 0.00E+00 (100.000)  | 0.00E+00 (100.000)  |
| F2  | 2.08E+04 (−) | 3.34E+02 (98.391)   | 1.95E+04 (6.049)    | 0.00E+00 (100.000)  | 0.00E+00 (100.000)  |
| F3  | 2.10E+02 (−) | 4.33E+01 (79.337)   | 4.73E+01 (77.428)   | 4.43E+01 (78.860)   | 4.35E+01 (79.242)   |
| F4  | 1.15E+02 (−) | 5.02E-04 (100.000)  | 2.59E+01 (77.382)   | 0.00E+00 (100.000)  | 0.00E+00 (100.000)  |
| F5  | 1.14E+02 (−) | 6.25E-04 (99.999)   | 3.17E+01 (72.074)   | 0.00E+00 (100.000)  | 0.00E+00 (100.000)  |
| F6  | 3.92E+01 (−) | 0.00E+00 (100.000)  | 6.02E+00 (84.631)   | 0.00E+00 (100.000)  | 0.00E+00 (100.000)  |
| F7  | 1.21E+01 (−) | 0.00E+00 (100.000)  | 1.59E+01 (-31.346)  | 0.00E+00 (100.000)  | 0.00E+00 (100.000)  |
| F8  | 8.09E+00 (−) | 0.00E+00 (100.000)  | 1.60E+00 (80.213)   | 0.00E+00 (100.000)  | 0.00E+00 (100.000)  |
| F9  | 2.57E+04 (−) | 4.89E+02 (98.095)   | 1.22E+04 (52.467)   | 0.00E+00 (100.000)  | 0.00E+00 (100.000)  |
| F10 | 1.08E+02 (−) | 9.17E+01 (15.376)   | 4.74E+01 (56.310)   | 4.56E+01 (57.899)   | 3.40E+01 (68.632)   |
| F11 | 1.70E+02 (−) | 2.06E+02 (-21.173)  | 1.53E+01 (91.000)   | 6.34E+01 (62.707)   | 2.47E+01 (85.471)   |
| F12 | 2.00E+02 (−) | 2.53E+02 (-26.595)  | 1.94E+01 (90.293)   | 1.13E+02 (43.457)   | 2.26E+01 (88.691)   |
| F13 | 2.04E+02 (−) | 1.82E+02 (10.948)   | 0.00E+00 (100.000)  | 0.00E+00 (100.000)  | 0.00E+00 (100.000)  |
| F14 | 5.80E+01 (−) | 5.50E+01 (5.183)    | 4.71E+01 (18.782)   | 4.59E+01 (20.972)   | 4.43E+01 (23.676)   |
| F15 | 2.31E+04 (−) | 2.56E-02 (100.000)  | 2.61E-02 (100.000)  | 1.44E-02 (100.000)  | 1.17E-02 (100.000)  |
| F16 | 7.72E+04 (−) | 6.63E+03 (91.406)   | 3.17E+03 (95.898)   | 5.01E+02 (99.351)   | 3.09E+02 (99.600)   |
| F17 | 9.59E+09 (−) | 1.12E+03 (100.000)  | 4.57E+02 (100.000)  | 9.73E+01 (100.000)  | 6.99E+01 (100.000)  |
| F18 | 2.93E+02 (−) | 3.62E-03 (99.999)   | 2.43E+02 (17.159)   | 4.89E-03 (99.998)   | 6.27E-03 (99.998)   |
| F19 | 3.14E+02 (−) | 5.21E-03 (99.998)   | 1.93E+02 (38.568)   | 5.74E-03 (99.998)   | 5.23E-03 (99.998)   |
| F20 | 7.21E+02 (−) | 0.00E+00 (100.000)  | 6.00E+02 (16.737)   | 0.00E+00 (100.000)  | 0.00E+00 (100.000)  |
| F21 | 1.47E+01 (−) | 3.26E-02 (99.778)   | 1.34E+00 (90.890)   | 2.85E-02 (99.806)   | 1.99E-02 (99.864)   |
| F22 | 4.99E+01 (−) | 3.92E+01 (21.513)   | 7.45E+00 (85.077)   | 4.67E-01 (99.065)   | 3.97E-01 (99.205)   |
| F23 | 7.05E+02 (−) | 6.64E-01 (99.906)   | 4.73E-01 (99.933)   | 3.44E-02 (99.995)   | 4.24E-03 (99.999)   |
| F24 | 2.10E+01 (−) | 2.09E+01 (0.471)    | 2.09E+01 (0.506)    | 2.09E+01 (0.726)    | 2.09E+01 (0.569)    |
| F25 | 6.11E+08 (−) | 7.01E+06 (98.854)   | 5.73E+08 (6.289)    | 4.19E+06 (99.315)   | 3.55E+06 (99.419)   |
| F26 | 2.64E+04 (−) | 2.06E+00 (99.992)   | 1.74E+03 (93.399)   | 1.76E+00 (99.993)   | 1.68E+00 (99.994)   |
| F27 | 5.35E+02 (−) | 4.94E+02 (7.729)    | 3.83E+02 (28.494)   | 2.88E+02 (46.200)   | 2.91E+02 (45.646)   |
| F28 | 5.59E+02 (−) | 5.37E+02 (3.956)    | 4.35E+02 (22.156)   | 3.71E+02 (33.572)   | 3.51E+02 (37.204)   |
| F29 | 1.07E+03 (−) | 9.91E+02 (7.317)    | 9.62E+02 (10.041)   | 9.48E+02 (11.386)   | 9.20E+02 (13.962)   |
| F30 | 1.07E+03 (−) | 9.94E+02 (7.460)    | 9.51E+02 (11.393)   | 9.36E+02 (12.803)   | 9.29E+02 (13.484)   |

Table 3.15 Summarized comparison results of TPLPSO variants with BPSO in each problem category, with each entry reported as #BME (average %Improve)

| Problem category        | BPSO  | TPLPSO1     | TPLPSO2    | TPLPSO3     | TPLPSO      |
|-------------------------|-------|-------------|------------|-------------|-------------|
| Conventional (F1 to F8) | 0 (−) | 5 (97.216)  | 0 (53.831) | 7 (97.357)  | 7 (97.405)  |
| Rotated (F9 to F14)     | 0 (−) | 0 (13.639)  | 3 (68.142) | 2 (64.173)  | 4 (77.745)  |
| Shifted (F15 to F22)    | 0 (−) | 3 (89.087)  | 0 (68.041) | 2 (99.777)  | 6 (99.833)  |
| Complex (F23 to F26)    | 0 (−) | 2 (74.806)  | 1 (50.032) | 1 (75.007)  | 4 (74.995)  |
| Composition (F27 to F30)| 0 (−) | 0 (6.615)   | 0 (18.021) | 1 (25.990)  | 3 (27.574)  |
| Overall (F1 to F30)     | 0 (−) | 10 (63.265) | 4 (55.201) | 13 (78.870) | 24 (81.822) |

observation is proven by the simulation results reported in Tables 3.14 and 3.15. Accordingly, the average %Improve values of TPLPSO3 and TPLPSO are higher than those of the other TPLPSO variants. Finally, it is also proven that the enhanced TLBO framework proposed in this research work is superior to the original TLBO because TPLPSO outperforms TPLPSO3 in the majority of the tested problem categories, in terms of the #BME and average %Improve values. This observation is consistent with Schwefel's hypothesis (Back et al., 1997, Akhtar et al., 2013) because the proposed TPLPSO, which possesses a more accurate modeling of real-world teaching-learning scenarios, has better optimization capability than TPLPSO3 with the original TLBO framework.
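The percentage-improvement metric of Equation (3.5) can be sketched in executable form; a positive value means the variant attains a lower Emean than BPSO.

```python
# Equation (3.5): percentage improvement of a TPLPSO variant over BPSO on one
# problem, computed from their Emean values.

def percent_improve(emean_bpso, emean_variant):
    return (emean_bpso - emean_variant) / emean_bpso * 100.0

# With the rounded Table 3.14 entries for F1, BPSO = 4.67E+03 and
# TPLPSO2 = 1.67E+03, this gives ~64.24, close to the tabulated 64.214
# (the small gap comes from the rounding of the reported Emean values).
improve_f1 = percent_improve(4.67e3, 1.67e3)
```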

3.3.5 Comparison with Other State-of-the-Art Metaheuristic Search Algorithms

In this subsection, the search performance of the proposed TPLPSO is also compared with several state-of-the-art metaheuristic search (MS) algorithms, considering that the latter are also widely used to solve optimization problems. As shown in the following subsections, this performance study begins with a detailed comparison between the TPLPSO and TLBO algorithms. Next, the TPLPSO is compared with five other well-established MS algorithms to further verify the former's optimization capability.


3.3.5(a) Comparison between TPLPSO and TLBO

Considering that the development of the proposed TPLPSO is inspired by the recently proposed TLBO (Rao et al., 2011, Rao et al., 2012), it is crucial to investigate whether the former's search performance could outperform the latter's. To conduct this investigation, this research work performs a simulation comparison between our own implementations of the TLBO and TPLPSO on the 30 benchmark problems presented in Table 2.6. The experimental settings of this comparative study are described as follows: (1) both the TPLPSO and TLBO are employed to solve the 30-dimensional (30-D) benchmark problems, (2) both algorithms are run independently 30 times to reduce random discrepancy, (3) both algorithms are terminated if the exact solution has been found or the maximum number of fitness evaluations (FEmax) is reached, and (4) the population size S and FEmax used by the TPLPSO and TLBO in solving the 30-D benchmark problems are set as 20 and 1.00E+05, respectively, according to the recommendations of Li et al. (2012) and Suganthan et al. (2005). The mean error (Emean), standard deviation (SD), and Wilcoxon test result (h) attained by the TPLPSO and TLBO are reported in Table 3.16. Apart from this, the convergence curves of TPLPSO and TLBO in selected problems are also illustrated in Figure 3.7 to compare the convergence speeds of the tested algorithms. In this study, a total of ten representative convergence curves, i.e., two from each of the conventional (F7 and F8), rotated (F9 and F12), shifted (F18 and F20), complex (F25 and F26), and composition (F27 and F29) problems, are presented. Based on the experimental results reported in Table 3.16, it is observed that the proposed TPLPSO outperforms the TLBO in terms of search accuracy. This is because the Emean values produced by the former algorithm in solving all 30 employed benchmarks are better (i.e., lower) than those of the latter algorithm.
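For reference, the comparison assumes the classical TLBO update rules (Rao et al., 2011); a minimal single-individual sketch, for minimization with greedy acceptance, is given below with all names and values as generic placeholders.

```python
import random

# Single-individual sketch of the classical TLBO teacher and learner phases:
# move toward the teacher relative to the class mean, then learn from a
# randomly paired peer; keep a candidate only if it improves the fitness f.

def teacher_phase(x, teacher, class_mean, f):
    tf = random.choice((1, 2))                       # teaching factor
    r = random.random()
    candidate = [xi + r * (ti - tf * mi)
                 for xi, ti, mi in zip(x, teacher, class_mean)]
    return candidate if f(candidate) < f(x) else x   # greedy acceptance

def learner_phase(x, peer, f):
    r = random.random()
    if f(peer) < f(x):                               # move toward a better peer
        candidate = [xi + r * (pi - xi) for xi, pi in zip(x, peer)]
    else:                                            # move away from a worse peer
        candidate = [xi + r * (xi - pi) for xi, pi in zip(x, peer)]
    return candidate if f(candidate) < f(x) else x
```

The greedy acceptance in both phases guarantees that an individual's fitness never worsens within one TLBO iteration, which is the behavior the comparison above relies on.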
Compared with the TLBO, which only manages to locate the near-global optima of functions F2, F4, and F13, the proposed TPLPSO successfully locates the global optima of seven conventional problems (i.e., F1, F2, and F4 to F8), two rotated problems (i.e., F9 and F13), and one shifted problem (i.e., F20), solving these functions with Emean = 0.00E+00.

Table 3.16 The Emean, SD, and h values of TPLPSO and TLBO in 30-D problems

| F   | TLBO Emean | TLBO SD  | TPLPSO Emean | TPLPSO SD | h   |
|-----|------------|----------|--------------|-----------|-----|
| F1  | 7.10E+02   | 1.57E+03 | 0.00E+00     | 0.00E+00  | (+) |
| F2  | 4.50E-10   | 1.72E-09 | 0.00E+00     | 0.00E+00  | (=) |
| F3  | 3.92E+01   | 1.50E+01 | 2.51E+01     | 7.04E-01  | (+) |
| F4  | 2.22E+02   | 2.20E+01 | 0.00E+00     | 0.00E+00  | (+) |
| F5  | 1.91E+02   | 2.17E+01 | 0.00E+00     | 0.00E+00  | (+) |
| F6  | 7.27E+00   | 1.19E+01 | 0.00E+00     | 0.00E+00  | (+) |
| F7  | 6.39E+00   | 2.38E+00 | 0.00E+00     | 0.00E+00  | (+) |
| F8  | 3.27E+01   | 2.64E+00 | 0.00E+00     | 0.00E+00  | (+) |
| F9  | 2.98E-04   | 1.63E-03 | 0.00E+00     | 0.00E+00  | (=) |
| F10 | 3.37E+01   | 3.50E+01 | 2.51E+01     | 7.04E-01  | (+) |
| F11 | 2.81E+02   | 4.14E+01 | 6.74E+00     | 2.07E+01  | (+) |
| F12 | 2.74E+02   | 3.86E+01 | 2.33E+01     | 4.28E+01  | (+) |
| F13 | 2.15E-05   | 1.18E-04 | 0.00E+00     | 0.00E+00  | (=) |
| F14 | 3.67E+01   | 7.41E+01 | 2.48E+01     | 6.22E+01  | (+) |
| F15 | 3.35E-03   | 1.42E-02 | 2.52E-03     | 1.08E-02  | (+) |
| F16 | 2.88E+02   | 1.49E+01 | 1.07E+02     | 4.46E+01  | (+) |
| F17 | 9.58E+02   | 2.66E+03 | 3.51E+02     | 8.92E+02  | (+) |
| F18 | 2.73E+02   | 3.97E+01 | 1.29E-02     | 6.75E-03  | (+) |
| F19 | 2.69E+02   | 5.01E+01 | 1.56E-02     | 8.01E-03  | (+) |
| F20 | 4.88E+01   | 1.41E+02 | 0.00E+00     | 0.00E+00  | (=) |
| F21 | 4.35E-02   | 1.12E-02 | 3.91E-02     | 1.00E-02  | (=) |
| F22 | 2.46E+01   | 2.66E+00 | 4.10E-01     | 2.04E-01  | (+) |
| F23 | 1.02E+00   | 5.33E-02 | 9.46E-01     | 9.00E-02  | (+) |
| F24 | 2.11E+01   | 4.66E-02 | 2.09E+01     | 8.07E-02  | (+) |
| F25 | 7.58E+06   | 6.05E+06 | 3.12E+06     | 1.56E+06  | (+) |
| F26 | 5.59E+02   | 1.13E+03 | 1.29E+00     | 2.72E-01  | (+) |
| F27 | 2.38E+02   | 1.23E+02 | 2.14E+02     | 6.36E+01  | (+) |
| F28 | 2.84E+02   | 1.43E+02 | 2.19E+02     | 1.05E+02  | (+) |
| F29 | 9.14E+02   | 1.86E+01 | 8.98E+02     | 2.94E+01  | (+) |
| F30 | 9.15E+02   | 1.78E+01 | 9.03E+02     | 3.78E+01  | (+) |

The excellent search accuracy of TPLPSO over the TLBO is further verified by the pairwise comparison results (i.e., h values) produced by the Wilcoxon test in Table 3.16. Accordingly, the search accuracies of TPLPSO in solving the majority (i.e., 26 out of 30) of the tested problems significantly outperform those of TLBO (i.e., h = '+'). On the other hand, the Wilcoxon test does not show any significant improvement of TPLPSO over TLBO in solving the functions F2, F9, F13, and F22, despite the largely different Emean values produced by the two algorithms. This scenario occurs because, when an algorithm such as TLBO is run for a predefined number of independent runs, there is a small probability of it being trapped in inferior regions of the search space, thereby producing a large objective function value (ObjV). This large ObjV, which is occasionally produced due to random discrepancy, tends to deteriorate the overall Emean value of the TLBO.



Figure 3.7: Convergence curves of 30-D problems: (a) F7, (b) F8, (c) F9, (d) F12, (e) F18, and (f) F20.

Meanwhile, the convergence curves illustrated in Figure 3.7 show that the proposed TPLPSO has a more competitive convergence speed than the TLBO. Specifically, the convergence curves of the TPLPSO tend to drop sharply at one point, usually at the early stage



Figure 3.7 (Continued): Convergence curves of 30-D problems: (g) F25, (h) F26, (i) F27, and (j) F29.

[functions F7, F8, and F20, as illustrated in Figures 3.7(a), 3.7(b), and 3.7(f), respectively] or the middle stage [function F9, as illustrated in Figure 3.7(c)] of optimization. These observations verify the excellent search efficiency of TPLPSO over the TLBO, considering that the former is able to locate the global optima of the given problems without consuming an excessive number of FEs. Although the proposed TPLPSO did not manage to find the global optima of the functions F12, F18, F25, F26, F27, and F29, the convergence curves of these functions [as illustrated in Figures 3.7(d), 3.7(e), 3.7(g), 3.7(h), 3.7(i), and 3.7(j), respectively] reveal the promising convergence characteristics of TPLPSO. These convergence curves show that the convergence speed of TPLPSO, especially during the early stage of optimization, outperforms that of TLBO by different margins. Specifically, the largest


outperformance margin is observed in function F18 [as illustrated in Figure 3.7(e)], followed by functions F12, F26, F25, F27, and F29 [as illustrated in Figures 3.7(d), 3.7(h), 3.7(g), 3.7(i), and 3.7(j), respectively]. The appealing convergence characteristic of TPLPSO enables it to locate and exploit the promising regions of the search space earlier than the TLBO. Thus, the TPLPSO has a better chance of achieving better solutions than the TLBO.

3.3.5(b) Comparison between TPLPSO and Other MS Algorithms In this subsection, a total of five MS algorithms are selected for the comparative study with TPLPSO. These MS algorithms include the Real-Coded Chemical Reaction Optimization (RCCRO) (Lam et al., 2012), Group Search Optimization (GSO) (He et al., 2009), Real-Coded Biogeography-Based Optimization (RCBBO) (Gong et al., 2010), Covariance Matrix Adaptation Evolution Strategy (CMAES) (Hansen and Ostermeier, 2001), and the Generalized Generation Gap Model with Generic Parent-Centric Crossover Operator (G3PCX) (Deb et al., 2002). A brief description of each compared MS algorithm is provided as follows. RCCRO is a real-coded version of chemical reaction optimization (CRO) (Lam and Li, 2010), and the search mechanism of this MS algorithm is inspired by chemical reactions. GSO mimics animals' search behavior based on the producer-scrounger (PS) model. RCBBO is the real-coded version of biogeography-based optimization (BBO) (Simon, 2008), which is developed from the geographical distribution of biological organisms. CMAES is a classical evolution strategy (CES) (Beyer and Schwefel, 2002) variant that is improved with a restart mechanism and an increasing population size mechanism. Finally, G3PCX is developed by integrating the elite-preserving and scaling modules, as well as the parent-centric recombination operator, into the classical genetic algorithm (GA) (Melanie, 1999). The search performance of the proposed TPLPSO is compared with the five mentioned MS algorithms across ten 30-D conventional problems. The parameter values of the involved MS algorithms are set based on the recommendations of their respective authors. The maximum fitness evaluation numbers (FEmax) and population sizes of all tested algorithms

Table 3.17 Maximum fitness evaluation number (FEmax) of the compared MS algorithms in 30-D problems

Function      | RCCRO    | GSO      | RCBBO    | CMAES    | G3PCX    | TPLPSO
Sphere        | 1.50E+05 | 1.50E+05 | 1.50E+05 | 1.50E+05 | 1.50E+05 | 1.00E+05
Schwefel 2.22 | 1.50E+05 | 1.50E+05 | 2.00E+05 | 1.50E+05 | 1.50E+05 | 1.00E+05
Schwefel 1.2  | 2.50E+05 | 2.50E+05 | 5.00E+05 | 2.50E+05 | 2.50E+05 | 1.00E+05
Schwefel 2.21 | 1.50E+05 | 1.50E+05 | 5.00E+05 | 1.50E+05 | 1.50E+05 | 1.00E+05
Rosenbrock    | 1.50E+05 | 1.50E+05 | 5.00E+05 | 1.50E+05 | 1.50E+05 | 1.00E+05
Step          | 1.50E+05 | 1.50E+05 | 1.50E+05 | 1.50E+05 | 1.50E+05 | 1.00E+05
Quartic       | 1.50E+05 | 1.50E+05 | 3.00E+05 | 1.50E+05 | 1.50E+05 | 1.00E+05
Rastrigin     | 2.50E+05 | 2.50E+05 | 3.00E+05 | 2.50E+05 | 2.50E+05 | 1.00E+05
Ackley        | 1.50E+05 | 1.50E+05 | 1.50E+05 | 1.50E+05 | 1.50E+05 | 1.00E+05
Griewank      | 1.50E+05 | 1.50E+05 | 3.00E+05 | 1.50E+05 | 1.50E+05 | 1.00E+05

Table 3.18 Population size (S) of the compared MS algorithms in 30-D problems

Algorithm | RCCRO | GSO | RCBBO | CMAES         | G3PCX | TPLPSO
S         | 10*   | 48  | 100   | 4 + ⌊3 ln(D)⌋ | 100   | 20

* Except for the Rosenbrock, Rastrigin, and Griewank functions, where the population size is S = 20.

for all functions are summarized in Tables 3.17 and 3.18, respectively. It is important to mention that the proposed TPLPSO is evaluated with the smallest FEmax on all tested functions, whereas the results of the other compared MS algorithms are obtained with higher FEmax, considering that their data are acquired from the published results (Hansen and Ostermeier, 2001, Deb et al., 2002, He et al., 2009, Gong et al., 2010, Lam et al., 2012). The simulation results, in terms of Emean and SD, of each tested algorithm are reported in Table 3.19. Based on these experimental results, it can be observed that the proposed TPLPSO yields the best search performance on almost all tested functions, despite being assigned the lowest FEmax and a relatively small population size. More particularly, TPLPSO produces the lowest Emean values on eight out of the ten tested problems. It is also noteworthy that TPLPSO is the only algorithm able to locate the global optima of the Sphere, Schwefel 2.22, Schwefel 1.2, Schwefel 2.21, and Ackley functions, by producing Emean = 0.00E+00. Although the CMAES and G3PCX outperform the TPLPSO on the Rosenbrock and Quartic functions, the latter algorithm outperforms the former two on the remaining functions.
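For reference, Emean and SD here are, presumably, the sample mean and standard deviation of the absolute error |f(x_best) − f(x*)| collected over the independent runs; a minimal sketch (function name chosen for illustration):

```python
def error_stats(best_fitnesses, f_optimum):
    """Mean error (Emean) and sample standard deviation (SD) of the
    absolute errors over a set of independent runs."""
    errs = [abs(f - f_optimum) for f in best_fitnesses]
    emean = sum(errs) / len(errs)
    var = sum((e - emean) ** 2 for e in errs) / (len(errs) - 1)
    return emean, var ** 0.5

# e.g., three runs ending at fitness 1, 2, and 3 on a problem whose optimum is 0:
# error_stats([1.0, 2.0, 3.0], 0.0) -> (2.0, 1.0)
```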


Table 3.19 Comparisons between TPLPSO and other tested MS algorithms in 30-D problems (Emean, with SD in parentheses)

Function      | RCCRO               | GSO                 | RCBBO               | CMAES               | G3PCX               | TPLPSO
Sphere        | 6.43E−07 (2.09E−07) | 1.95E−08 (1.16E−08) | 1.39E−03 (5.50E−04) | 6.09E−29 (1.55E−29) | 6.40E−79 (1.25E−78) | 0.00E+00 (0.00E+00)
Schwefel 2.22 | 2.19E−03 (4.34E−04) | 3.70E−05 (8.62E−05) | 7.99E−02 (1.44E−02) | 3.48E−14 (4.03E−15) | 2.80E+01 (1.01E+01) | 0.00E+00 (0.00E+00)
Schwefel 1.2  | 2.97E−07 (1.15E−07) | 5.78E+00 (3.68E+00) | 2.27E+01 (1.03E+01) | 1.51E−26 (3.64E−27) | 1.06E−76 (1.53E−76) | 0.00E+00 (0.00E+00)
Schwefel 2.21 | 9.32E−03 (3.66E−03) | 1.08E−01 (3.99E−02) | 3.09E−02 (7.27E−03) | 3.99E−15 (5.31E−16) | 4.54E+01 (8.09E+00) | 0.00E+00 (0.00E+00)
Rosenbrock    | 2.71E+01 (3.43E+01) | 4.98E+01 (3.02E+01) | 5.54E+01 (3.52E+01) | 5.58E−01 (1.39E+00) | 3.09E+00 (1.64E+01) | 2.50E+01 (9.63E−01)
Step          | 0.00E+00 (0.00E+00) | 1.60E−02 (1.33E−01) | 0.00E+00 (0.00E+00) | 7.00E−02 (2.93E−01) | 9.46E+01 (5.97E+01) | 0.00E+00 (0.00E+00)
Quartic       | 5.41E−03 (2.99E−03) | 7.38E−02 (9.26E−02) | 1.75E−02 (6.43E−03) | 2.21E−01 (8.65E−02) | 9.80E−01 (4.63E−01) | 8.76E+00 (5.14E−01)
Rastrigin     | 9.08E−04 (2.88E−04) | 1.02E+00 (9.51E−01) | 2.62E−02 (9.76E−03) | 4.95E+01 (1.23E+01) | 1.74E+02 (3.20E+01) | 0.00E+00 (0.00E+00)
Ackley        | 1.94E−03 (4.19E−04) | 2.66E−05 (3.08E−05) | 2.51E−02 (5.51E−03) | 4.61E+00 (8.73E+00) | 1.35E+01 (4.82E+00) | 0.00E+00 (0.00E+00)
Griewank      | 1.12E−02 (1.62E−02) | 3.08E−02 (3.09E−02) | 4.82E−01 (8.49E−02) | 7.40E−04 (2.39E−03) | 1.13E−02 (1.31E−02) | 0.00E+00 (0.00E+00)
w/t/l         | 8/1/1               | 9/0/1               | 8/1/1               | 8/0/2               | 8/0/2               |
#BME          | 2                   | 0                   | 1                   | 1                   | 0                   | 8

3.3.6 Comparison in Real-World Problems This subsection presents a comparative analysis of the proposed TPLPSO in solving three engineering application problems, namely, (1) the gear train design problem (Sandgren, 1990), (2) the frequency-modulated (FM) sound synthesis problem (Das and Suganthan, 2010), and (3) the spread spectrum radar polyphase code design problem (Das and Suganthan, 2010). The general descriptions and the mathematical models of these engineering applications have been presented in detail in Section 2.4.2. All six PSO variants employed in the previous experiments (Section 3.3.3) are compared with the proposed TPLPSO on the three mentioned engineering design problems. The experimental settings for each problem are presented as follows. For the four-dimensional (4-D) gear train design problem, the algorithms' population size (S) and maximum fitness evaluation number (FEmax) are set to 10 and 3.00E+04, respectively. The six-dimensional (6-D) FM sound synthesis problem is solved with S = 10 and FEmax = 3.00E+04. Finally, for the 20-D spread spectrum radar polyphase code design problem, the values of S and FEmax are set to 20 and 2.00E+05, respectively. The experimental settings for


Table 3.20 Experimental settings for the three real-world engineering design problems

Parameter | Gear train design | FM sound synthesis | Spread spectrum radar polyphase code design
D         | 4                 | 6                  | 20
S         | 10                | 10                 | 20
FEmax     | 2.00E+04          | 3.00E+04           | 2.00E+05

Table 3.21 Simulation results of TPLPSO and six other PSO variants in the gear train design problem

Algorithm | Emean    | SD       | h | tmean
APSO      | 1.28E-08 | 1.70E-08 | + | 1.11E+02
CLPSO     | 7.06E-11 | 1.77E-10 |   | 1.02E+02
FLPSO-QIW | 3.34E-10 | 5.78E-10 |   | 1.17E+02
FIPS      | 5.59E-09 | 3.70E-08 | + | 9.14E+01
RPPSO     | 2.43E-07 | 3.79E-07 | + | 9.16E+01
UPSO      | 6.90E-08 | 2.25E-07 | + | 5.89E+01
TPLPSO    | 1.36E-09 | 1.57E-09 |   | 3.76E+01

these three engineering design problems are summarized in Table 3.20. The simulation results over 30 independent runs for the gear train design, FM sound synthesis, and spread spectrum radar polyphase code design problems are presented in Tables 3.21, 3.22, and 3.23, respectively. Specifically, the values of Emean, SD, h, and the mean computational time (tmean) obtained by each compared algorithm are reported in these tables. Table 3.21 reveals that all PSO variants exhibit competitive search accuracy in solving the gear train design problem, considering that these tested algorithms successfully achieve Emean values with an accuracy level of less than 10^-6. Among the seven PSO variants, the proposed TPLPSO achieves the third best Emean value, i.e., its search accuracy in solving the gear train design problem significantly outperforms that of the APSO, FIPS, RPPSO, and UPSO. Although the Emean value produced by the TPLPSO in the gear train design problem is relatively inferior to those of CLPSO and FLPSO-QIW, the former is about 2.71 times faster than the latter two in terms of computational overhead (represented by tmean). A similar observation can be made for the FM sound synthesis problem, where Table 3.22 reports the proposed TPLPSO as the fourth best optimizer for this problem. The Emean value obtained by the TPLPSO is lower than those of the APSO, RPPSO, and UPSO, suggesting that the former exhibits superior search accuracy in solving this problem compared with the latter three. Although the CLPSO and FLPSO-QIW produce more desirable
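For concreteness, the gear train design problem (Sandgren, 1990) is commonly stated as finding four integer tooth counts x = (x1, x2, x3, x4), each in [12, 60], that minimize the squared deviation of the realized gear ratio from the required ratio 1/6.931. A sketch of the objective follows; the variable ordering is one common convention and may differ from the model given in Section 2.4.2:

```python
def gear_train_cost(x):
    """Squared error between the required gear ratio 1/6.931 and the
    ratio produced by the four integer tooth counts x = (x1, x2, x3, x4)."""
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

# A well-known near-optimal design: gear_train_cost((19, 16, 43, 49))
# evaluates to a value on the order of 1e-12.
```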


Table 3.22 Simulation results of TPLPSO and six other PSO variants in the FM sound synthesis problem

Algorithm | Emean    | SD       | h | tmean
APSO      | 2.06E+01 | 5.46E+00 | + | 1.11E+02
CLPSO     | 1.03E+01 | 6.35E+00 |   | 1.07E+02
FLPSO-QIW | 5.23E+00 | 5.90E+00 |   | 1.14E+02
FIPS      | 1.19E+01 | 5.77E+00 |   | 9.01E+01
RPPSO     | 1.89E+01 | 4.37E+00 | + | 7.89E+01
UPSO      | 2.32E+01 | 2.76E+00 | + | 5.84E+01
TPLPSO    | 1.42E+01 | 7.23E+00 |   | 5.09E+01

Table 3.23 Simulation results of TPLPSO and six other PSO variants in the spread spectrum radar polyphase code design problem

Algorithm | Emean    | SD       | h | tmean
APSO      | 1.33E+00 | 1.92E-01 | + | 4.96E+02
CLPSO     | 1.08E+00 | 7.81E-02 | + | 7.22E+02
FLPSO-QIW | 1.02E+00 | 6.88E-02 | + | 9.64E+02
FIPS      | 1.04E+00 | 1.47E-01 | + | 2.75E+02
RPPSO     | 1.10E+00 | 1.73E-01 | + | 5.03E+02
UPSO      | 1.40E+00 | 2.06E-01 | + | 2.23E+02
TPLPSO    | 9.28E-01 | 1.41E-01 |   | 2.08E+02

Emean values than TPLPSO, their excellent search accuracies in solving the FM sound synthesis problem come at the cost of huge computational overheads. Meanwhile, Table 3.23 shows that all involved PSO variants exhibit similar search accuracy in tackling the spread spectrum radar polyphase code design problem, considering that the Emean values produced are relatively similar. Among the seven tested algorithms, the proposed TPLPSO achieves the best Emean value, implying that it has the most superior search accuracy in solving the radar polyphase code design problem. The excellent performance of TPLPSO is further validated by the Wilcoxon test, given that the h values in Table 3.23 reveal that the search accuracy of the TPLPSO is statistically better than that of the other six PSO variants. Meanwhile, the mean computational time required by the TPLPSO to solve the spread spectrum radar polyphase code design problem is also the lowest, implying that the modifications proposed in the TPLPSO do not significantly increase the algorithm's computational overhead. Based on the experimental results reported in Tables 3.21 to 3.23, it is observed that most compared PSO variants are unable to balance the performance improvement in terms of search accuracy against the additional computational overhead incurred. For example, although the CLPSO and FLPSO-QIW can generally tackle the three engineering design problems


with promising Emean values, the tmean values they require are significantly high. On the other hand, the RPPSO and UPSO, which generally consume less computational overhead, tend to exhibit inferior search accuracy in solving the tested problems. Unlike the compared peers, the proposed TPLPSO is able to tackle the given engineering design problems with comparable search accuracy and a much lower tmean value. The prominent performance of TPLPSO, in terms of computational overhead, suggests that it is a suitable candidate for tackling real-world engineering applications.
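For reference, the FM sound synthesis problem (Das and Suganthan, 2010) fits the six parameters p = (a1, w1, a2, w2, a3, w3) of a nested frequency-modulated wave to a target wave. A widely used formulation, with theta = 2*pi/100 and target parameters (1.0, 5.0, 1.5, 4.8, 2.0, 4.9), is sketched below; the exact constants may differ slightly from the model in Section 2.4.2:

```python
import math

THETA = 2 * math.pi / 100
TARGET = (1.0, 5.0, 1.5, 4.8, 2.0, 4.9)  # commonly used target parameters

def fm_wave(p, t):
    # Nested FM wave: a1*sin(w1*t*theta + a2*sin(w2*t*theta + a3*sin(w3*t*theta)))
    a1, w1, a2, w2, a3, w3 = p
    return a1 * math.sin(w1 * t * THETA
                         + a2 * math.sin(w2 * t * THETA
                                         + a3 * math.sin(w3 * t * THETA)))

def fm_cost(p):
    """Squared error between the candidate wave and the target wave
    sampled at t = 0, 1, ..., 100."""
    return sum((fm_wave(p, t) - fm_wave(TARGET, t)) ** 2 for t in range(101))
```

By construction, the cost is zero at the target parameters and strictly positive elsewhere, which is what makes the problem a useful minimization benchmark.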

3.3.7 Discussion The simulation results reported in the previous subsections verify the superior search accuracy, search reliability, and search efficiency of the proposed TPLPSO compared with six other well-established PSO variants and six existing MS algorithms. Focusing on the algorithmic framework, it can be concluded that the excellent performance of TPLPSO is attributed to the two proposed strategies, i.e., the enhanced TLBO framework and the SPLS module. The modifications proposed for TPLPSO allow for a better emulation of real-world teaching-learning scenarios compared with the original TLBO. The experimental results presented in Tables 3.14, 3.15, and 3.16 are consistent with Schwefel's hypothesis (Back et al., 1997, Akhtar et al., 2013) because TPLPSO, which possesses a more accurate modeling of real-world teaching-learning scenarios, has a better optimization capability than the variants with the original TLBO framework. In the context of algorithmic design, the enhanced TLBO framework can be used to balance the exploration and exploitation searches of the swarm during the optimization. Specifically, both the teaching phase and scenario 1 of the peer-learning phase encourage the exploitation process because the student particles are attracted towards the teacher or peer particles with better knowledge (i.e., fitness). These two mechanisms enhance the likelihood of the student particles improving their knowledge and subsequently converging towards the promising regions of the search space. Meanwhile, scenario 2 of the peer-learning phase promotes the exploration process, considering that the student particles are

encouraged to repel away from the peer particles with inferior fitness. This mechanism is important in preserving the swarm diversity and thus useful in tackling the premature convergence issue. Meanwhile, the unique learning strategy designed for the teacher particle, i.e., the SPLS module, also plays a significantly different role. The random perturbation mechanism embedded in the SPLS further increases the exploration capability of the teacher particle by providing it with an extra amount of potential exploration moves when it is trapped in a local optimum or at any arbitrary point in the search space. This mechanism serves as an effective countermeasure for the teacher particle against stagnation. Although the development of TPLPSO is motivated by Schwefel's hypothesis, which suggests that a more accurate modeling of a real-world scenario can lead to a better nature-inspired optimization algorithm, scenarios opposite to Schwefel's hypothesis have been observed in particular studies. For example, in a study on ant colony optimization (ACO) (Dorigo and Blum, 2005), earlier algorithms were much more plausible models of the actual biological pheromone-laying process than later state-of-the-art ACO variants, which traded biological plausibility/accuracy for performance on optimization problems. In genetic algorithms (GA) (Goldberg and Holland, 1988, Weise, 2008), uniform crossover is generally preferable over the biologically inspired one-point crossover. These observations suggest that simultaneously modeling a particular existing system accurately and solving an optimization problem efficiently is nearly impossible. While such claims are reasonable, it is important to emphasize that the scenario in this research work (i.e., TLBO) is different from those in ACO and GA. Specifically, unlike TLBO, the mathematical models of the original ACO and GA are very well developed and reflect the actual biological processes in a very accurate manner.
In such cases, further improving the accuracy of the mathematical models of GA and ACO is impossible. Thus, researchers must explore other alternatives, such as introducing artificial mechanisms that compromise the modeling accuracy, to further enhance the optimization capabilities of these algorithms. Conversely, the observations on the original TLBO framework reveal that many of its working mechanisms do not truly reflect the actual teaching-learning

paradigm. The optimization capability of the original TLBO is not as fully exploited as those of ACO and GA, considering that the former has not used the full potential of the teaching-learning process in the classical classroom paradigm. Despite exhibiting competitive search performance, some design issues of the proposed TPLPSO need to be highlighted. Specifically, the implementations of the algorithmic components, as presented in Sections 3.2.4 and 3.2.5, clearly show that the teaching phase of the TPLPSO is exploitative because the student particles are attracted towards the teacher particle. On the other hand, the SPLS focuses on enhancing the exploration capability of the teacher particle. Among the proposed strategies, only the alternative learning phase (i.e., the peer-learning phase) of the TPLPSO plays a role in adjusting the exploration and exploitation searches of the swarm, via the attraction and repulsion mechanisms of the student particles, respectively. Nevertheless, the execution of these two mechanisms depends on the fitness of the peer particles that are selected via the roulette wheel selection technique. Therefore, the strategy used by the peer-learning phase in controlling the exploration/exploitation searches of the TPLPSO swarm is highly probabilistic and less adaptive. In this situation, two extreme scenarios can be anticipated. The first extreme scenario is that only the attraction mechanism is triggered in the peer-learning phase, because all student particles select peer particles with better fitness than their own. The other extreme scenario is that all student particles select peer particles with worse fitness than their own, and therefore only the repulsion mechanism is emphasized in the peer-learning phase.
Both of these scenarios are undesirable because the former introduces an excessive exploitation search, which leads to premature convergence, while the latter prevents the swarm from converging towards the promising regions of the search space by overemphasizing the exploration search. Additionally, it is also observed that the knowledge transfer mechanism in the proposed TPLPSO occurs in one way only, i.e., the teacher particle contributes its knowledge to improve the student particles' knowledge, whereas no feedback of knowledge flows from the latter to the former. Nevertheless, according to van den

Bergh and Engelbrecht (2004), different improved student particles may have good values in different dimensional components, and these useful components might further enhance the knowledge of the teacher particle. Intuitively, the absence of a feedback mechanism between the improved student particles and the teacher particle during the search process might limit the optimization capability of the TPLPSO. To prevent the occurrence of the mentioned scenarios, the following chapter will introduce an adaptive task allocation mechanism into the alternative learning phase of the algorithm. This adaptive task allocation mechanism is expected to systematically assign the particles to exploration or exploitation searches. Apart from this, another unique learning strategy for the global best particle will also be proposed to further enhance the algorithm's search performance. Specifically, a feedback mechanism will be introduced in this new learning strategy to ensure that any useful information obtained by the student particles during the teaching or peer-learning phases can be transferred to the teacher particle. This information is expected to further improve the knowledge of the teacher particle.
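To make the attraction/repulsion mechanism of the peer-learning phase concrete, the sketch below draws a peer by roulette wheel selection (weighted here so that fitter particles, i.e., lower objective values, are more likely to be picked) and then attracts the student towards a better peer or repels it from a worse one. The function names, weighting rule, and coefficient value are illustrative assumptions, not the exact TPLPSO update equations.

```python
import random

def roulette_select(fitnesses, rng):
    """Pick an index with probability proportional to (worst - f);
    for minimization, lower fitness gets a larger selection weight."""
    worst = max(fitnesses)
    weights = [worst - f + 1e-12 for f in fitnesses]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1

def peer_learning_step(x, fx, peer_x, peer_fx, c=1.49, rng=random):
    """Scenario 1 (peer fitter): attract towards the peer (exploitation).
    Scenario 2 (peer worse): repel away from the peer (exploration)."""
    sign = 1.0 if peer_fx < fx else -1.0
    return [xi + sign * c * rng.random() * (pi - xi)
            for xi, pi in zip(x, peer_x)]
```

Because the step direction is decided purely by the selected peer's fitness, this sketch also exposes the issue discussed above: when every student happens to draw a fitter (or a worse) peer, only one of the two mechanisms fires.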

3.4 Summary In this chapter, a new PSO variant, namely the TPLPSO, is proposed. The development of the TPLPSO is motivated by Schwefel's hypothesis, which states that a more accurate modeling of a real-world scenario can lead to a better nature-inspired optimization algorithm. Specifically, this research work focuses on identifying the existing behaviors of the original TLBO that do not accurately represent real-world teaching-learning scenarios. Based on these observations, several appropriate innovations are proposed to refine the original TLBO framework. This enhanced TLBO framework is then adapted into the BPSO to develop the TPLPSO. Similar to the original TLBO, the enhanced TLBO framework employed by the proposed TPLPSO consists of two learning phases, namely the teaching phase and the peer-learning phase. Specifically, the teaching phase and scenario 1 of the peer-learning phase are exploitative, considering that these mechanisms encourage the attraction of student particles

towards the better performing teacher or peer particles. On the other hand, scenario 2 of the peer-learning phase encourages the repulsion of student particles from the inferior peer particles, and therefore this mechanism is considered explorative. Apart from the enhanced TLBO framework, a unique learning strategy called the SPLS is specifically designed to guide the search direction of the teacher particle. The SPLS is considered explorative because it provides extra momentum for the teacher particle to escape from inferior regions of the search space. The simulation results of an extensive performance comparison indicate that TPLPSO dominates its PSO and MS peers in terms of search accuracy, search reliability, and search efficiency. The experimental results also prove the effectiveness of the two proposed strategies, namely the enhanced TLBO framework and the SPLS, in improving the performance of TPLPSO. The modifications proposed for TPLPSO allow a better emulation of real-world teaching-learning scenarios compared with the original TLBO. The proposed TPLPSO, which possesses a more accurate modeling of the real-world teaching-learning scenario, has better optimization capabilities than the variants that employ the original TLBO framework. These observations are consistent with what is anticipated by Schwefel's hypothesis.
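As a concrete illustration of the SPLS idea summarized above, a stochastic perturbation of the teacher (global best) particle can be as simple as a bounded random nudge per dimension. The Gaussian form, the scale parameter, and the function name below are assumptions for illustration, not the exact SPLS operator defined in Section 3.2.5:

```python
import random

def spls_perturb(pg, lower, upper, rng, sigma_frac=0.1):
    """Perturb the teacher (global best) position: each dimension gets a
    Gaussian nudge proportional to that dimension's search range, and the
    result is clipped back into the feasible bounds."""
    new = []
    for j, x in enumerate(pg):
        span = upper[j] - lower[j]
        xj = x + rng.gauss(0.0, sigma_frac * span)
        new.append(min(max(xj, lower[j]), upper[j]))
    return new
```

Such a perturbation would typically be triggered only when the teacher particle has failed to improve for several consecutive iterations, giving it the extra exploration moves needed to escape stagnation.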


CHAPTER 4 ADAPTIVE TWO-LAYER PARTICLE SWARM OPTIMIZATION WITH ELITIST LEARNING STRATEGY

4.1 Introduction In the previous chapter, a new PSO variant, namely the TPLPSO, was proposed. Specifically, an alternative learning phase called the peer-learning phase was introduced in the TPLPSO to provide the particles with new search directions when they fail to improve their fitness in the previous learning phase (i.e., the teaching phase). Apart from this, a unique learning strategy, namely the stochastic perturbation-based learning strategy (SPLS), was also designed to specifically evolve the global best particle Pg of TPLPSO. Extensive comparative studies performed in the previous chapter have proven that (1) the proposed alternative learning phase for the particles and (2) the unique learning strategy for the global best particle are useful for improving the search performance of TPLPSO. Nevertheless, it can be observed that TPLPSO lacks an adaptive and systematic mechanism to control the exploration and exploitation searches of its particles. As mentioned in the earlier chapter, the exploration and exploitation searches need to be well balanced because overemphasizing either of them could lead to the performance deterioration of the algorithm. Another notable issue of the TPLPSO is that its knowledge transfer mechanism occurs in one way. Such a unidirectional learning mechanism might restrict the algorithm's search performance because the useful information obtained by the student particles during the teaching or peer-learning phases is neglected by the teacher particle. To address these shortcomings of TPLPSO, a new PSO variant called the Adaptive Two-Layer PSO with Elitist Learning Strategy (ATLPSO-ELS) is proposed in this chapter. Unlike the TPLPSO, two adaptive task allocation mechanisms are proposed in this chapter to achieve better regulation of the exploration and exploitation searches of the ATLPSO-ELS particles. Apart from this, a new unique learning strategy is also introduced in the ATLPSO-ELS, in

order to establish a two-way interaction between the improved particles and the global best particle. It is noteworthy that although some algorithmic components of the proposed ATLPSO-ELS, i.e., the alternative learning phase and the unique learning strategy, are inherited from the TPLPSO, the implementation details of these components are different, as will be explained in the following sections of this chapter. The remainder of this chapter is organized as follows. First, the algorithmic framework of the proposed ATLPSO-ELS is described in detail. Extensive comparative studies are subsequently performed to investigate the optimization capability of the proposed work. Finally, this chapter is concluded in the last section.

4.2 Adaptive Two-Layer PSO with Elitist Learning Strategy In this section, the research ideas that motivated the development of the proposed ATLPSO-ELS are first provided. A general description of the ATLPSO-ELS is subsequently given, followed by the implementation details of each module used by the proposed algorithm. Finally, the complete framework of the ATLPSO-ELS is presented and some important remarks on this algorithm are highlighted.

4.2.1 Research Ideas of ATLPSO-ELS This subsection aims to explore the deficiencies of the TPLPSO proposed in the previous chapter. Additionally, some useful research findings that inspired the development of the ATLPSO-ELS's search strategies are also discussed. According to the previous chapter, the alternative learning phase of the TPLPSO (i.e., the peer-learning phase) is the only algorithmic component responsible for balancing the exploration and exploitation searches of the TPLPSO particles. Nevertheless, the mechanism employed by the peer-learning phase of TPLPSO in assigning the search task of each particle (i.e., exploration or exploitation) is probabilistic, because it depends on the fitness of the peer particle selected via the roulette wheel selection. As explained in Section 3.3.7, the employment of such a probabilistic mechanism could lead to two extreme

scenarios, i.e., the particles are either all attracted towards the selected peer particles or all repelled away from them. Both of these extreme scenarios are undesirable because the former overemphasizes the exploitation search, which likely causes swarm stagnation, while the latter introduces an excessive exploration search that inhibits swarm convergence. To prevent the mentioned scenarios, it is always desirable to design an adaptive task allocation mechanism in the alternative learning phase. Specifically, this mechanism aims to systematically divide the algorithm's population into two sections, in order to ensure that a certain number of particles always perform each of the exploration and exploitation searches. It is noteworthy that the sizes of these exploration and exploitation sections need not be fixed; they can vary according to indicators such as the swarm diversity and the fitness values of the swarm members. Apart from this, the previous chapter also mentioned that the teaching phase of TPLPSO does not contain any mechanism for regulating the exploration and exploitation searches of the TPLPSO swarm. More particularly, the teaching phase is considered exploitative because the particles in this phase are always attracted towards the teacher particle (i.e., the global best particle Pg). Nevertheless, by inspecting the algorithmic framework of TPLPSO, it can be observed that the teaching phase is performed in every iteration of the algorithm, whereas the peer-learning phase is only triggered when a particle fails to achieve a fitness improvement in its previous teaching phase. Considering that the first learning phase of TPLPSO (i.e., the teaching phase) is executed more frequently than the alternative learning phase (i.e., the peer-learning phase), the former is expected to have a greater influence than the latter in governing the overall optimization outcome.
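One possible shape for such an adaptive task allocation rule is sketched below: the fraction of particles assigned to exploration grows as a normalized swarm-diversity indicator shrinks, and the currently worst particles are sent to explore. The thresholds, fractions, and the worst-particles-explore rule are all illustrative assumptions, not the ATA modules defined later for ATLPSO-ELS:

```python
def allocate_tasks(fitnesses, diversity, d_low=0.1, d_high=0.25):
    """Split the swarm (by index) into an exploration set and an
    exploitation set, sized according to a diversity indicator in [0, 1]."""
    if diversity <= d_low:          # low diversity: push more particles to explore
        explore_frac = 0.7
    elif diversity >= d_high:       # high diversity: let more particles exploit
        explore_frac = 0.3
    else:                           # linear interpolation between the two regimes
        t = (diversity - d_low) / (d_high - d_low)
        explore_frac = 0.7 + t * (0.3 - 0.7)
    order = sorted(range(len(fitnesses)), key=lambda i: fitnesses[i])
    n_explore = int(round(explore_frac * len(fitnesses)))
    # Worst particles explore, best particles exploit (one plausible rule).
    return set(order[-n_explore:]), set(order[:len(fitnesses) - n_explore])
```

The point of such a rule is that both sections are always non-empty, so neither the all-attraction nor the all-repulsion extreme described above can occur.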
It is worth investigating whether the inclusion of another adaptive task allocation mechanism in the first learning phase of TPLPSO could further improve the algorithm's search performance. It is also observed that, during the teaching phase and the peer-learning phase of the TPLPSO, only the current position of each particle is evolved. On the other hand, no specific learning phase has been introduced to evolve the particle's personal best position. According to Epitropakis et al. (2012), two types of swarms exist in the PSO algorithm. The

first type of swarm is known as the current swarm, which consists of the particles' current position vectors in the search space. The second swarm is called the memory swarm because it memorizes the best knowledge attained by all particles (i.e., the self-cognitive and social experiences) during the search process. These two types of swarms exhibit different dynamic behaviors and characteristics in the search space. Specifically, the current swarm is more explorative because its members tend to wander around the search space before gathering around their best experiences. On the other hand, the memory swarm is considered exploitative because its members (i.e., the particles' best experiences) tend to congregate around the optimal regions of the search space and exploit these regions. According to Epitropakis et al. (2012), the strong clustering tendency exhibited by the memory swarm can be used to locate the promising regions of the search space by evolving the memory swarm members. In other words, the findings of Epitropakis et al. (2012) suggest that the different characteristics exhibited by the current swarm and the memory swarm of PSO can be capitalized on to effectively guide the algorithm in searching for the global optimum of a given problem. Finally, it can also be observed that the knowledge transfer mechanism of the TPLPSO is unidirectional, i.e., only the teacher particle contributes its knowledge to improve the student particles' knowledge, whereas no feedback of knowledge flows from the latter to the former. According to van den Bergh and Engelbrecht (2004), different improved particles may have good values in different dimensional components, and this information might be useful to the global best particle.
Moreover, the modern learning paradigm that promotes two-way interaction between the teacher (i.e., teacher particle) and learners (i.e., student particles) has been proven to be a more effective teaching-learning model because this type of interaction allows information exchange between the teacher and the learners (Hamilton and Tee, 2013, Milgate et al., 2011). In light of these observations, it can be deduced that the search performance of TPLPSO can be further improved by capitalizing on the advantage of a bidirectional learning mechanism. Specifically, this mechanism allows the teacher particle to extract useful information from the improved student particle and utilize this information to further evolve the former.

Motivated by the aforementioned findings, two new learning phases, namely the current swarm evolution and the memory swarm evolution, are developed in the proposed ATLPSO-ELS. As their names suggest, the former is employed to guide the search directions of the current swarm members (i.e., the particles' current positions), whereas the latter is responsible for evolving the memory swarm members according to the embedded search mechanisms. Considering the importance of regulating the exploration and exploitation searches during the optimization process, two adaptive task allocation (ATA) modules are also introduced into the current swarm evolution and the memory swarm evolution. Finally, an orthogonal experiment design (OED)-based learning strategy (OEDLS) is proposed to specifically evolve the global best particle by encouraging information exchange between the global best particle and the improved particles. The following subsections cover the general description of the proposed ATLPSO-ELS, as well as the implementation details of all employed algorithmic components.

4.2.2 General Description of ATLPSO-ELS

The general description of the proposed ATLPSO-ELS is provided in this subsection. In general, the development of the ATLPSO-ELS is based on the concept of memetic computing (MC) (Neri and Cotta, 2012), a broad subject that studies complex and dynamic computing structures composed of interacting modules (memes) whose evolution dynamics are inspired by the diffusion of ideas (Neri and Cotta, 2012). Specifically, the ATLPSO-ELS proposed in this research work consists of three interacting and harmoniously coordinated algorithmic components. These algorithmic modules collaborate with each other through information sharing or exchange mechanisms in order to seek the global optimum of a given problem. The general descriptions of the three main algorithmic components of ATLPSO-ELS are provided as follows. The first algorithmic component adopted in the proposed ATLPSO-ELS is known as the two-layer evolution framework. Similar to the TPLPSO, the two-layer evolution

framework employed by the ATLPSO-ELS consists of two learning phases. Nevertheless, unlike the TPLPSO, where both learning phases are used to evolve the current swarm, the two-layer evolution framework of the ATLPSO-ELS performs the current swarm evolution and the memory swarm evolution. As explained in the earlier subsection, the development of the two-layer framework in the ATLPSO-ELS is inspired by the different dynamic behaviors exhibited by the current swarm and the memory swarm of PSO during its evolution. Specifically, the current swarm members, which represent the particles' current positions, tend to explore the search space before congregating around their best experiences (i.e., self-cognitive and social experiences) (Epitropakis et al., 2012). On the other hand, the memory swarm, which carries the particles' best experiences, tends to be distributed in the vicinity of the problem's optima and exploits these regions (Epitropakis et al., 2012). Earlier experimental findings suggest that the strong clustering behavior of the memory swarm members could be capitalized on to improve the search performance of PSO. This is achievable by evolving the particles' best experiences via some intelligent optimization procedures. The second algorithmic component proposed in the ATLPSO-ELS is the adaptive task allocation (ATA) modules, which aim to resolve the intense conflict between the exploration and exploitation searches of PSO. Unlike the TPLPSO, which only performs task allocation in its alternative learning phase, two ATA modules are proposed in ATLPSO-ELS to self-adaptively assign different search tasks to each swarm member during the current swarm evolution and the memory swarm evolution.
Specifically, the proposed ATA modules either direct the swarm members to visit unexplored regions of the search space (i.e., exploration) or direct them to fine-tune the regions of already-known optima (i.e., exploitation), based on each swarm member's fitness and diversity. It is noteworthy that although the ATA modules share some similarities with the evolutionary state estimation (ESE) module used by the APSO (Zhan et al., 2009), some notable differences between these two strategies need to be highlighted. First, the ATA modules proposed in this research work aim to divide the population into the exploration and exploitation sections according to each swarm member's fitness and diversity, whereas the

ESE module is used to determine the evolutionary states of APSO particles. Second, the ATA modules are used to self-adaptively assign different search tasks to different swarm members with different fitness and diversity. Conversely, the outputs of the ESE module are used to tune the inertia weight and the acceleration coefficients of the APSO particle. The third algorithmic component proposed in the ATLPSO-ELS is known as the elitist-based learning strategy (ELS) module. The development of the ELS module is motivated by the fact that the learning strategy used by the global best particle Pg in the majority of PSO variants [e.g., FIPS (Mendes et al., 2004), CLPSO (Liang et al., 2006), and FLPSO-QIW (Tang et al., 2011)] is the same as that used by the remaining particles in the population. Intuitively, some unique learning strategies need to be assigned to the Pg particle, considering that it is the most important particle for guiding the swarm to seek the optimum solution. To this end, two learning strategies, namely the OED-based learning strategy (OEDLS) and the stochastic perturbation-based learning strategy (SPLS), are proposed in the ELS module to specifically evolve the Pg particle when predefined conditions are met. The OEDLS is designed to extract useful information from other particles to further evolve the Pg particle. The bidirectional learning mechanism established in the OEDLS intends to enhance the algorithm's convergence speed by rapidly guiding the best particle in the population towards the global optimum. Meanwhile, the SPLS aims to address the swarm stagnation issue by providing extra momentum to the Pg particle when it is trapped in a local optimum or at an arbitrary point in the search space.

4.2.3 Diversity Metrics

As mentioned in the earlier subsections, two ATA modules are designed in the proposed ATLPSO-ELS to self-adaptively assign different search tasks to each member of the current swarm and the memory swarm based on their respective fitness and diversity values. In this research work, two diversity metrics are introduced to realize the adaptive task allocation mechanisms of ATLPSO-ELS. Specifically, these metrics are known as (1) the population spatial diversity (PSD) metric and (2) the population's fitness spatial diversity (PFSD) metric.

The methodologies employed to compute the PSD and PFSD metrics are presented in the following subsections.

4.2.3(a) PSD Metric

The PSD metric describes the population's solution space diversity and is computed according to the population's spread across the solution space. This metric is employed by the ATLPSO-ELS to adaptively divide the population into the exploration and exploitation sections and to assign different search tasks to each cooperating particle during the memory swarm evolution. In other words, the PSD metric is capable of controlling the relative sizes of the exploration and exploitation sections during the memory swarm evolution. Consider the population of ATLPSO-ELS, which consists of S particles with the personal best positions (i.e., self-cognitive experiences) P = [P1, P2, …, PS]. To compute the PSD of each ATLPSO-ELS particle, a hypothetical particle Pave is first computed as the average over all S particles. Specifically, P_d^{ave} represents the d-th component of this hypothetical particle and is calculated as the average of the d-th components of all P positions as follows:

P_d^{ave} = \frac{1}{S} \sum_{i=1}^{S} P_{i,d}    (4.1)

Once the hypothetical particle Pave is obtained, the PSD metric of each particle i (i.e., PSDi) is subsequently computed from the Euclidean distance between particle i and the hypothetical particle Pave as shown:

psd_i = \sqrt{\sum_{d=1}^{D} (P_{i,d} - P_d^{ave})^2}    (4.2)

PSD_i = \frac{psd_i - psd_{min}}{psd_{max} - psd_{min}}    (4.3)

where D represents the dimensionality of the search space; psd_i denotes the non-normalized spatial diversity of particle i; psd_max and psd_min represent the maximum and minimum non-normalized spatial diversities in the population, obtained from the particles that are farthest from and nearest to Pave, respectively. By inspecting Equations

(4.2) and (4.3), it can be deduced that a particle i whose position Pi is closer to Pave has lower spatial diversity, and vice versa.
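The PSD computation of Equations (4.1) to (4.3) can be sketched in a few lines. The sketch below is illustrative, not part of the thesis: it assumes the personal best positions are held in a NumPy array, and it adds a guard for the degenerate case of all distances being equal, which the equations leave undefined.

```python
import numpy as np

def psd_metric(P):
    """Population spatial diversity (PSD) of Eqs. (4.1)-(4.3).

    P: an (S, D) array of personal best positions.
    Returns a length-S array of normalized PSD values in [0, 1].
    """
    P_ave = P.mean(axis=0)                         # hypothetical average particle, Eq. (4.1)
    psd = np.sqrt(((P - P_ave) ** 2).sum(axis=1))  # Euclidean distances, Eq. (4.2)
    denom = psd.max() - psd.min()
    if denom == 0:                                 # degenerate swarm: all equidistant from P_ave
        return np.zeros_like(psd)
    return (psd - psd.min()) / denom               # min-max normalization, Eq. (4.3)
```

Particles near the swarm's centroid receive PSD values near 0, while outlying particles receive values near 1, which is the ordering the ATAmemory module relies on.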

4.2.3(b) PFSD Metric

The second diversity metric proposed in this research work is known as the PFSD metric. Specifically, the PFSD metric is used to adaptively regulate the particle's exploration strength during the current swarm evolution and the memory swarm evolution of ATLPSO-ELS. Unlike the previously explained PSD metric, the PFSD metric describes the population's solution space diversity from the fitness perspective. In other words, the PFSD of each particle depends on its fitness value. The computation of the PFSD metric involves merging the fitness and solution spaces, in which each particle's distance contribution is weighted based on its fitness. Another hypothetical particle, PW.ave, is introduced to compute the PFSD of each particle. Similar to the previously described Pave, the hypothetical particle PW.ave is also derived from the personal best experiences of ATLPSO-ELS (i.e., P = [P1, P2, …, PS]). It is noteworthy that, unlike Pave, the contribution of each particle i in deriving the hypothetical particle PW.ave is influenced by its fitness value. Mathematically, the degree of influence of each particle i in constructing the hypothetical particle PW.ave is quantified as a weight contribution value Wi, which is calculated as follows:

w_i = \frac{ObjV_{max} - ObjV(P_i)}{ObjV_{max} - ObjV_{min}}    (4.4)

W_i = \frac{w_i}{\sum_{i=1}^{S} w_i}    (4.5)

where w_i represents particle i's non-normalized weight value; ObjV_max and ObjV_min represent the worst and best fitness values observed in the population P = [P1, P2, …, PS], respectively. Once the weight contribution Wi of each particle i is obtained, the weighted average hypothetical particle PW.ave is computed. Mathematically, the d-th component of the hypothetical particle PW.ave (i.e., P_d^{W.ave}) is computed as the fitness-weighted average across the d-th components of all personal best positions in the population (i.e., P = [P1, P2, …, PS]) as shown:

P_d^{W.ave} = \sum_{i=1}^{S} W_i P_{i,d}    (4.6)

To calculate the PFSD metric of each particle i (i.e., PFSDi), particle i's Euclidean distance to the hypothetical particle PW.ave is computed, and this value is then weighted according to particle i's weight contribution Wi as shown in Equation (4.7):

PFSD_i = W_i \sqrt{\sum_{d=1}^{D} (P_{i,d} - P_d^{W.ave})^2}    (4.7)

By inspecting Equation (4.7), it is notable that only a particle i with superior fitness [i.e., a low value of ObjV(Pi)] that is far from PW.ave is assigned a large value of PFSDi. On the other hand, small values of PFSDi are assigned to particles with inferior fitness [i.e., large values of ObjV(Pi)] or particles close to PW.ave.
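Equations (4.4) to (4.7) can likewise be sketched compactly. This illustrative sketch assumes a minimization problem with at least two distinct fitness values (otherwise Equation (4.4) divides by zero); the function name is mine, not the thesis's.

```python
import numpy as np

def pfsd_metric(P, objv):
    """Population fitness spatial diversity (PFSD) of Eqs. (4.4)-(4.7).

    P: (S, D) personal best positions; objv: length-S fitness values
    (minimization, so lower is better).
    """
    w = (objv.max() - objv) / (objv.max() - objv.min())  # non-normalized weights, Eq. (4.4)
    W = w / w.sum()                                      # normalized weights, Eq. (4.5)
    P_wave = W @ P                                       # fitness-weighted hypothetical particle, Eq. (4.6)
    dist = np.sqrt(((P - P_wave) ** 2).sum(axis=1))      # distance to P^{W.ave}
    return W * dist                                      # fitness-weighted distance, Eq. (4.7)
```

Note that the worst particle in the population always receives w_i = 0 and therefore PFSD_i = 0, regardless of how far it sits from PW.ave, matching the behavior described after Equation (4.7).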

4.2.3(c) Remarks

It is worth noting that the computations of the PSD and PFSD metrics in this research work differ from those of the mean distance and evolutionary factor in APSO (Zhan et al., 2009). The differences between these metrics are explained as follows. First, the proposed PSD metric is computed based on the distance between a target particle and a hypothetical particle Pave, whereas no hypothetical particle is derived in APSO to compute the values of the mean distance and evolutionary factor. Second, the PSD and PFSD values assigned to each ATLPSO-ELS particle are different, considering that the computations of these metrics rely on the fitness value of each individual particle as well as its distances from the hypothetical particles Pave and PW.ave. On the other hand, the same value of the evolutionary factor is assigned to all APSO particles.


4.2.4 Current Swarm Evolution

In this subsection, the search mechanism of the first learning phase of ATLPSO-ELS, namely the current swarm evolution, is presented. As mentioned earlier, an adaptive task allocation (ATA) module, called the ATAcurrent module, is developed in the current swarm evolution to adaptively adjust each particle's exploration/exploitation strength based on its PFSD value. The proposed ATAcurrent module adjusts the velocity of each particle i (i.e., Vi) via a selected exemplar particle Pe,i. To generate the Pe,i of each particle i, a tournament size Tsize,i that depends on particle i's PFSDi value is first calculated as shown:

T_{size,i} = \max\left(2, \left\lceil \frac{PFSD_i}{PFSD_{max}} \times T_{size,max} \right\rceil\right)    (4.8)

where \lceil \cdot \rceil denotes the ceiling operator; PFSD_max denotes the maximum PFSD; and T_{size,max} = Z \times S represents the maximum tournament size available to a current swarm member, with the parameter Z ranging from 0 to 1. The Tsize,i value of each particle i is crucial for self-adaptively adjusting the particle's exploration/exploitation strength because it determines the number of candidates that are eligible as exemplar candidates for particle i. Specifically, a particle with a larger Tsize,i value is more exploitative because it is more likely to select a fitter particle as its exemplar. On the contrary, a particle with a smaller Tsize,i value tends to select outlying particles to guide the search and is therefore more explorative. From Equation (4.8), it can be deduced that the Tsize,i of each particle varies linearly with its PFSD value. Accordingly, a particle i with better fitness [i.e., lower ObjV(Pi)] that is distant from PW.ave has stronger exploitation strength because higher values of PFSDi and Tsize,i are assigned to it. Conversely, a particle with inferior fitness [i.e., higher ObjV(Pi)] or a particle close to PW.ave has lower values of PFSDi and Tsize,i and is thus more explorative. Apart from the PFSD value of each particle, the parameter Tsize,max emerges as another key metric to change

Exemplar_index_selection (P, ObjV(P), Pg, particle i, Tsize,i)
Input: Personal best positions (P = [P1, P2, …, PS]) and the corresponding ObjV values (ObjV(P) = [ObjV(P1), ObjV(P2), …, ObjV(PS)]) of all particles, global best particle's position Pg, particle i's index, tournament size of particle i (Tsize,i)
1: Identify the index of the Pg particle and exclude it from the exemplar selection;
2: for each dimension d do
3:     Select Tsize,i exemplar candidates, ECd = [ECd,1, …, ECd,k, …, ECd,Tsize,i], from P;
4:     Identify the candidate with the lowest fitness and store the corresponding index k into exemplar_index(i, d);
5: end for
6: Return exemplar_index(i);
Output: Exemplar index for particle i, i.e., exemplar_index(i)

Figure 4.1: Exemplar index selection in the current swarm's ATAcurrent module.

the value of Tsize,i, and hence it is also essential in tuning the exploration/exploitation strengths of particles during the current swarm evolution of ATLPSO-ELS. Figure 4.1 depicts the exemplar selection procedure for particle i. To obtain the d-th dimensional component of Pe,i, a total of Tsize,i candidates are first randomly selected from the personal best positions of all particles in the population (i.e., P = [P1, P2, …, PS]), excluding the global best particle Pg. Among these Tsize,i exemplar candidates, the d-th dimensional component of the particle k with the best fitness [i.e., lowest ObjV(Pk)] is used as the d-th dimensional component of Pe,i. In this research work, the exemplar index of Pe,i is stored instead of the real position of Pe,i to ensure that new information from the candidates can be used immediately once they successfully update their personal best positions. Particle i in the current swarm updates its velocity as follows:

V_i = V_i + c r_1 (P_{e,i} - X_i)    (4.9)

where c is the acceleration coefficient, set to 2, and r1 is a random number ranging from 0 to 1. To sustain the particle's momentum during the search process, the Vi of particle i is randomly reinitialized when its value becomes zero. The ObjV value of the updated Xi of particle i [i.e., ObjV(Xi)] is then evaluated and compared with the ObjV value of its personal best position [i.e., ObjV(Pi)]. If the updated Xi of particle i has better fitness than its personal best position [i.e., ObjV(Xi) < ObjV(Pi)], Xi replaces Pi. Similarly, Xi replaces Pg if ObjV(Xi) < ObjV(Pg). A failure counter fc is used to record the number of times the


ATA_current (i, Vi, Xi, Pi, Pg, ObjV(Pi), ObjV(Pg), fc, Tsize,i, improve_flagi, P, ObjV(P))
Input: Particle i's old velocity (Vi), position (Xi), and personal best position (Pi); old global best position (Pg); old ObjV of particle i's personal best position [ObjV(Pi)]; old ObjV of the global best particle [ObjV(Pg)]; failure counter (fc); particle i's index; tournament size of particle i (Tsize,i); improve_flagi; personal best positions (P = [P1, P2, …, PS]) and the corresponding ObjV values (ObjV(P) = [ObjV(P1), ObjV(P2), …, ObjV(PS)]) of all particles.
1: if improve_flagi = 'no' then
2:     exemplar_index(i) = Exemplar_index_selection(P, ObjV(P), Pg, particle i, Tsize,i);
3: end if
4: Generate Pe,i for particle i according to exemplar_index(i);
5: Update Vi and Xi of particle i using Equations (4.9) and (2.2), respectively;
6: Perform fitness evaluation on the updated Xi of particle i;
7: if ObjV(Xi) < ObjV(Pi) then
8:     Pi = Xi; ObjV(Pi) = ObjV(Xi);
9:     Update improve_flagi = 'yes';
10:    if ObjV(Xi) < ObjV(Pg) then
11:        Pg = Xi; ObjV(Pg) = ObjV(Xi);
12:        fc = 0;
13:    else
14:        fc = fc + 1;
15:    end if
16: else
17:    Update improve_flagi = 'no';
18:    fc = fc + 1;
19: end if
Output: Updated Vi, Xi, Pi, Pg, ObjV(Pi), ObjV(Pg), fc, improve_flagi, exemplar_index(i), Pe,i

Figure 4.2: Overall framework of the ATAcurrent module adopted in the current swarm evolution of ATLPSO-ELS.

fitness of the Pg particle fails to be improved. If the fitness of Pg is successfully improved, the fc counter is reset to zero; otherwise, the counter value is increased by one. The fc counter is used to check whether the condition for the Pg particle to perform its unique learning strategies through the ELS module is met. Figure 4.2 illustrates the complete procedure of the ATAcurrent module employed in the current swarm evolution of ATLPSO-ELS. To prevent frequent changes of guidance direction, Pe,i is maintained as the exemplar of particle i until Pe,i fails to improve the Pi position of particle i. To achieve this objective, an improve_flagi is introduced for each particle i to record the success of Pe,i in guiding particle i's search direction. Specifically, when the current exemplar Pe,i is no longer able to guide particle i in the


current swarm toward a better solution, the improve_flagi is set to 'no' and a new exemplar index is then obtained to generate a new Pe,i, which could offer a new search direction. It is noteworthy that the search mechanism proposed in the ATAcurrent module shares some similarities with those of CLPSO (Liang et al., 2006), FLPSO-QIW (Tang et al., 2011), and DNLPSO (Nasir et al., 2012), given that these PSO variants also use exemplars derived from non-global best solutions to guide the search. Nevertheless, it must be emphasized that the working mechanisms used in the ATAcurrent module to obtain the exemplar differ from those of these PSO variants. For instance, the number of candidates used by CLPSO, FLPSO-QIW, and DNLPSO to generate the exemplar is always set to two for all particles, without considering each particle's diversity and fitness. Compared with these PSO variants, the proposed ATAcurrent module is equipped with a sequence of systematic procedures to determine the Tsize,i of each particle during the derivation of the exemplar. Therefore, the latter is anticipated to be more efficient in tuning the particles' exploration/exploitation strengths than the former.
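The tournament-size rule of Equation (4.8) and the per-dimension selection of Figure 4.1 can be sketched together as follows. This is an illustrative sketch only: the function names are mine, and the default Z = 0.5 is an assumed example value, not a setting from the thesis.

```python
import math
import random

def tournament_size(pfsd_i, pfsd_max, S, Z=0.5):
    """Adaptive tournament size of Eq. (4.8); Z (0 < Z <= 1) is illustrative."""
    t_max = Z * S                                   # T_size,max = Z x S
    return max(2, math.ceil((pfsd_i / pfsd_max) * t_max))

def exemplar_index_selection(objv, g_index, t_size, D, rng=random):
    """Per-dimension tournament selection of exemplar indices (cf. Figure 4.1).

    objv: fitness of all personal bests; g_index: index of the global best
    particle, which is excluded from the candidate pool.
    """
    candidates = [k for k in range(len(objv)) if k != g_index]
    exemplar_index = []
    for _ in range(D):
        pool = rng.sample(candidates, min(t_size, len(candidates)))
        exemplar_index.append(min(pool, key=lambda k: objv[k]))  # fittest candidate wins
    return exemplar_index
```

A particle with PFSD_i = 0 always gets the minimum tournament size of 2 (explorative), while the particle with PFSD_i = PFSD_max draws from the largest pool (exploitative), which is exactly the linear scaling Equation (4.8) prescribes.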

4.2.5 Memory Swarm Evolution

In this subsection, the second learning phase of the ATLPSO-ELS, i.e., the memory swarm evolution, is described. Unlike the peer-learning phase of TPLPSO, the memory swarm evolution of ATLPSO-ELS is designed to evolve the memory swarm members (i.e., the particles' self-cognitive and social experiences) instead of the current swarm members. An ATAmemory module is proposed in this research work to perform the memory swarm evolution of ATLPSO-ELS. Specifically, the ATAmemory module uses the previously described PSD and PFSD metrics to adaptively allocate different search tasks to the memory swarm members, ensuring that the newly updated cognitive and social experiences either explore new regions of the search space (i.e., exploration) or rapidly fine-tune the regions of the already-found optima (i.e., exploitation). In the following subsections, the mechanism of the ATAmemory module is described in detail.


4.2.5(a) Adaptive Task Allocation in Memory Swarm Evolution

In this research work, a heuristic rule is employed by the ATAmemory module to adaptively divide the memory swarm of ATLPSO-ELS into the exploration and exploitation sections based on the PSD metric. Specifically, the exploitation search is favorable for a particle i with higher spatial diversity (i.e., higher PSDi) to refine its solution. On the other hand, a particle i with lower spatial diversity (i.e., lower PSDi) tends to emphasize the exploration search in order to prevent premature convergence. Based on this heuristic rule, a learning probability Pc,i is subsequently proposed in the ATAmemory module. This learning probability, computed by Equation (4.10), represents the likelihood of particle i engaging in an exploitation search during the memory swarm evolution:

P_{c,i} = \frac{PSD_i}{PSD_{max}} (K_2 - K_1) + K_1    (4.10)

where PSD_max represents the maximum PSD; K1 and K2 are parameters ranging from 0 to 1 that determine the minimum and maximum probabilities, respectively, of a particle performing the exploitation search in the memory swarm. Equation (4.10) reveals that a particle with higher PSDi has a higher chance of performing the exploitation search because of its higher Pc,i value. Conversely, a particle with a smaller Pc,i value, which is associated with lower PSDi, has a higher likelihood of being assigned to the exploration section. Once the learning probability Pc,i of each particle i is obtained, the proposed ATAmemory module utilizes this value to perform the adaptive task allocation in the memory swarm evolution. This is achieved by generating a random number Rand and comparing it with the Pc,i value of particle i. If Rand is smaller than Pc,i, particle i is assigned to the exploitation section, and vice versa, as shown below:

task_i = \begin{cases} \text{exploitation}, & Rand < P_{c,i} \\ \text{exploration}, & \text{otherwise} \end{cases}    (4.11)
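The allocation rule of Equations (4.10) and (4.11) can be sketched in two short functions. The sketch below is illustrative: the default values K1 = 0.1 and K2 = 0.9 are assumed for demonstration and are not settings taken from the thesis.

```python
import random

def learning_probability(psd_i, psd_max, K1=0.1, K2=0.9):
    """Exploitation probability P_c,i of Eq. (4.10); K1/K2 defaults are illustrative."""
    return (psd_i / psd_max) * (K2 - K1) + K1

def allocate_task(psd_i, psd_max, K1=0.1, K2=0.9, rng=random):
    """Eq. (4.11): exploit when Rand < P_c,i, otherwise explore."""
    p_c = learning_probability(psd_i, psd_max, K1, K2)
    return "exploitation" if rng.random() < p_c else "exploration"
```

With these defaults, the most outlying memory swarm member (PSD_i = PSD_max) exploits with probability K2 = 0.9, while a member at the centroid (PSD_i = 0) exploits with probability only K1 = 0.1 and therefore usually explores.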

4.2.5(b) Exploitation Section in Memory Swarm Evolution

During the memory swarm evolution, the memory swarm members engaged in the exploitation search are encouraged to fine-tune the already-found optimal regions of the search space. Considering that the global best particle Pg is the best-so-far solution found by the algorithm, it can be deduced that the regions around the Pg particle are the most promising compared with the other regions of the search space. In light of this fact, each exploitation particle i in the memory swarm adjusts its self-cognitive experience Pi based on the social experience Pg as follows:

P_i^{temp} = P_i + c r_2 (P_g - P_i)    (4.12)

where P_i^{temp} is the adjusted self-cognitive experience of particle i and r2 is a random number ranging from 0 to 1. The ObjV value of P_i^{temp} [i.e., ObjV(P_i^{temp})] is then evaluated and compared with the ObjV value of the unadjusted Pi [i.e., ObjV(Pi)]. If ObjV(P_i^{temp}) is smaller than ObjV(Pi), particle i has successfully found a more promising self-cognitive experience and the adjusted P_i^{temp} replaces the original Pi. On the other hand, the original Pi remains unchanged if the fitness of the adjusted P_i^{temp} is not better than that of Pi [i.e., ObjV(P_i^{temp}) ≥ ObjV(Pi)].

4.2.5(c) Exploration Section in Memory Swarm Evolution

As explained earlier, the particles assigned to the exploration section of the memory swarm evolution aim to wander around the unvisited regions of the search space and to investigate whether these regions contain more promising solutions than the already-found optimal regions. Considering that the main purpose of this section is to explore new regions of the search space rather than to exploit the existing ones, the global best particle is not considered a feasible candidate to guide the search of the memory swarm members assigned to the exploration section. On the other hand, previous studies reveal that the employment of non-global best particles as the particles' exemplars offers more flexibility to the search


trajectory, and this strategy is crucial in preventing swarm stagnation (Mendes et al., 2004, Liang et al., 2006, Tang et al., 2011, Nasir et al., 2012). Motivated by the aforementioned facts, a new learning strategy, inspired by the FIPS (Mendes et al., 2004), is proposed to evolve the memory swarm members engaged in the exploration section. Similar to the FIPS, the movements of these exploration particles are influenced by their respective neighborhood members. However, unlike the FIPS, which takes the influences of all neighborhood members into account, the proposed learning strategy only considers the information provided by some selected neighborhood members. As shown in Equation (4.13), the proposed method randomly selects Ni members from the memory swarm (i.e., P_i^m = [P_{i,1}^m, P_{i,2}^m, …, P_{i,N_i}^m]), excluding the exploration particle i itself, in order to update its self-cognitive experience Pi:

P_i^{temp} = P_i + \sum_{k=1}^{N_i} c_k r_k (P_{i,k}^m - P_i)    (4.13)

where P_{i,k}^m is the personal best position of the k-th randomly selected P_i^m member; Ni is the number of selected P_i^m members required by particle i to guide the search, set to Tsize,i to ensure that each exploration particle has a varying exploration strength; rk is a random number in the range of [0, 1]; and ck is the acceleration coefficient that is equally distributed among the Ni randomly selected particles and calculated as (Mendes et al., 2004):

c_k = \frac{c_{all}}{N_i}, \quad \text{where } c_{all} = 4.1    (4.14)

Once the adjusted self-cognitive experience P_i^{temp} of particle i is obtained, the corresponding ObjV value [i.e., ObjV(P_i^{temp})] is evaluated and compared with ObjV(Pi). If the adjusted P_i^{temp} has better fitness than the original Pi, i.e., ObjV(P_i^{temp}) < ObjV(Pi), the former replaces the latter. Otherwise, the adjusted P_i^{temp} is discarded.
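The two memory swarm moves of Equations (4.12) to (4.14) can be sketched as plain list arithmetic. This is an illustrative sketch: the function names are mine, and one random number r_k is drawn per selected member (the text does not specify whether r_k is resampled per dimension, so that choice is an assumption here).

```python
import random

def exploit_update(P_i, P_g, c=2.0, rng=random):
    """Exploitation move of Eq. (4.12): pull the personal best toward P_g."""
    r2 = rng.random()
    return [p + c * r2 * (g - p) for p, g in zip(P_i, P_g)]

def explore_update(P_i, neighbours, c_all=4.1, rng=random):
    """Exploration move of Eqs. (4.13)-(4.14): a FIPS-style pull toward N_i
    randomly chosen memory swarm members (already selected into `neighbours`)."""
    N_i = len(neighbours)
    c_k = c_all / N_i                      # Eq. (4.14): coefficient shared equally
    P_temp = list(P_i)
    for P_m in neighbours:                 # one stochastic pull per selected member
        r_k = rng.random()                 # assumption: r_k sampled once per member
        for d in range(len(P_temp)):
            P_temp[d] += c_k * r_k * (P_m[d] - P_i[d])
    return P_temp
```

In both moves the candidate P_i^{temp} is only a proposal; as the text describes, it replaces P_i only after a fitness comparison, which a caller would perform separately.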

4.2.5(d) Complete Framework of ATAmemory Module

The overall framework of the proposed ATAmemory module is depicted in Figure 4.3. Similar to the learning strategy introduced in the ATAcurrent module [as shown in Equation (4.9)],

ATA_memory (i, Pi, Pg, ObjV(Pi), ObjV(Pg), fc, Tsize,i, improve_flagi, taski, P, Rmax, Rmin, Pim)
Input: Particle i's old personal best position (Pi); old global best position (Pg); old ObjV of particle i's personal best position [ObjV(Pi)]; old ObjV of the global best particle [ObjV(Pg)]; failure counter (fc); particle i's index; tournament size of particle i (Tsize,i); improve_flagi; the latest search task assigned to particle i (taski); personal best positions (P = [P1, P2, …, PS]) of all particles; minimum perturbation range (Rmin); maximum perturbation range (Rmax); old exemplar array Pim = [Pi,1m, Pi,2m, …, Pi,Nim]; number of fitness evaluations (fes)
1: if Pi = Pg then
2:     Calculate Pi temp of particle i using SPLS (Pi, ObjV(Pi), Rmax, Rmin, fes);
3: else if taski = exploitation then
4:     Calculate Pi temp of particle i using Equation (4.12);
5: else /* taski = exploration */
6:     if improve_flagi = 'no' then
7:         Randomly select Ni = Tsize,i memory swarm members, excluding Pi itself, to form the exemplar array Pim = [Pi,1m, Pi,2m, …, Pi,Nim];
8:     end if
9:     Calculate Pi temp of particle i using Equation (4.13);
10: end if
11: Perform fitness evaluation on the adjusted self-cognitive experience Pi temp of particle i;
12: if ObjV(Pi temp) < ObjV(Pi) then
13:     Pi = Pi temp; ObjV(Pi) = ObjV(Pi temp);
14:     Update improve_flagi = 'yes';
15:     if ObjV(Pi temp) < ObjV(Pg) then
16:         Pg = Pi temp; ObjV(Pg) = ObjV(Pi temp);
17:         fc = 0;
18:     else
19:         fc = fc + 1;
20:     end if
21: else
22:     Update improve_flagi = 'no';
23:     fc = fc + 1;
24: end if
Output: Updated Pi, Pg, ObjV(Pi), ObjV(Pg), fc, improve_flagi, taski, Pim

Figure 4.3: Overall framework of the ATAmemory module adopted in the memory swarm evolution of ATLPSO-ELS.

the Ni randomly selected Pim members used to guide the exploration particle i [as shown in Equation (4.13)] are maintained as long as they are able to improve the ObjV of the self-cognitive experience of particle i [i.e., ObjV(Pi)]. When the current Pim members no longer improve ObjV(Pi), the Boolean flag assigned to the particle (i.e., improve_flagi) is set to 'no'. A new set of Pim members is then randomly selected to provide a new search direction for particle i. Furthermore, to prevent a nullified effect during the evolution of the Pg


ELS_Module (Pgold, ObjV(Pgold), Pi, ObjV(Pi), ObjV(Piold), fes, fc, Rmin, Rmax)
Input: Previous global best position (Pgold) and the corresponding ObjV [ObjV(Pgold)]; particle i's improved personal best position (Pi) and the corresponding ObjV [ObjV(Pi)]; the ObjV of particle i's previous personal best position Piold [ObjV(Piold)]; number of fitness evaluations consumed (fes); failure counter (fc); minimum perturbation range (Rmin); maximum perturbation range (Rmax)
1: if ObjV(Pi) < ObjV(Piold) and Pgold ≠ Pi then
2:     Perform OEDLS (Pgold, ObjV(Pgold), Pi, fes);
3: end if
4: if fc > m then
5:     Perform SPLS (Pgold, ObjV(Pgold), Rmax, Rmin, fes);
6:     fes = fes + 1;
7: end if
Output: Updated global best position (Pg) and the corresponding ObjV [ObjV(Pg)]; updated number of fitness evaluations (fes)

Figure 4.4: ELS module in ATLPSO-ELS.

particle in the ATAmemory module, the Pg particle is evolved through SPLS, which will be explained in the following subsection.

4.2.6 Elitist Learning Strategy Module

As mentioned in the earlier subsection, an elitist learning strategy (ELS) module is proposed to evolve the global best particle Pg, considering that Pg is the most important particle for guiding the swarm to seek the optimum solution during the search process. As illustrated in the overall framework of the ELS module in Figure 4.4, two learning strategies are introduced in the ELS module to specifically evolve the Pg particle when the predefined conditions are met. These learning strategies are the orthogonal experiment design (OED)-based learning strategy (OEDLS) and the stochastic perturbation-based learning strategy (SPLS). The mechanisms of the OEDLS and SPLS are presented in the following subsections.

4.2.6(a) Orthogonal Experiment Design-Based Learning Strategy

Earlier studies by van den Bergh and Engelbrecht (2004) reveal that different improved particles may have good values in different dimensions of their self-cognitive experience (i.e., the personal best experience Pi). Thus, it is worth investigating which dimensions of the newly updated Pi contain useful information to improve the existing

global best particle Pg. Meanwhile, Section 2.2.2(d) reveals the excellent prediction capability of the OED technique in discovering the best experimental combinations. This suggests the feasibility of the OED technique in identifying which dimensional components of the updated Pi position contain useful information for the existing Pg. Motivated by these facts, an OED-based learning strategy (OEDLS) is introduced in this subsection as one of the learning strategies dedicated to the existing global best particle. The OEDLS has an important function in realizing the bidirectional learning mechanism between the existing global best particle and the particles with improved self-cognitive experiences. Specifically, this module allows the latter to transfer their useful information to the former and thereby further improve the former's fitness. As illustrated in Figure 4.4, the OEDLS is only triggered when (1) particle i successfully improves its self-cognitive experience (i.e., personal best position Pi) and (2) the improved Pi position of particle i differs from the global best particle Pg. The working mechanism of OEDLS is described as follows. In OEDLS, each d-th dimensional component of a position vector is assigned as a factor; thus, a D-dimensional optimization problem can be considered to have a total of D experimental factors. For each factor (dimension), the existing global best particle (i.e., Pg) and the improved personal best position of particle i (i.e., Pi) contribute two factor levels (i.e., Q = 2), which are assigned as levels 1 and 2, respectively. When OEDLS is triggered, an LM(2D) OA is constructed according to the procedures described in Figure 2.9. Subsequently, the M combinations of candidate solutions, i.e., Xj (1 ≤ j ≤ M), are generated based on the contents of the LM(2D) OA and the Pg and Pi positions. Specifically, for the d-th dimension, Pg,d is selected when the level in the OA is 1, whereas Pi,d is selected for level 2. The ObjV of each combination Xj is then evaluated and recorded as ObjVj (1 ≤ j ≤ M). To identify the most significant level for a particular dimension, the main effect value Sdl for each l-th level and d-th factor is computed via Equation (2.10), and the FA is then performed to derive the predictive solution Xp. The ObjV of Xp, i.e., ObjV(Xp), is


OEDLS_Module (Pgold, ObjV(Pgold), Pi, fes)
Input: Previous global best position (Pgold) and the corresponding ObjV [ObjV(Pgold)]; particle i's improved personal best position (Pi); number of fitness evaluations consumed (fes)
1: Construct an LM(2D) OA based on Pgold and the newly updated Pi by referring to Figure 2.9;
2: Generate M combinations of candidates, Xj (1 ≤ j ≤ M), from the LM(2D) OA, Pgold, and Pi;
3: for each combination Xj do
4:     Evaluate the objective function value (ObjV) of Xj and record it as ObjVj;
5:     fes = fes + 1;
6: end for
7: for each dimension d do
8:     for each level l do
9:         Calculate Sdl for every l-th level and d-th factor via Equation (2.10);
10:     end for
11: end for
12: Perform FA to construct the predictive solution Xp based on all the effect values Sdl;
13: Evaluate the ObjV of the predictive solution Xp to obtain ObjV(Xp);
14: fes = fes + 1;
15: if ObjV(Xp) < ObjV(Pgold) then
16:     Pg = Xp; ObjV(Pg) = ObjV(Xp);
17: else
18:     Pg = Pgold; ObjV(Pg) = ObjV(Pgold);
19: end if
Output: Updated global best position (Pg) and the corresponding ObjV [i.e., ObjV(Pg)]; updated fes

Figure 4.5: OEDLS in the ELS module.

then compared with the ObjV of the existing global best particle, i.e., ObjV(Pg). If ObjV(Xp) is smaller than ObjV(Pg), the OEDLS has successfully improved the existing global best particle by producing a superior Xp solution. The fitter Xp solution then replaces the existing Pg and becomes the new global best particle of the ATLPSO-ELS population. The procedure of OEDLS is presented in Figure 4.5. A case study is performed in Appendix B to investigate the capability of the proposed OEDLS. As illustrated in Table B1, the OEDLS successfully extracts useful information from the improved self-cognitive experience of particle i and utilizes it to further improve the fitness of the existing global best particle. These observations reveal that a bidirectional learning mechanism indeed exists between the improved Pi position and the existing Pg particle.
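As a concrete illustration of the steps above, the sketch below builds a two-level orthogonal array via the standard linear (parity) construction, mixes Pg (level 1) and the improved Pi (level 2) dimension by dimension, computes the main effects Sdl, and keeps the predictive solution Xp only if it beats Pg. Function names are illustrative, minimisation is assumed, and the parity construction is one common choice rather than necessarily the procedure of Figure 2.9:

```python
import math

def two_level_oa(D):
    """Build an LM(2^D) orthogonal array with levels {1, 2} via the linear
    (parity) construction; M = 2**ceil(log2(D + 1))."""
    M = 2 ** math.ceil(math.log2(D + 1))
    cols = list(range(1, D + 1))  # first D non-zero column masks
    return [[bin(r & c).count("1") % 2 + 1 for c in cols] for r in range(M)]

def oedls(pg, pi, objv, best_objv):
    """Sketch of OEDLS: combine Pg (level 1) and the improved Pi (level 2)
    dimension-wise via the OA, run factor analysis on the main effects Sdl,
    and test the predictive solution Xp against the current best."""
    D = len(pg)
    oa = two_level_oa(D)
    candidates = [[pg[d] if lvl == 1 else pi[d] for d, lvl in enumerate(row)]
                  for row in oa]
    values = [objv(x) for x in candidates]
    xp = []
    for d in range(D):
        # main effect Sdl: mean ObjV of all combinations using level l in d
        s = {l: sum(v for row, v in zip(oa, values) if row[d] == l)
                / sum(1 for row in oa if row[d] == l) for l in (1, 2)}
        xp.append(pg[d] if s[1] <= s[2] else pi[d])
    objv_xp = objv(xp)  # the prediction is re-evaluated before replacing Pg
    return (xp, objv_xp) if objv_xp < best_objv else (pg, best_objv)
```

On a separable objective the factor analysis recovers the best per-dimension mix exactly; on non-separable landscapes it only predicts it, which is why ObjV(Xp) is re-evaluated before replacing Pg.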


Differences between ATLPSO-ELS and other OED-based PSO variants

Although the OEDLS module is designed based on the OED technique, it differs from the modules of existing OED-based PSO variants in terms of (1) purpose, (2) involved particles, and (3) the condition for executing the module. Contrary to Zhao et al. (2006) and Ko et al. (2007), who used OED for population initialization and parameter selection, respectively, the proposed OEDLS aims to establish a two-way learning mechanism between the particles with improved self-cognitive experience and the existing global best particle. In ATLPSO-ELS, the factor level assigned in the OEDLS is Q = 2 because only the particle with improved self-cognitive experience and the global best particle are involved. This module is different from ODPSO (Yang et al., 2010), OT-PSO (Wang and Chen, 2009), and ODEPSO (Feng et al., 2012), given that these variants randomly select two or more particles into their respective OAs. Moreover, the OED-based operators of these variants perform search space quantization, and their Q values are larger than 2. For OPSO (Ho et al., 2008), the IMM module uses temporary moves H and R to predict the particle's next position and then its velocity. In contrast to the OEDLS of ATLPSO-ELS, the IMM module is executed in every iteration. Finally, ATLPSO-ELS differs from OLPSO (Zhan et al., 2011) in the conditions required to execute the OED-based module. Specifically, OLPSO triggers its orthogonal learning only when a particle's exemplar is no longer effective in guiding the search, whereas the proposed ATLPSO-ELS executes the OEDLS immediately after the self-cognitive experience of a particle is improved. Compared with OLPSO, the OEDLS employed by ATLPSO-ELS is more effective in extracting useful information from non-global best particles and in further improving the fitness of the global best particle.

4.2.6(b) Stochastic Perturbation-Based Learning Strategy

As mentioned earlier, the global best particle Pg of a PSO algorithm may become trapped at a local optimum or an arbitrary point in the search space when its ObjV is

not improved for a long time during the search process. The entrapment of the Pg particle in these inferior regions of the search space might lead to premature convergence, considering that the swarm members in PSO are strongly attracted toward the Pg particle. To prevent the Pg particle from being trapped in local optima or at an arbitrary point in the search space, the stochastic perturbation-based learning strategy (SPLS), which is inherited from the TPLPSO, is employed to perturb the Pg particle if the fitness of this particle is not improved for m successive fitness evaluations (FEs). Please refer to Section 3.2.5 for more details on the implementation of SPLS.
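Since the SPLS details live in Section 3.2.5, the sketch below only illustrates the general idea under stated assumptions: one randomly chosen dimension of Pg is perturbed with a Gaussian step whose range decays linearly from Rmax to Rmin over the run, and the move is kept only if it improves the objective value. The names, bounds, and decay schedule are assumptions, not the thesis's implementation:

```python
import random

def spls(pg, objv_pg, objv, fes, fe_max, rmax=1.0, rmin=0.1,
         lb=-100.0, ub=100.0):
    """Illustrative SPLS sketch: stochastic perturbation of Pg with a
    greedily accepted Gaussian step on one random dimension."""
    r = rmax - (rmax - rmin) * fes / fe_max  # linearly decaying range
    d = random.randrange(len(pg))
    trial = list(pg)
    # perturb dimension d, clamped back into the search bounds
    trial[d] = min(ub, max(lb, trial[d] + random.gauss(0.0, r) * (ub - lb)))
    objv_trial = objv(trial)
    # keep the perturbed Pg only if it improves the objective value
    return (trial, objv_trial) if objv_trial < objv_pg else (pg, objv_pg)
```

Because acceptance is greedy, SPLS can only improve (or leave unchanged) the global best, while the shrinking range shifts it from coarse escapes early on to fine refinement near the end of the run.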

4.2.7 Complete Framework of ATLPSO-ELS

Figure 4.6 illustrates the complete implementation of the proposed ATLPSO-ELS. The population divisions of the current swarm and the memory swarm are not performed frequently, to save computational resources. Specifically, the existing population division in ATLPSO-ELS is maintained until ObjV(Pg) is not improved for S successive FEs. As illustrated in Figure 4.6 (see lines 10 to 13 and 23 to 26), the new exploration and exploitation sections of the swarm are generated by computing new values of PSD, PFSD, Tsize, and Pc when fc > S. All particles in the current swarm and the memory swarm then abandon their previous search tasks and select new ones. Unlike the TPLPSO proposed in the previous chapter, the current swarm evolution and the memory swarm evolution of ATLPSO-ELS are not performed sequentially. Instead, only one type of swarm evolution is performed in every generation of ATLPSO-ELS. The proposed ATLPSO-ELS first initiates the evolution of the current swarm. As long as the current swarm evolution improves the fitness of the Pg particle through at least one of the swarm members, it can be assumed that these current swarm members are on the right track to locate the global optimum and no intervention from the memory swarm evolution is required. When the current swarm evolution no longer improves the Pg particle, the memory swarm evolution of ATLPSO-ELS takes over the search. Similarly, the


ATLPSO-ELS
Input: Population size (S), dimensionality of problem space (D), objective function (F), the initialization domain (RG), problem's accuracy level (ε), maximum number of fitness evaluations (FEmax)
1: Generate initial swarm and initialize fes = 0, fc = 0;
2: Calculate PSD, PFSD, Tsize, and Pc using Equations (4.1) to (4.3), (4.4) to (4.7), (4.8), and (4.9), respectively;
3: while fes < FEmax do
4:     while ObjV(Pg) is improved at least once in the current swarm evolution do
5:         for each current swarm member i do
6:             ObjV(Piold) = ObjV(Pi);
7:             Perform ATA_current (i, Vi, Xi, Pi, Pg, ObjV(Pi), ObjV(Pg), fc, Tsize,i, improve_flagi, P, ObjV(P));
8:             fes = fes + 1;
9:             Check for ELS_Module (Pgold, ObjV(Pgold), Pi, ObjV(Pi), ObjV(Piold), fes, fc, Rmin, Rmax);
10:            if fc > S then /*Update population division*/
11:                Update values of PSD, PFSD, Tsize, and Pc;
12:                Reset fc = 0;
13:            end if
14:        end for
15:    end while
16:    while ObjV(Pg) is improved at least once in the memory swarm evolution do
17:        for each memory swarm member i do
18:            ObjV(Piold) = ObjV(Pi);
19:            Assign the search task of each memory swarm member, i.e., taski, using Equation (4.11);
20:            Perform ATA_memory (i, Pi, Pg, ObjV(Pi), ObjV(Pg), fc, Tsize,i, improve_flagi, taski, P, Rmax, Rmin, Pim);
21:            fes = fes + 1;
22:            Check for ELS_Module (Pgold, ObjV(Pgold), Pi, ObjV(Pi), ObjV(Piold), fes, fc, Rmin, Rmax);
23:            if fc > S then /*Update population division*/
24:                Update values of PSD, PFSD, Tsize, and Pc;
25:                Reset fc = 0;
26:            end if
27:        end for
28:    end while
29: end while
Output: The best found solution, i.e., the global best particle (Pg)

Figure 4.6: Complete framework of the ATLPSO-ELS algorithm.

memory swarm evolution will be replaced by the current swarm evolution if the former fails to achieve any fitness improvement on the Pg particle. Another issue that is worth mentioning is that although the ideas behind the ATAcurrent and ATAmemory modules are similar, i.e., some individuals in the current and memory swarms focus on exploration search and others on exploitation search, these two


population division strategies are designed with different search dynamics. This necessity is driven by the fact that the current and memory swarms exhibit different degrees of clustering tendency during the evolution, as validated by the Hopkins test (Hopkins and Skellam, 1954) and reported by Epitropakis et al. (2012). Accordingly, the H-measure of the memory swarm is higher than that of the current swarm, which implies that the former has a more exploitative behavior, whereas the latter exhibits a more explorative nature. As the current and memory swarms demonstrate different clustering tendencies, using population division strategies with similar search dynamics on these two distinct swarms tends to introduce a conflicting effect on the evolution process. The poor performance of strategies with similar search dynamics was demonstrated in the previous works of Parsopoulos and Vrahatis (2002) and Epitropakis et al. (2012). These findings suggest that using population division strategies with different search dynamics is a plausible way to enhance the performance of ATLPSO-ELS.
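The Hopkins statistic referred to above compares the distances from uniform probe points to the swarm (u) with nearest-neighbour distances within the swarm itself (w), giving H = Σu / (Σu + Σw): values near 0.5 indicate a randomly spread (explorative) swarm and values near 1 a strongly clustered (exploitative) one. The implementation below is a minimal illustration, not the exact protocol of Hopkins and Skellam (1954):

```python
import math
import random

def hopkins(points, m=None, lb=0.0, ub=1.0):
    """Sketch of the Hopkins statistic used to gauge a swarm's clustering
    tendency over the (assumed) box [lb, ub]^dim."""
    n, dim = len(points), len(points[0])
    m = m or max(1, n // 10)
    nearest = lambda q, pts: min(math.dist(q, p) for p in pts)
    # u: distances from m uniform probes to their nearest swarm member
    u = [nearest([random.uniform(lb, ub) for _ in range(dim)], points)
         for _ in range(m)]
    # w: distances from m sampled members to their nearest other member
    sample = random.sample(range(n), m)
    w = [nearest(points[i], [p for j, p in enumerate(points) if j != i])
         for i in sample]
    return sum(u) / (sum(u) + sum(w))
```

For a tightly clustered swarm the within-swarm distances w collapse toward zero, so H approaches 1; for a uniformly scattered swarm u and w are comparable and H hovers around 0.5.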

4.3 Simulation Results and Discussions

In this section, the search performance of the proposed ATLPSO-ELS is evaluated using the benchmark and real-world problems presented in Section 2.4.1. The section is organized as follows. First, the simulation settings of this experimental study are described. Next, the effects of the parameters introduced into the proposed ATLPSO-ELS, i.e., K1, K2, Z, and m, are thoroughly investigated. Finally, to comprehensively evaluate the optimization capability of ATLPSO-ELS, comparative studies between the proposed algorithm and its peers in solving the benchmark and real-world problems are performed.

4.3.1 Experimental Setup

A total of 30 benchmark problems, as described in Table 2.6, are used to extensively investigate the algorithms' search performances. In this research work, six well-established PSO variants are employed for thorough comparison with the proposed ATLPSO-ELS. Both

Table 4.1 Parameter settings of the involved PSO variants

Algorithm                          Population topology      Parameter settings
APSO (Zhan et al., 2009)           Fully-connected          ω: 0.9–0.4; c1 + c2 ∈ [3.0, 4.0]; σ: 1.0–0.1; δ ∈ [0.05, 0.1]
FLPSO-QIW (Tang et al., 2011)      Comprehensive learning   ω1 = 0.9; ω2 = 0.2; č1 = č2 = 1.5; ĉ1 = 2.0; ĉ2 = 1.0; m = 1; Pi ∈ [0.1, 1]; K1 = 0.1; K2 = 0.001; σ1 = 1; σ2 = 0
FPSO (Montes de Oca et al., 2009)  Time-varying             ω: 0.9–0.4; Σci = 4.1
FIPS (Mendes et al., 2004)         Local URing              χ = 0.729; Σci = 4.1
OLPSO-L (Zhan et al., 2011)        Orthogonal learning      ω: 0.9–0.4; c = 2.0; G = 5
OPSO (Ho et al., 2008)             Fully-connected          ω: 0.9–0.4; c = 2.0
ATLPSO-ELS                         Fully-connected          ω: 0.9–0.4; c = 2.0; call = 4.1; K1 = 0.4; K2 = 0.8; Rmax = 1.0; Rmin = 0.1; m = 5; Z = 0.5

of the OLPSO-L and OPSO were selected for the comparison because they are OED-based PSO variants. Meanwhile, FLPSO-QIW and FIPS were chosen because their learning strategies share specific similarities with those employed by ATLPSO-ELS, i.e., they employ non-global best solutions to guide the swarm. In addition, APSO and FPSO are representative PSO variants developed from the parameter adaptation and modified population approaches, respectively, and these two algorithms are also compared with the proposed ATLPSO-ELS. The parameter settings of all employed PSO variants were extracted from their respective literature and are presented in Table 4.1. As shown in Tables A1, A2, and A4 to A6, the performances of APSO, FLPSO-QIW, and OLPSO-L in solving the problems with (1) a higher dimensional search space (i.e., 50-D) and (2) different fitness landscapes (i.e., shifted, complex, and composition problems) remain competitive when the parameter settings recommended by their respective authors in Table 4.1 are used. For the proposed ATLPSO-ELS, the values of ω, c, and call are set according to the recommendations in the previous studies by


Mendes et al. (2004) and Liang et al. (2006). A series of parameter sensitivity analyses is performed in the following subsection to thoroughly investigate the effects of the parameters K1, K2, Z, and m on the optimization capability of the proposed ATLPSO-ELS. In these comparative studies, all PSO variants were independently executed 30 times to alleviate the effect of random discrepancy. Meanwhile, the maximum number of fitness evaluations (FEmax) and the population size (S) of all tested algorithms are set as 3.00E+05 and 30, respectively. The experimental settings of FEmax and S have been justified in Section 3.3.1.
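The reporting protocol, i.e., 30 independent runs per algorithm with the mean error (Emean) and standard deviation (SD) of the final results, can be sketched as follows; `optimizer` is a stand-in for any of the tested PSO variants, not an actual implementation from this chapter:

```python
import random
import statistics

def evaluate_algorithm(optimizer, objv, f_opt, runs=30):
    """Sketch of the reporting protocol: run the optimizer `runs` times
    independently and report the mean (Emean) and standard deviation (SD)
    of the absolute error |F(best found) - F(global optimum)|."""
    errors = [abs(optimizer(objv) - f_opt) for _ in range(runs)]
    return statistics.mean(errors), statistics.stdev(errors)
```

Averaging over 30 independent runs is what "alleviates the effect of random discrepancy": a single run of a stochastic optimizer says little about its typical accuracy.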

4.3.2 Parameter Sensitivity Analysis

As mentioned in the earlier subsections, a total of four new parameters, i.e., K1, K2, Z, and m, were introduced in the proposed ATLPSO-ELS. It is worth investigating the effects of these parameters on the search performance of ATLPSO-ELS and how they are best set. A series of parameter sensitivity analyses is therefore required to find suitable settings of K1, K2, Z, and m that ensure competitive search performance of ATLPSO-ELS in solving the various types of benchmark problems. Nevertheless, the complete evaluation of all possible combinations of K1, K2, Z, and m is impractical and time-consuming, considering that each of these parameters covers a relatively wide range of values. To tackle this issue, the parameter tuning strategy reported by Lam et al. (2012) is employed in this subsection to obtain a parameter combination that offers ATLPSO-ELS promising optimization capability on the tested benchmark problems. Two functions are selected from each problem category for parameter tuning: F1 and F7 for conventional problems, F9 and F13 for rotated problems, F17 and F18 for shifted problems, F25 and F26 for complex problems, and F27 and F29 for composition problems. For each problem category, the parameter combination [K1, K2, Z, m] is first initialized as the mean values of the respective lower and upper boundary values, i.e., [0.5, 0.5, 0.5, 5]. Recall that the values of K1, K2, and Z range from 0 to 1, whereas m ranges from 0 to 10. Based on this initial combination, the parameters are adjusted one at a

time. Specifically, with the other parameters fixed, the value of K1 is first varied from 0 to 1 to obtain the K1 value that gives the best search accuracy. The value of K1 in the combination [K1, K2, Z, m] is updated based on the experimental findings, and this new combination is subsequently used to tune the value of K2. This process is repeated until all of K1, K2, Z, and m are updated. To investigate whether the optimal values of K1, K2, Z, and m vary with the dimensionality of the search space, the simulations are performed on three different dimensions, i.e., 10-D, 30-D, and 50-D. The search accuracies (Emean) of ATLPSO-ELS with different values of K1, K2, Z, and m in 10-D, 30-D, and 50-D are reported in Tables 4.2 to 4.4. The best result for each benchmark is indicated in boldface text. In Tables 4.2 to 4.4, the experimental results of functions F1, F7, F9, and F13 are omitted because the search accuracies of ATLPSO-ELS in solving these functions are insensitive to the parameters K1, K2, Z, and m. ATLPSO-ELS successfully finds the global optima of these functions (i.e., Emean = 0) regardless of the values of K1, K2, Z, and m. Conversely, Tables 4.2 to 4.4 reveal that the search accuracies of ATLPSO-ELS in solving the shifted, complex, and rotated problems change along with the parameters K1, K2, Z, and m. Generally, the Emean values obtained by ATLPSO-ELS in solving these three problem categories are inferior when the values of K1, K2, Z, and m are set at their respective lower or upper boundaries. This observation can be justified by the fact that, in the extreme case of K1 = 0 and K2 = 1, particle i with the lowest diversity (i.e., PSDi = 0) has probabilities of 0 (Pc,i) and 1 (1 – Pc,i) of performing the exploitation and exploration searches, respectively. By contrast, the particle with the highest diversity PSDmax has probabilities of 1 (Pc,i) and 0 (1 – Pc,i) of engaging in the exploitation and exploration sections, respectively. The opposite occurs in the other extreme scenario of K1 = 1 and K2 = 0. Both scenarios inevitably limit the search flexibility of the particles with the lowest and highest diversity values, as they are allowed to perform only one type of search task. Meanwhile, the lower and upper boundary values of Z produce Tsize,max = 2 and S, respectively. In these cases, the members of the current swarm tend to be overbiased toward the exploration or

Table 4.2 Parameter tunings of K1, K2, Z, and m for functions F17, F18, F25, F26, F27, and F29 in 10-D

K1     F17        F18        F25        F26        F27        F29
0      3.99E+00   5.68E-14   1.41E+05   5.65E-01   3.02E+02   1.05E+03
0.1    5.63E-02   5.68E-14   7.85E+04   5.44E-01   3.73E+02   9.13E+02
0.2    2.90E-02   1.14E-13   5.70E+04   4.87E-01   2.51E+02   9.65E+02
0.3    1.18E-02   5.68E-14   4.72E+04   4.60E-01   2.47E+02   9.41E+02
0.4    2.61E-05   1.89E-14   3.59E+04   2.91E-01   2.20E+02   9.33E+02
0.5    1.26E-02   1.14E-13   4.75E+04   4.27E-01   2.78E+02   9.29E+02
0.6    4.74E-02   5.68E-14   5.14E+04   5.78E-01   2.61E+02   9.56E+02
0.7    6.29E-02   5.68E-14   7.31E+04   7.52E-01   2.99E+02   9.81E+02
0.8    1.51E-02   1.14E-13   1.09E+05   6.82E-01   3.23E+02   9.91E+02
0.9    6.12E-01   1.14E-13   1.61E+05   1.04E+00   3.59E+02   1.14E+03
1      3.99E+00   5.68E-14   2.37E+05   9.48E-01   3.95E+02   1.41E+03

K2     F17        F18        F25        F26        F27        F29
0      2.53E-02   9.47E-14   1.82E+05   5.41E-01   3.21E+02   1.00E+03
0.1    8.80E-03   9.47E-14   9.66E+04   5.09E-01   3.27E+02   9.89E+02
0.2    9.09E-04   7.58E-14   8.44E+04   4.14E-01   3.01E+02   9.65E+02
0.3    2.13E-05   5.68E-14   7.87E+04   3.24E-01   2.87E+02   9.61E+02
0.4    5.42E-05   5.68E-14   5.92E+04   3.16E-01   2.58E+02   9.56E+02
0.5    2.61E-05   1.89E-14   3.59E+04   2.91E-01   2.20E+02   9.29E+02
0.6    2.08E-05   5.68E-14   3.45E+04   2.27E-01   2.39E+02   9.37E+02
0.7    7.94E-06   3.68E-14   2.92E+04   9.93E-02   2.29E+02   9.09E+02
0.8    2.03E-08   4.73E-15   1.77E+04   7.90E-03   2.15E+02   9.00E+02
0.9    8.85E-04   1.89E-14   2.64E+04   9.31E-02   2.11E+02   9.03E+02
1      3.42E-03   3.68E-14   3.65E+04   1.23E-01   2.22E+02   9.11E+02

Z      F17        F18        F25        F26        F27        F29
0      4.03E+00   7.58E-14   9.49E+04   2.76E-01   2.78E+02   9.577E+02
0.1    1.69E-01   5.68E-14   8.70E+04   1.21E-01   2.54E+02   9.20E+02
0.2    5.67E-01   7.58E-14   6.76E+04   1.40E-01   2.56E+02   9.43E+02
0.3    3.77E-03   3.79E-14   4.82E+04   6.63E-02   2.43E+02   9.17E+02
0.4    8.75E-05   3.79E-14   2.57E+04   4.96E-02   2.09E+02   9.06E+02
0.5    2.03E-08   4.73E-15   1.77E+04   7.90E-03   2.11E+02   9.00E+02
0.6    4.89E-05   3.79E-14   2.81E+04   1.98E-02   2.26E+02   9.10E+02
0.7    9.60E-04   3.79E-14   3.02E+04   6.97E-02   2.57E+02   9.47E+02
0.8    1.58E-04   5.68E-14   6.88E+04   1.40E-01   2.36E+02   9.36E+02
0.9    1.24E-04   5.68E-14   8.28E+04   2.86E-01   2.75E+02   9.31E+02
1      3.99E+00   5.68E-14   8.50E+04   3.42E-01   2.98E+02   9.42E+02

m      F17        F18        F25        F26        F27        F29
0      8.41E-02   5.68E-14   1.67E+05   5.79E-01   2.29E+02   9.43E+02
1      1.88E-03   3.79E-14   1.03E+05   3.99E-01   2.31E+02   9.29E+02
2      4.12E-03   7.58E-14   7.05E+04   3.67E-01   2.23E+02   9.14E+02
3      6.88E-06   9.47E-14   4.83E+04   6.95E-02   2.17E+02   9.04E+02
4      4.60E-05   5.68E-14   2.73E+04   5.82E-02   2.19E+02   9.05E+02
5      2.03E-08   4.73E-15   1.77E+04   7.90E-03   2.09E+02   9.00E+02
6      2.08E-06   7.58E-14   2.98E+04   4.97E-02   2.25E+02   9.08E+02
7      1.02E-03   3.79E-14   3.82E+04   8.90E-02   2.31E+02   9.17E+02
8      4.90E-03   3.79E-14   6.71E+04   1.65E-01   2.29E+02   9.24E+02
9      7.36E-02   5.68E-14   7.06E+04   5.71E-01   2.38E+02   9.39E+02
10     2.47E-01   5.68E-14   9.36E+04   7.87E-01   2.40E+02   9.27E+02

exploitation searches, given that the produced Tsize,i values are either too small or too large. Such extreme parameter settings tend to jeopardize the capability of the ATAcurrent module in regulating the particles' exploration/exploitation strengths.
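Equation (4.8) itself is not reproduced in this excerpt, but the boundary cases discussed above are consistent with a simple linear map of a particle's diversity onto [K1, K2], which the following sketch assumes:

```python
def exploitation_probability(psd, psd_max, k1=0.4, k2=0.8):
    """Assumed linear form of the Pc,i mapping: a particle's diversity
    PSD_i is mapped onto [K1, K2]. It reproduces the boundary cases quoted
    in the text, e.g. K1 = 0 and K2 = 1 give Pc,i = 0 at PSD_i = 0 and
    Pc,i = 1 at PSD_i = PSD_max."""
    return k1 + (k2 - k1) * psd / psd_max
```

With the tuned setting K1 = 0.4 and K2 = 0.8, even the least and most diverse particles keep non-trivial probabilities of both search tasks, which is exactly the flexibility argument made above.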


Table 4.3 Parameter tunings of K1, K2, Z, and m for functions F17, F18, F25, F26, F27, and F29 in 30-D

K1     F17        F18        F25        F26        F27        F29
0      6.34E+00   1.89E-13   3.94E+06   2.56E+00   3.76E+00   1.16E+03
0.1    5.96E+00   1.89E-13   3.23E+06   2.21E+00   3.33E+02   9.99E+02
0.2    5.60E+00   1.71E-13   2.84E+06   1.90E+00   3.43E+02   9.75E+02
0.3    5.77E+00   1.71E-13   2.59E+06   1.73E+00   3.17E+02   9.38E+02
0.4    5.56E+00   1.35E-13   2.25E+06   1.57E+00   3.05E+02   9.25E+02
0.5    5.87E+00   1.89E-13   2.22E+06   1.87E+00   3.09E+02   9.46E+02
0.6    5.98E+00   1.71E-13   2.87E+06   2.36E+00   3.25E+02   9.53E+02
0.7    6.47E+00   1.71E-13   3.47E+06   2.76E+00   3.42E+02   9.89E+02
0.8    6.34E+00   1.89E-13   3.93E+06   2.59E+00   3.68E+02   9.74E+02
0.9    6.90E+00   1.93E-13   3.87E+06   2.65E+00   3.82E+02   1.01E+03
1      7.01E+00   1.93E-13   4.10E+06   2.93E+00   4.07E+02   1.25E+03

K2     F17        F18        F25        F26        F27        F29
0      5.75E+00   1.93E-13   2.59E+06   2.35E+00   3.27E+02   9.92E+02
0.1    5.83E+00   1.71E-13   2.38E+06   2.12E+00   3.31E+02   9.79E+02
0.2    5.62E+00   1.89E-13   2.29E+06   1.93E+00   3.19E+02   9.51E+02
0.3    5.47E+00   1.89E-13   2.10E+06   1.86E+00   3.15E+02   9.47E+02
0.4    5.34E+00   1.71E-13   2.04E+06   1.45E+00   3.21E+02   9.24E+02
0.5    5.56E+00   1.35E-13   2.22E+06   1.57E+00   3.05E+02   9.25E+02
0.6    5.10E+00   1.35E-13   1.94E+06   1.26E+00   2.94E+02   9.38E+02
0.7    4.87E+00   1.71E-13   1.67E+06   1.01E+00   2.65E+02   9.16E+02
0.8    3.62E+00   1.21E-13   1.47E+06   9.04E-01   2.28E+02   9.02E+02
0.9    3.79E+00   1.35E-13   1.45E+06   9.57E-01   2.37E+02   9.21E+02
1      3.69E+00   1.93E-13   1.53E+06   1.17E+00   2.56E+02   9.34E+02

Z      F17        F18        F25        F26        F27        F29
0      4.71E+00   1.89E-13   2.04E+06   1.32E+00   2.68E+02   9.41E+02
0.1    4.32E+00   1.35E-13   1.83E+06   1.18E+00   2.59E+02   9.68E+02
0.2    4.01E+00   1.89E-13   1.59E+06   9.98E-01   2.73E+02   9.36E+02
0.3    3.83E+00   1.21E-13   1.64E+06   9.67E-01   2.57E+02   9.13E+02
0.4    3.69E+02   1.35E-13   1.36E+06   9.43E-01   2.37E+02   9.29E+02
0.5    3.62E+00   1.21E-13   1.45E+06   9.04E-01   2.28E+02   9.02E+02
0.6    3.65E+00   1.13E-13   1.76E+06   9.52E-01   2.41E+02   9.17E+02
0.7    3.73E+00   1.35E-13   1.89E+06   9.84E-01   2.37E+02   9.31E+02
0.8    3.98E+00   1.71E-13   1.85E+06   1.05E+00   2.53E+02   9.35E+02
0.9    3.96E+00   1.21E-13   1.96E+06   1.14E+00   2.61E+02   9.42E+02
1      4.15E+00   1.71E-13   1.90E+06   1.10E+00   2.87E+02   9.55E+02

m      F17        F18        F25        F26        F27        F29
0      4.02E+00   1.89E-13   2.03E+06   1.13E+00   3.04E+02   9.68E+02
1      3.80E+00   1.71E-13   1.69E+06   9.99E-01   2.89E+02   9.39E+02
2      3.71E+00   1.35E-13   1.47E+06   9.87E-01   2.59E+02   9.31E+02
3      3.67E+00   1.35E-13   1.56E+06   9.59E-01   2.65E+02   9.13E+02
4      3.64E+00   1.21E-13   1.38E+06   9.23E-01   2.31E+02   9.05E+02
5      3.62E+00   1.13E-13   1.36E+06   9.04E-01   2.28E+02   9.02E+02
6      3.67E+00   1.35E-13   1.29E+06   9.07E-01   2.36E+02   9.10E+02
7      3.65E+00   1.35E-13   1.41E+06   9.24E-01   2.43E+02   9.39E+02
8      3.79E+00   1.21E-13   1.50E+06   9.43E-01   2.56E+02   9.27E+02
9      3.69E+00   1.21E-13   1.57E+06   9.35E+01   2.76E+02   9.36E+02
10     3.77E+00   1.35E-13   1.63E+06   9.68E-01   2.83E+02   9.53E+02

Tables 4.2 to 4.4 reveal that ATLPSO-ELS achieves the best Emean values in most tested functions when the values of parameters K1, K2, Z, and m are set as 0.4, 0.8, 0.5, and 5, respectively. The parameter settings of K1 = 0.4 and K2 = 0.8 allow the Pc,i of each particle i to vary from 0.4 to 0.8 according to its diversity value (0 < PSD < PSDmax). These settings could provide more flexibility to the particles with the lowest and highest diversities in


Table 4.4 Parameter tunings of K1, K2, Z, and m for functions F17, F18, F25, F26, F27, and F29 in 50-D

K1     F17        F18        F25        F26        F27        F29
0      4.62E-01   1.11E-12   4.26E+06   3.05E+00   3.42E+02   1.64E+03
0.1    3.89E-01   7.25E-13   3.78E+06   2.86E+00   3.16E+02   1.42E+03
0.2    3.53E-01   7.25E-13   3.32E+06   2.43E+00   2.89E+02   1.28E+03
0.3    3.76E-01   5.58E-13   3.08E+06   2.10E+00   2.54E+02   1.03E+03
0.4    3.42E-01   3.41E-13   2.87E+06   1.41E+00   2.61E+02   9.84E+02
0.5    3.62E-01   5.58E-13   3.46E+06   1.56E+00   2.29E+02   9.99E+02
0.6    3.84E-01   5.58E-13   3.21E+06   2.05E+00   2.02E+02   1.17E+03
0.7    4.09E-01   5.58E-13   3.68E+06   2.43E+00   2.35E+02   1.35E+03
0.8    4.32E-01   7.25E-13   3.87E+06   2.78E+00   2.76E+02   1.65E+03
0.9    4.17E-01   7.25E-13   4.09E+06   2.56E+00   2.98E+02   1.59E+03
1      4.45E-01   5.58E-13   4.17E+06   2.94E+00   3.05E+02   1.98E+03

K2     F17        F18        F25        F26        F27        F29
0      4.36E-01   7.25E-13   3.41E+06   2.12E+00   2.74E+02   1.73E+03
0.1    4.09E-01   5.58E-13   3.29E+06   1.96E+00   2.48E+02   1.45E+03
0.2    3.96E-01   7.25E-13   3.18E+06   2.04E+00   2.32E+02   1.21E+03
0.3    3.68E-01   5.58E-13   3.32E+02   1.75E+00   2.56E+02   1.39E+03
0.4    3.36E-01   3.41E-13   3.24E+06   1.59E+00   2.28E+02   1.07E+03
0.5    3.42E-01   3.41E-13   2.87E+06   1.41E+00   2.02E+02   9.84E+02
0.6    3.14E-01   3.41E-13   3.04E+06   1.25E+00   1.77E+02   9.63E+02
0.7    2.98E-01   5.58E-13   1.92E+06   1.19E+00   1.94E+02   9.45E+02
0.8    2.67E-01   2.10E-13   1.60E+06   1.11E+00   1.83E+02   9.23E+02
0.9    2.84E-01   3.41E-13   1.84E+06   1.28E+00   2.16E+02   9.31E+02
1      3.01E-01   7.25E-13   1.96E+06   1.47E+00   2.34E+02   9.58E+02

Z      F17        F18        F25        F26        F27        F29
0      3.37E-01   7.25E-13   2.08E+06   2.16E+00   1.50E+00   1.07E+03
0.1    3.18E-01   3.41E-13   2.43E+06   1.94E+00   1.64E+00   9.95E+02
0.2    2.94E-01   7.25E-13   1.89E+06   1.75E+00   1.47E+02   9.78E+02
0.3    2.83E-01   3.41E-13   1.74E+06   1.46E+00   1.53E+02   9.54E+02
0.4    2.76E-01   3.41E-13   1.85E+06   1.23E+00   1.41E+02   9.35E+02
0.5    2.67E-01   2.10E-13   1.60E+06   1.11E+00   1.77E+02   9.23E+02
0.6    2.89E-01   3.41E-13   2.11E+06   1.26E+00   1.89E+02   9.31E+02
0.7    3.17E-01   3.41E-13   1.97E+06   1.19E+00   2.04E+02   9.54E+02
0.8    3.06E-01   5.58E-13   1.89E+06   1.36E+00   2.32E+02   9.89E+02
0.9    3.25E-01   3.41E-13   2.02E+06   1.27E+00   2.36E+02   9.76E+02
1      3.29E-01   5.58E-13   2.21E+06   1.54E+00   2.57E+02   9.94E+02

m      F17        F18        F25        F26        F27        F29
0      3.06E-01   5.68E-13   1.81E+06   1.79E+00   2.25E+02   1.54E+03
1      2.94E-01   3.41E-13   1.79E+06   1.53E+00   1.92E+02   1.14E+03
2      3.12E-01   5.68E-13   1.95E+06   1.41E+00   2.05E+02   9.78E+02
3      2.86E-01   3.41E-13   1.87E+06   1.26E+00   1.76E+02   9.52E+02
4      2.79E-01   2.10E-13   1.65E+06   1.18E+00   1.59E+02   9.31E+02
5      2.67E-01   2.10E-13   1.60E+06   1.11E+00   1.41E+02   9.23E+02
6      2.73E-01   2.10E-13   1.68E+06   1.16E+00   1.35E+02   9.17E+02
7      2.69E-01   3.41E-13   1.73E+06   1.12E+00   1.39E+02   9.28E+02
8      2.81E-01   3.41E-13   1.96E+06   1.35E+00   1.47E+02   9.41E+02
9      2.89E-01   5.68E-13   1.91E+06   1.29E+00   1.53E+02   9.65E+02
10     2.96E-01   3.41E-13   2.03E+06   1.53E+00   1.61E+02   9.74E+02

performing the search. When particle i has PSDi = 0, it still has minimum probabilities of 0.4 (Pc,i) and 0.6 (1 – Pc,i) of performing the exploitation and exploration searches, respectively. For the particle with the highest diversity PSDmax, the probability that it will engage in the exploitation search is 0.8 (Pc,i), while the probability that it will perform


exploration search is 0.2 (1 – Pc,i). Finally, the results of the parameter sensitivity analysis in Tables 4.2 to 4.4 also reveal that ATLPSO-ELS solves the tested benchmarks with promising search accuracy at 10-D, 30-D, and 50-D when the parameters K1, K2, Z, and m are set as 0.4, 0.8, 0.5, and 5, respectively. Based on these observations, it can be concluded that the parameter combination [K1, K2, Z, m] obtained from this study is relatively insensitive to the variation of the search space dimensionality and can be used in solving different types of benchmark problems. These experimental findings suggest that the parameters K1, K2, Z, and m be set as 0.4, 0.8, 0.5, and 5, respectively, in the following performance evaluation of ATLPSO-ELS.
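The one-at-a-time tuning procedure described above (after Lam et al., 2012) amounts to a single pass of coordinate descent over the parameter grids; the sketch below is an illustration with a toy score function, not the thesis's experimental harness:

```python
def tune_one_at_a_time(score, grids, start):
    """Sketch of one-at-a-time parameter tuning: starting from mid-range
    values, each parameter is varied over its grid in turn, with the others
    fixed, and is replaced by the best-scoring value before the next
    parameter is tuned. `score` returns Emean (lower is better)."""
    params = dict(start)
    for name, grid in grids.items():
        params[name] = min(grid, key=lambda v: score({**params, name: v}))
    return params
```

On a separable score this single pass already finds the optimum of each parameter; on the real, interacting Emean landscape it is only a practical heuristic, which is why the thesis validates the resulting [0.4, 0.8, 0.5, 5] combination across problem categories and dimensionalities.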

4.3.3 Comparison of ATLPSO-ELS with Other Well-Established PSO Variants

This subsection reports the experimental results of all tested algorithms in solving the 30 employed benchmark problems. Specifically, the mean error (Emean), standard deviation (SD), and Wilcoxon test (h) results obtained by the involved algorithms are summarized in Table 4.5. Meanwhile, Table 4.9 compares the search reliability and search efficiency of the seven tested algorithms through the success rate (SR) and success performance (SP) values, respectively. The best result for each benchmark is indicated in boldface text. Considering that none of the tested algorithms is able to solve functions F3, F10, F16, and F24 to F30 within the predefined accuracy level ε in at least one run, the SR and SP values of these functions are omitted in Table 4.9. Please refer to Section 3.3.3 for more detailed definitions of w/t/l, #BME, +/=/-, #S/#PS/#NS, and #BSP, as presented in Tables 4.5 and 4.9.
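The SR and SP measures can be sketched as follows, assuming the usual CEC-style definitions: SR is the fraction of runs that reach the accuracy level within FEmax, and SP divides the mean FEs of the successful runs by SR, so that unreliable algorithms are penalized:

```python
def success_rate_and_performance(run_fes, fe_max):
    """Sketch of the SR/SP measures (CEC-style definitions assumed). Each
    run is recorded as the FEs it needed to reach the accuracy level, or
    None for a failed run. SP is undefined (None) when no run succeeds,
    matching the omitted entries in the tables."""
    total = len(run_fes)
    ok = [f for f in run_fes if f is not None and f <= fe_max]
    if not ok:
        return 0.0, None
    sr = len(ok) / total
    sp = (sum(ok) / len(ok)) / sr
    return sr, sp
```

Dividing by SR means that an algorithm succeeding quickly but only occasionally gets a worse (larger) SP than one that is slightly slower yet succeeds in every run.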

4.3.3(a) Comparison of the Mean Error Results

As reported in Table 4.5, the proposed ATLPSO-ELS exhibits the best search accuracy, outperforming the six other tested algorithms by a large margin in the majority of the problems. ATLPSO-ELS attains 25 best mean error (Emean) values out of the 30 tested

Table 4.5 The Emean, SD, and Wilcoxon test results of ATLPSO-ELS and six compared PSO variants for the 50-D benchmark problems. Each cell reports Emean (SD) followed by the Wilcoxon outcome h.

| F | APSO | FLPSO-QIW | FPSO | FIPS | OLPSO-L | OPSO | ATLPSO-ELS |
|---|---|---|---|---|---|---|---|
| F1 | 2.50E-01 (1.81E-01) + | 2.90E-81 (5.97E-81) + | 7.02E+01 (6.98E+01) + | 2.96E-01 (8.06E-01) + | 4.86E-33 (5.15E-33) + | 6.55E-56 (1.04E-55) + | 0.00E+00 (0.00E+00) |
| F2 | 1.46E+03 (4.82E+02) + | 2.62E+02 (8.90E+01) + | 3.44E+03 (1.33E+03) + | 8.13E+00 (2.47E+01) + | 5.71E+02 (1.85E+02) + | 2.44E+04 (9.35E+03) + | 0.00E+00 (0.00E+00) |
| F3 | 4.62E+01 (1.53E+00) + | 4.22E+01 (2.39E-01) + | 5.68E+01 (7.08E+00) + | 4.77E+01 (8.44E-01) + | 4.30E+01 (3.18E+00) + | 5.57E+01 (3.28E+01) + | 1.70E+01 (2.89E+00) |
| F4 | 5.80E-01 (6.29E-01) + | 2.60E+00 (1.52E+00) + | 1.85E+01 (1.02E+01) + | 1.57E+00 (3.71E+00) + | 3.32E-01 (6.03E-01) + | 6.63E-02 (2.52E-01) = | 0.00E+00 (0.00E+00) |
| F5 | 3.60E-02 (3.22E-02) + | 5.58E+00 (2.36E+00) + | 1.60E+01 (9.56E+00) + | 5.70E-01 (8.65E-01) + | 1.17E+00 (1.15E+00) + | 1.33E-01 (3.46E-01) + | 0.00E+00 (0.00E+00) |
| F6 | 1.70E-01 (8.21E-02) + | 5.75E-04 (2.21E-03) + | 1.86E+00 (9.28E-01) + | 1.93E-01 (3.47E-01) + | 0.00E+00 (0.00E+00) = | 8.22E-04 (2.53E-03) = | 0.00E+00 (0.00E+00) |
| F7 | 6.60E-02 (2.57E-02) + | 3.43E-14 (1.07E-14) + | 1.80E+00 (1.10E+00) + | 1.70E-01 (3.38E-01) + | 5.09E-15 (1.79E-15) + | 4.07E+00 (6.39E+00) + | 0.00E+00 (0.00E+00) |
| F8 | 5.44E-01 (1.88E-01) + | 1.88E-05 (8.29E-05) + | 3.35E+00 (2.35E+00) + | 9.80E-01 (9.53E-01) + | 0.00E+00 (0.00E+00) = | 0.00E+00 (0.00E+00) = | 0.00E+00 (0.00E+00) |
| F9 | 1.26E+03 (3.22E+02) + | 2.62E+02 (7.62E+01) + | 3.23E+03 (1.79E+03) + | 8.45E+00 (2.24E+01) + | 1.92E+03 (4.17E+02) + | 1.92E+04 (8.76E+03) + | 0.00E+00 (0.00E+00) |
| F10 | 5.15E+01 (1.39E+01) + | 4.55E+01 (3.16E+00) + | 5.62E+01 (7.00E+00) + | 4.85E+01 (5.72E-02) + | 4.24E+01 (3.73E+00) + | 3.16E+01 (5.84E+00) + | 2.40E+01 (3.51E+00) |
| F11 | 1.83E+02 (5.61E+01) + | 1.26E+02 (1.76E+01) + | 1.80E+02 (5.01E+01) + | 2.65E+01 (3.39E+01) + | 9.80E+01 (5.16E+01) + | 1.33E+02 (6.23E+01) + | 0.00E+00 (0.00E+00) |
| F12 | 2.59E+02 (6.15E+01) + | 1.28E+02 (2.13E+01) + | 1.53E+02 (3.74E+01) + | 4.15E+01 (5.13E+01) + | 1.78E+02 (4.94E+01) + | 1.95E+02 (3.36E+01) + | 0.00E+00 (0.00E+00) |
| F13 | 2.10E+02 (1.01E+02) + | 1.52E+00 (5.39E-01) + | 7.28E+00 (5.62E+00) + | 1.94E-01 (4.08E-01) + | 7.58E-01 (2.68E-01) + | 6.14E-01 (2.84E-01) + | 0.00E+00 (0.00E+00) |
| F14 | 6.32E+01 (4.24E+00) + | 4.86E+01 (3.40E+00) = | 5.18E+01 (3.93E+00) + | 5.35E+01 (4.38E+00) + | 4.58E+01 (4.77E+00) = | 5.73E+01 (5.63E+00) + | 4.21E+01 (2.00E+01) |
| F15 | 2.27E-01 (9.70E-02) + | 1.44E-13 (4.15E-14) − | 1.71E+04 (1.47E+04) + | 6.20E+00 (3.90E+00) + | 5.68E-14 (0.00E+00) − | 8.80E+00 (3.90E+01) + | 2.27E-13 (4.72E-14) |
| F16 | 1.08E+03 (5.18E+02) + | 6.97E+02 (1.66E+02) + | 2.50E+04 (6.28E+03) + | 1.90E+03 (5.54E+02) + | 9.30E+02 (3.21E+02) + | 6.37E+04 (1.71E+04) + | 6.08E+01 (6.63E+01) |
| F17 | 1.97E+03 (3.83E+03) + | 1.05E+02 (4.86E+01) + | 1.11E+09 (2.65E+09) + | 9.08E+03 (4.88E+03) + | 1.33E+01 (1.94E+01) + | 6.12E+04 (3.34E+05) + | 2.67E-01 (1.01E+00) |
| F18 | 5.92E-01 (7.76E-01) + | 5.88E+00 (2.51E+00) + | 2.08E+02 (4.59E+01) + | 1.31E+02 (2.93E+01) + | 1.43E+00 (1.10E+00) + | 3.12E+00 (2.20E+00) + | 2.10E-13 (7.03E-14) |
| F19 | 7.20E-03 (1.06E-02) + | 1.20E+01 (3.16E+00) + | 1.63E+02 (2.82E+01) + | 1.48E+02 (3.94E+01) + | 3.00E+00 (1.78E+00) + | 2.74E+00 (1.87E+00) + | 1.93E-13 (5.08E-14) |
| F20 | 0.00E+00 (0.00E+00) = | 2.05E-03 (3.49E-03) + | 1.46E+03 (4.63E+02) + | 0.00E+00 (0.00E+00) = | 1.47E-01 (1.07E-01) + | 0.00E+00 (0.00E+00) = | 0.00E+00 (0.00E+00) |

Table 4.5 (Continued)

| F | APSO | FLPSO-QIW | FPSO | FIPS | OLPSO-L | OPSO | ATLPSO-ELS |
|---|---|---|---|---|---|---|---|
| F21 | 6.04E-02 (1.71E-02) + | 2.19E-13 (5.74E-14) − | 9.29E+00 (3.25E+00) + | 4.47E+00 (8.26E-01) + | 8.05E-14 (1.51E-14) − | 6.76E-01 (1.24E+00) + | 3.32E-13 (7.95E-14) |
| F22 | 8.79E-01 (5.86E-01) + | 3.87E-02 (7.38E-02) = | 2.96E+01 (5.70E+00) + | 2.00E+01 (3.86E+00) + | 4.85E-02 (1.48E-01) + | 7.87E-01 (7.88E-01) + | 2.61E-14 (1.24E-14) |
| F23 | 1.49E+00 (1.47E-01) + | 6.02E-03 (1.04E-02) + | 2.06E+02 (1.44E+02) + | 4.81E+00 (1.78E+00) + | 3.76E-02 (4.00E-02) + | 6.33E+00 (1.29E+01) + | 3.86E-03 (7.15E-03) |
| F24 | 2.07E+01 (1.65E-01) = | 2.11E+01 (4.16E-02) + | 2.11E+01 (4.92E-02) + | 2.12E+01 (5.03E-02) + | 2.12E+01 (5.06E-02) + | 2.13E+01 (5.19E-02) + | 2.07E+01 (2.17E-01) |
| F25 | 1.32E+07 (4.09E+06) + | 1.89E+07 (4.92E+06) + | 1.03E+08 (8.26E+07) + | 1.02E+07 (3.44E+06) + | 1.82E+07 (5.14E+06) + | 2.01E+07 (1.97E+07) + | 1.60E+06 (6.08E+05) |
| F26 | 4.13E+00 (1.19E+00) + | 4.01E+00 (1.46E+00) + | 4.35E+01 (8.59E+00) + | 2.76E+01 (5.79E+00) + | 2.98E+00 (8.26E-01) + | 2.65E+00 (9.88E-01) + | 1.11E+00 (5.89E-01) |
| F27 | 4.60E+02 (7.79E+01) + | 1.78E+02 (1.39E+02) − | 4.38E+02 (1.28E+02) + | 1.88E+02 (6.55E+01) = | 1.38E+02 (7.31E+01) − | 2.20E+02 (1.33E+02) = | 2.71E+02 (1.55E+02) |
| F28 | 5.13E+02 (8.95E+01) + | 1.79E+02 (8.40E+01) − | 4.47E+02 (1.15E+02) + | 2.31E+02 (8.36E+01) − | 2.19E+02 (9.83E+01) − | 3.24E+02 (1.49E+02) = | 3.22E+02 (1.10E+02) |
| F29 | 1.10E+03 (9.90E+01) + | 9.22E+02 (3.40E+01) − | 9.77E+02 (3.38E+01) + | 9.53E+02 (1.26E+01) + | 9.48E+02 (8.46E+00) + | 9.29E+02 (1.36E+01) + | 9.23E+02 (7.42E+01) |
| F30 | 1.08E+03 (9.29E+01) + | 9.34E+02 (1.05E+01) + | 9.70E+02 (4.97E+01) + | 9.63E+02 (2.54E+01) + | 9.54E+02 (8.84E+00) + | 9.30E+02 (1.21E+01) + | 9.24E+02 (2.52E+01) |
| #BME | 2 | 2 | 0 | 0 | 5 | 2 | 25 |
| w/t/l | 28/2/0 | 25/0/5 | 30/0/0 | 27/1/2 | 24/2/4 | 28/2/0 | — |
| +/=/− | 28/2/0 | 23/2/5 | 30/0/0 | 27/2/1 | 23/3/4 | 24/6/0 | — |

benchmark problems, five times as many as the second-ranked OLPSO-L. On the other hand, FPSO is identified as the worst optimizer because the Emean values produced by this algorithm are the worst in almost all tested problems. Specifically, Table 4.5 reveals that the proposed ATLPSO-ELS successfully locates the global optima of all conventional (F1 to F8) and rotated (F9 to F14) problems, except for the functions F3, F10, and F14. It is also noteworthy that ATLPSO-ELS is the only algorithm able to solve the conventional functions F1, F2, F4, F5, and F7 with Emean = 0. Meanwhile, the search accuracies of FLPSO-QIW, OLPSO-L, and OPSO in solving the conventional problems are also promising because these peer algorithms manage to locate at least three global optima or near-global optima among the tested problems (i.e., F1, F6, F7, and F8). Despite the competitive search accuracies exhibited by all tested


algorithms in solving the conventional problems, Table 4.5 reports that most of these algorithms experience performance deterioration when dealing with the rotated problems with non-separable characteristics. For instance, OLPSO-L fails to locate the global optimum of the rotated Griewank function (F13) although it successfully solves the unmodified one (F6) with Emean = 0. Among the seven tested algorithms, the search accuracy of ATLPSO-ELS is relatively resilient towards the rotation operation because of its capability to solve most rotated problems (i.e., F9, F11, F12, and F13) with Emean = 0. Although ATLPSO-ELS is unable to locate the global optima of the rotated functions F10 and F14, the Emean values produced by the proposed algorithm in solving these two problems are better than those of the compared peers. According to Table 4.5, the search accuracies of all PSO variants in solving the shifted problems (F15 to F22) are also jeopardized because none of these tested algorithms successfully locates the exact global optima of most shifted problems. As compared to the other shifted problems, the shifted Griewank function (F20) is the easiest to solve because a total of four tested algorithms (i.e., APSO, FIPS, OPSO, and ATLPSO-ELS) successfully locate the shifted global optimum of this problem. It is also notable that the search accuracy of ATLPSO-ELS in solving the problems with shifted fitness landscapes is more promising than that of the six other compared PSO variants. Specifically, the proposed algorithm achieves the best Emean values in six (out of eight) shifted problems, i.e., functions F16 to F20 and F22. It is also noteworthy that ATLPSO-ELS is the only optimizer that successfully solves the shifted functions F18, F19, and F22 with Emean values less than 10^-12.
On the other hand, although the Emean values obtained by ATLPSO-ELS in solving the shifted functions F15 and F21 are slightly inferior to those of FLPSO-QIW and OLPSO-L, the outperformance margins of the latter two against the former in these two functions are relatively small. Finally, Table 4.5 also reports that the search accuracies of all tested algorithms degrade further in solving both the complex (F23 to F26) and composition (F27 to F30) problems, given that none of the involved PSO variants is able to find the global

optima or near-global optima of these two problem categories. Among the seven tested algorithms, the proposed ATLPSO-ELS is identified as the best optimizer for the complex and composition problems because it achieves five best Emean values across these eight problems. It is notable that the Emean values obtained by ATLPSO-ELS in solving the composition functions F27, F28, and F29 are slightly larger (i.e., worse) than those of its peers FLPSO-QIW and OLPSO-L. The slightly inferior search accuracy of ATLPSO-ELS in solving the functions F27, F28, and F29 could be attributed to the fact that the parameter settings of [K1, K2, Z, m] = [0.4, 0.8, 0.5, 5] might not be the best parameter settings of ATLPSO-ELS for these particular benchmarks. As shown in Table 4.4, the search accuracies of ATLPSO-ELS in solving the functions F27 and F29 are better than those of FLPSO-QIW and OLPSO-L when different parameter combinations of [K1, K2, Z, m] are used. Specifically, ATLPSO-ELS solves the function F27 with Emean = 1.35E+02 when the parameter combination of [K1, K2, Z, m] is set as [0.6, 0.6, 0.4, 6], whereas an Emean value of 9.17E+02 is obtained by ATLPSO-ELS in solving the function F29 when the parameter combination of [K1, K2, Z, m] is set as [0.4, 0.8, 0.5, 6]. Based on the experimental findings reported in Table 4.5, it is concluded that the proposed ATLPSO-ELS in general exhibits more competitive search accuracy than its contenders. In addition, the #BME and w/t/l values achieved by ATLPSO-ELS in each problem category are much better than those of the compared peers. These observations suggest that the proposed algorithm is more robust than its peers in dealing with problems with different types of fitness landscapes.

4.3.3(b) Comparisons of the Non-Parametric Statistical Test Results

A set of non-parametric statistical analyses is performed in this subsection to rigorously evaluate whether ATLPSO-ELS significantly outperforms its peers in all tested problems from a statistical point of view. The detailed explanations of the non-parametric statistical tests and their respective procedures for detecting the performance differences between the compared algorithms are provided in Sections 2.5.5 and 3.3.3(b), respectively.

Table 4.6 Wilcoxon test for the comparison of ATLPSO-ELS and six other PSO variants

| ATLPSO-ELS vs. | R+ | R− | p-value |
|---|---|---|---|
| APSO | 463.5 | 1.5 | 4.66E-09 |
| FLPSO-QIW | 398.0 | 67.0 | 3.45E-04 |
| FPSO | 465.0 | 0.0 | 1.86E-09 |
| FIPS | 388.0 | 47.0 | 7.73E-05 |
| OLPSO-L | 403.5 | 61.5 | 1.99E-04 |
| OPSO | 440.5 | 24.5 | 1.55E-06 |

Table 4.7 Average rankings and the associated p-value obtained by the ATLPSO-ELS and six other PSO variants via the Friedman test

| Algorithm | Average ranking |
|---|---|
| ATLPSO-ELS | 1.53 |
| OLPSO-L | 3.22 |
| FLPSO-QIW | 3.27 |
| FIPS | 4.43 |
| OPSO | 4.50 |
| APSO | 4.70 |
| FPSO | 6.35 |

Chi-square statistic: 87.98; p-value: 0.00E+00

Tables 4.5 and 4.6 reveal the results of the pairwise comparative studies between the proposed ATLPSO-ELS and its peers. The h values reported in Table 4.5 show that the number of benchmark problems in which ATLPSO-ELS performs significantly better than its peers is much larger than the number of problems in which the latter perform significantly better than the former. The Wilcoxon test results in Table 4.5 (summarized as +/=/−) are in line with the previously reported Emean values. Meanwhile, Table 4.6 reveals that all p-values of the Wilcoxon test are less than α = 0.05, and these observations further verify the significant outperformance margins of ATLPSO-ELS against the six compared peers in the pairwise comparative studies. Multiple comparisons (García et al., 2009, Derrac et al., 2011) are also performed in this subsection to further investigate the significance of the outperformance margins of ATLPSO-ELS over its peers. Table 4.7 shows the Friedman test results by computing the average rankings of the compared algorithms and the associated p-value. Accordingly, ATLPSO-ELS is identified as the best optimizer because it has the smallest average rank of 1.53. Table 4.7 also verifies that a significant global difference exists among the compared algorithms because the p-value (i.e., 0.00E+00) computed through the Friedman test statistic is smaller than the level of significance considered (i.e., α = 0.05). In light of this observation, the multiple comparisons are carried out by employing a set of post-hoc statistical analyses (García et al., 2009, Derrac et al., 2011) to identify the

Table 4.8 Adjusted p-values obtained by comparing the ATLPSO-ELS with six other PSO variants using the Bonferroni-Dunn, Holm, and Hochberg procedures

| ATLPSO-ELS vs. | FPSO | APSO | OPSO | FIPS | FLPSO-QIW | OLPSO-L |
|---|---|---|---|---|---|---|
| z | 8.64E+00 | 5.68E+00 | 5.32E+00 | 5.20E+00 | 3.11E+00 | 3.02E+00 |
| Unadjusted p | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.89E-03 | 2.55E-03 |
| Bonferroni-Dunn p | 0.00E+00 | 0.00E+00 | 1.00E-06 | 1.00E-06 | 1.13E-02 | 1.53E-02 |
| Holm p | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.00E-06 | 3.77E-03 | 3.77E-03 |
| Hochberg p | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.00E-06 | 2.55E-03 | 2.55E-03 |

concrete differences for the control algorithm (i.e., ATLPSO-ELS). The associated z values, unadjusted p-values, and adjusted p-values (APVs) obtained from the post-hoc Bonferroni-Dunn, Holm, and Hochberg procedures are summarized in Table 4.8. From Table 4.8, it can be concluded that all employed post-hoc procedures verify the significant outperformance margins of ATLPSO-ELS over the six compared PSO variants in terms of search accuracy. This is because all APVs produced by the selected Bonferroni-Dunn, Holm, and Hochberg post-hoc tests are smaller than the α = 0.05 considered in this research work.
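For reference, Holm APVs such as those in Table 4.8 follow a standard step-down construction: the unadjusted p-values are sorted in ascending order, each is multiplied by the number of hypotheses not yet rejected, and monotonicity is enforced. The sketch below is a generic implementation of this construction, not the thesis code (which follows the procedures of García et al. and Derrac et al.).

```python
def holm_adjust(p_values):
    """Holm step-down adjusted p-values for a family of k hypotheses."""
    k = len(p_values)
    # Indices of the hypotheses ordered by ascending unadjusted p-value.
    order = sorted(range(k), key=lambda i: p_values[i])
    adjusted = [0.0] * k
    running_max = 0.0
    for rank, idx in enumerate(order):
        # Multiply by the number of remaining hypotheses, capped at 1.
        candidate = min(1.0, (k - rank) * p_values[idx])
        running_max = max(running_max, candidate)  # enforce monotonicity
        adjusted[idx] = running_max
    return adjusted
```

The Bonferroni-Dunn APV is the simpler single-step variant (every p-value multiplied by k), while Hochberg is the step-up analogue, which explains why the Hochberg APVs in Table 4.8 are never larger than the Holm ones.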

4.3.3(c) Comparison of the Success Rate Results

Table 4.9 presents the results of the success rate (SR) analysis, which shows that the search reliability exhibited by ATLPSO-ELS is superior to that of the compared peers. Accordingly, the search reliability of the proposed ATLPSO-ELS is more than twice that of the second-ranked FLPSO-QIW because the former completely solves 17 benchmarks with SR = 100%, in contrast to the latter, which is only able to completely solve 7 of the tested problems. Meanwhile, FPSO has the worst search reliability among the tested algorithms, given that it fails to solve any of the tested problems with SR = 100%. As presented in Table 4.9, the proposed ATLPSO-ELS is able to solve all eight conventional problems completely (i.e., SR = 100%) at the predefined accuracy level ε, except for function F3. It is notable that ATLPSO-ELS is the only algorithm able to completely solve the conventional functions F2, F4, and F5. The search reliabilities of FLPSO-QIW, OLPSO-L, and OPSO in solving the conventional problems are also


Table 4.9 The SR and SP values of ATLPSO-ELS and six compared PSO variants for the 50-D benchmark problems. Each cell reports SR (%) / SP.

| F | APSO | FLPSO-QIW | FPSO | FIPS | OLPSO-L | OPSO | ATLPSO-ELS |
|---|---|---|---|---|---|---|---|
| F1 | 0.00 / Inf | 100.00 / 6.04E+04 | 13.33 / 9.68E+04 | 80.00 / 9.86E+04 | 100.00 / 1.52E+05 | 100.00 / 5.57E+04 | 100.00 / 2.84E+03 |
| F2 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 70.00 / 1.62E+05 | 0.00 / Inf | 0.00 / Inf | 100.00 / 8.46E+03 |
| F4 | 0.00 / Inf | 6.67 / 3.46E+06 | 0.00 / Inf | 40.00 / 3.56E+05 | 73.33 / 2.97E+05 | 93.33 / 1.13E+05 | 100.00 / 2.85E+03 |
| F5 | 6.67 / 2.60E+06 | 0.00 / Inf | 0.00 / Inf | 33.33 / 2.66E+05 | 40.00 / 6.75E+05 | 86.67 / 1.35E+05 | 100.00 / 2.91E+03 |
| F6 | 0.00 / Inf | 100.00 / 5.00E+04 | 6.67 / 1.60E+05 | 70.00 / 1.05E+05 | 100.00 / 1.24E+05 | 100.00 / 4.63E+04 | 100.00 / 3.08E+03 |
| F7 | 0.00 / Inf | 100.00 / 4.79E+04 | 3.33 / 3.36E+05 | 53.33 / 1.07E+05 | 100.00 / 1.25E+05 | 70.00 / 1.01E+05 | 100.00 / 3.62E+03 |
| F8 | 0.00 / Inf | 100.00 / 6.67E+04 | 0.00 / Inf | 16.67 / 4.59E+05 | 100.00 / 1.66E+05 | 100.00 / 6.68E+04 | 100.00 / 2.55E+03 |
| F9 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 73.33 / 1.62E+05 | 0.00 / Inf | 0.00 / Inf | 100.00 / 8.25E+03 |
| F11 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 53.33 / 2.60E+05 | 0.00 / Inf | 0.00 / Inf | 100.00 / 5.72E+03 |
| F12 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 43.33 / 2.84E+05 | 0.00 / Inf | 0.00 / Inf | 100.00 / 5.67E+03 |
| F13 | 0.00 / Inf | 0.00 / Inf | 6.67 / 2.59E+05 | 80.00 / 8.72E+04 | 0.00 / Inf | 0.00 / Inf | 100.00 / 4.79E+03 |
| F14 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 16.67 / 8.05E+05 |
| F15 | 0.00 / Inf | 100.00 / 5.85E+04 | 0.00 / Inf | 0.00 / Inf | 100.00 / 1.37E+05 | 76.67 / 7.53E+04 | 100.00 / 2.65E+04 |
| F17 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 90.00 / 2.25E+05 |
| F18 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 13.33 / 1.27E+06 | 6.67 / 1.39E+06 | 100.00 / 1.05E+05 |
| F19 | 86.67 / 2.51E+05 | 0.00 / Inf | 0.00 / Inf | 0.00 / Inf | 6.67 / 3.45E+06 | 3.33 / 3.16E+06 | 100.00 / 1.04E+05 |
| F20 | 100.00 / 1.20E+04 | 100.00 / 4.88E+04 | 0.00 / Inf | 100.00 / 2.47E+03 | 10.00 / 1.33E+06 | 100.00 / 2.94E+03 | 100.00 / 4.78E+03 |
| F21 | 0.00 / Inf | 100.00 / 4.72E+04 | 0.00 / Inf | 0.00 / Inf | 100.00 / 1.15E+05 | 73.33 / 7.49E+04 | 100.00 / 2.33E+04 |
| F22 | 0.00 / Inf | 70.00 / 1.03E+05 | 0.00 / Inf | 0.00 / Inf | 90.00 / 1.75E+05 | 23.33 / 5.27E+05 | 100.00 / 3.10E+04 |
| F23 | 0.00 / Inf | 83.33 / 2.04E+05 | 0.00 / Inf | 0.00 / Inf | 13.33 / 2.20E+06 | 76.67 / 2.12E+05 | 86.67 / 6.36E+04 |
| #S/#PS/#NS | 1/2/27 | 7/3/20 | 0/4/26 | 1/11/18 | 6/7/17 | 4/9/17 | 17/3/10 |
| #BSP | 0 | 0 | 0 | 1 | 0 | 0 | 19 |

competitive, considering that these compared peers solve the functions F1, F6, and F7 with SR = 100%. Meanwhile, the search reliabilities of most tested algorithms are degraded when they are employed to solve the rotated problems. Specifically, Table 4.9 reports that the SR values produced by these PSO variants in most rotated problems are equal to 0.00%, implying that these algorithms are unable to completely or partially solve the respective problems. Notably,

the performance deteriorations of the tested algorithms, in terms of search reliability, in solving the modified problems are similar to the previously reported Emean results. Among the seven tested algorithms, the proposed ATLPSO-ELS is identified as the optimizer with the best search reliability in solving the rotated problems because it completely solves four (out of six) rotated problems, i.e., functions F9, F11, F12, and F13. It is noteworthy that, except for ATLPSO-ELS, none of the compared peers is able to solve the mentioned functions with SR = 100%. Furthermore, Table 4.9 also reports that ATLPSO-ELS is the only algorithm able to partially solve the rotated function F14, with SR = 16.67%. Similar performance deteriorations of the tested algorithms, in terms of search reliability, can also be observed in solving the shifted problems. Nevertheless, the simulation results reported in Table 4.9 reveal that the SR values obtained by most tested algorithms in solving the shifted problems are generally higher (i.e., better) than those of the rotated problems. These observations imply that most of these tested algorithms deal better with shifted fitness landscapes than with rotated ones. Although a different type of operation (i.e., the shifting operation) is introduced into the shifted problems, the proposed ATLPSO-ELS is able to maintain its competitive search reliability. Specifically, ATLPSO-ELS completely solves six (out of eight) shifted problems (i.e., functions F15 and F18 to F22) with SR = 100%, and it is also the only algorithm that partially solves the shifted function F17, with SR = 90.00%. FLPSO-QIW, OLPSO-L, and OPSO also exhibit promising search reliabilities in solving the shifted problems because these PSO variants completely or partially solve at least four of the tested problems.
Finally, Table 4.9 shows that none of the tested algorithms is able to completely or partially solve the complex and composition problems, except for function F23, where ATLPSO-ELS achieves the best SR value of 86.67%. For the remaining complex and composition problems, where the SR values produced by all tested algorithms are equal to 0.00%, the proposed ATLPSO-ELS is proven better than its peers because Table 4.5 reports that the former achieves more competitive Emean values than the latter in solving these problems.
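The SR metric used throughout this subsection is simply the percentage of independent runs whose final error falls below the predefined accuracy level; a minimal sketch, with illustrative run data, is:

```python
def success_rate(final_errors, epsilon):
    """SR (%): fraction of runs whose final error is below the accuracy level epsilon."""
    successes = sum(1 for e in final_errors if e < epsilon)
    return 100.0 * successes / len(final_errors)
```

Under this definition, "completely solved" corresponds to SR = 100% (every run reaches ε), while "partially solved" corresponds to 0% < SR < 100%.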

4.3.3(d) Comparison of the Success Performance Results

The SP values in Table 4.9 quantitatively evaluate the computation cost required by a tested algorithm to solve a particular benchmark within the predefined accuracy level ε (Suganthan et al., 2005). Ten representative convergence curves, i.e., two each from the conventional (F1 and F7), rotated (F11 and F14), shifted (F17 and F19), complex (F23 and F25), and composition (F28 and F29) problems, are used to qualitatively compare the algorithms' convergence speeds. From Table 4.9, it is observed that the proposed ATLPSO-ELS outperforms its peer algorithms, in terms of search efficiency, in all conventional and rotated problems. More particularly, ATLPSO-ELS achieves seven and five best (i.e., smallest) SP values among the eight conventional and six rotated problems, respectively. These findings reveal the promising capability of ATLPSO-ELS to solve these two problem categories within the predefined ε while consuming small numbers of fitness evaluations (FEs). The rapid convergence characteristics of ATLPSO-ELS in the conventional and rotated problems are also verified by Figures 4.7(a) to 4.7(d). Specifically, the convergence curves of ATLPSO-ELS in functions F1, F7, and F11 [as depicted in Figures 4.7(a) to 4.7(c), respectively] drop off sharply at the early stage of optimization, implying the capability of this algorithm to locate the optimal solutions with the least computation cost. Although the SP value of function F14 is not available for comparison, the convergence curve of ATLPSO-ELS in this function [as illustrated in Figure 4.7(d)] shows that the convergence speed of the proposed algorithm is more competitive than that of its peers, especially during the early stage of optimization.
It is also notable that the convergence curves of functions F2, F4, F5, F6, F8, F9, and F11 to F13 are similar to those in Figures 4.7(a) to 4.7(c), whereas the convergence curves of F3 and F10 are comparable with the one illustrated in Figure 4.7(d). As shown in Table 4.9, the proposed ATLPSO-ELS also exhibits the best search efficiency in solving the eight shifted problems. Specifically, ATLPSO-ELS records the six best SP values in functions F15, F17, F18, F19, F21, and F22. The outstanding performance of ATLPSO-ELS, in terms of convergence speed, in the shifted problems is also reflected in the convergence curves illustrated in Figures 4.7(e) and 4.7(f). Specifically,


Figure 4.7: Convergence curves of 50-D problems: (a) F1, (b) F7, (c) F11, (d) F14, (e) F17, and (f) F19.

the convergence curves of function F17 [as depicted in Figure 4.7(e)] reveal the rapid convergence speeds of ATLPSO-ELS in the early stage of optimization. Meanwhile, the convergence curves of function F19 [as depicted in Figure 4.7(f)] show that all tested



Figure 4.7 (Continued): Convergence curves of 50-D problems: (g) F23, (h) F25, (i) F28, and (j) F29.

algorithms, including ATLPSO-ELS, are trapped in the local optima during the middle stage of optimization. Nevertheless, ATLPSO-ELS is the only algorithm able to break out of the local optima of function F19 and solve this function with a good accuracy level. It is worth mentioning that the convergence curves of functions F16 and F18 are similar to those in Figures 4.7(e) and 4.7(f), respectively. Meanwhile, the convergence curves of ATLPSO-ELS in functions F15, F20, F21, and F22 combine the benefits of those illustrated in Figures 4.7(e) and 4.7(f). In other words, ATLPSO-ELS converges rapidly in functions F15, F20, F21, and F22 during the early stage of optimization, and it also successfully solves the mentioned functions with promising accuracy at the end of the optimization process.


Finally, for most complex and composition problems, no SP values are available for comparison, except for function F23, where the proposed ATLPSO-ELS achieves the best SP value. The convergence curves of functions F23, F25, F28, and F29 [represented by Figures 4.7(g) to 4.7(j), respectively] show that the convergence speeds of ATLPSO-ELS in these two problem categories are competitive compared with those of its peers. Specifically, ATLPSO-ELS converges faster than its peers during the early stage of the search process in functions F23 and F25. Such rapid convergence enables ATLPSO-ELS to locate and exploit the optimal regions in the search spaces of functions F23 and F25 earlier than its peers. This desirable characteristic explains the capability of ATLPSO-ELS to obtain more promising solutions than its peers in functions F23 and F25.
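For completeness, the SP metric reported in Table 4.9 can be sketched following the definition of Suganthan et al. (2005): the mean number of fitness evaluations (FEs) over the successful runs, scaled by the ratio of total runs to successful runs, with Inf reported when no run reaches the accuracy level. The function below is a generic illustration of that definition, not the thesis code.

```python
def success_performance(fes_successful, total_runs):
    """SP = mean FEs over successful runs * (total runs / successful runs)."""
    if not fes_successful:
        # No run reached epsilon: SP is undefined, reported as Inf in Table 4.9.
        return float("inf")
    mean_fes = sum(fes_successful) / len(fes_successful)
    return mean_fes * total_runs / len(fes_successful)
```

The scaling term penalizes unreliable algorithms: two algorithms with the same mean FEs per success receive different SP values if one succeeds in fewer of the runs, which is why a small SP simultaneously reflects speed and reliability.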

4.3.3(e) Comparison of the Algorithm Complexity Results

In this subsection, the computational complexities of the seven tested algorithms are compared in Table 4.10 by using the AC evaluation explained in Figure 2.11. Table 4.10 shows that OPSO has the least computational complexity at D = 50, considering that this algorithm produces the smallest AC value. On the other hand, the complexity of the proposed ATLPSO-ELS is ranked fifth, with an AC value comparable to those of the third-ranked FPSO and fourth-ranked FIPS. The relatively higher computational complexity of ATLPSO-ELS is anticipated because this algorithm is augmented with the ATAcurrent, ATAmemory, and ELS modules, which inevitably incur additional computational overhead. Specifically, the exemplar selection procedure of the ATAcurrent module (Figure 4.1) and the OEDLS in the ELS module (Figure 4.5) contribute most of the increase in computational complexity because these procedures operate dimension-wise. In addition, the computational resources required to calculate the PSD and PFSD metrics of each particle are also relatively high because these metrics involve the computation of the Euclidean distances between the particle and the hypothetical particles.


Table 4.10 AC results of the ATLPSO-ELS and six other PSO variants at D = 50

| Algorithm | T0 | T1 | T̂2 | AC |
|---|---|---|---|---|
| APSO | 1.88E-01 | 4.19E+00 | 1.37E+03 | 7.27E+03 |
| FLPSO-QIW | 1.88E-01 | 4.19E+00 | 2.95E+03 | 1.56E+04 |
| FPSO | 1.88E-01 | 4.19E+00 | 5.83E+02 | 3.08E+03 |
| FIPS | 1.88E-01 | 4.19E+00 | 5.86E+02 | 3.09E+03 |
| OLPSO-L | 1.88E-01 | 4.19E+00 | 4.60E+02 | 2.42E+03 |
| OPSO | 1.88E-01 | 4.19E+00 | 2.75E+02 | 1.44E+03 |
| ATLPSO-ELS | 1.88E-01 | 4.19E+00 | 5.93E+02 | 3.31E+03 |

Although OPSO, OLPSO-L, FPSO, and FIPS incur lower complexity values, ATLPSO-ELS significantly outperforms these PSO variants in terms of search accuracy (Emean), search reliability (SR), and search efficiency (SP). On the other hand, the complexity value of ATLPSO-ELS is much lower than those of (1) FLPSO-QIW, which has a relatively good search performance, and (2) APSO, which has inferior search performance despite the high computational complexity incurred by this algorithm. These observations suggest that, compared with its peer algorithms, the proposed ATLPSO-ELS achieves a better tradeoff between performance improvement and increase in computational complexity.
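Assuming the standard CEC2005-style definition behind Figure 2.11, the AC value in Table 4.10 is computed from three timings: T0 for a fixed baseline computation, T1 for a large batch of objective-function evaluations, and the mean complete-run time T̂2 over several runs, giving AC = (T̂2 − T1)/T0. A hedged sketch of that final step is:

```python
def algorithm_complexity(t0, t1, t2_runs):
    """AC = (mean algorithm runtime - function-evaluation time) / baseline time.

    t0      : runtime of the fixed baseline computation
    t1      : runtime of the batch of function evaluations
    t2_runs : complete-run times of the algorithm over several repetitions
    """
    t2_hat = sum(t2_runs) / len(t2_runs)
    return (t2_hat - t1) / t0
```

Subtracting T1 isolates the algorithm's own overhead from the cost of evaluating the objective function, and dividing by T0 normalizes away the speed of the machine used, which is what allows AC values to be compared across algorithms in Table 4.10.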

4.3.4 Effect of Different Proposed Strategies

In this subsection, the effectiveness of each proposed strategy introduced in ATLPSO-ELS is evaluated. These strategies include (1) the two-layer evolution framework, which offers the current swarm evolution and memory swarm evolution, (2) the ATA modules, which consist of the adaptive task allocation mechanisms, and (3) the ELS module, which offers two unique learning strategies to guide the search of the global best particle. To comprehensively investigate the effectiveness of each proposed strategy, the search accuracies of (1) PSO with the two-layer evolution framework (ATLPSO-ELS1), (2) PSO with the two-layer evolution framework and ELS module (ATLPSO-ELS2), (3) PSO with the two-layer evolution framework and ATA modules (ATLPSO-ELS3), and (4) the complete ATLPSO-ELS are evaluated in each problem category. For ATLPSO-ELS1 and ATLPSO-ELS2, the Tsize,i and Pc,i values of each particle are assigned as 2 and 0.5, respectively, because the population divisions in both their current (ATAcurrent) and memory (ATAmemory) swarms are nonadaptive. In other words, the task allocation mechanisms in both ATLPSO-ELS1 and ATLPSO-ELS2 are purely stochastic and not based on the diversity and fitness values of

each swarm member. The Emean values obtained by the ATLPSO-ELS variants in each tested problem are compared with those produced by BPSO. Specifically, the comparison results are expressed in terms of the percentage improvement (%Improve) given in Equation (3.5) (Lam et al., 2012), evaluated for each ATLPSO-ELS variant. The simulation results (i.e., Emean and %Improve values) of all ATLPSO-ELS variants in each benchmark problem are reported in Table 4.11. Meanwhile, the comparative study of all ATLPSO-ELS variants in each problem category is summarized through the #BME and average %Improve values in Table 4.12. As reported in Tables 4.11 and 4.12, all ATLPSO-ELS variants outperform BPSO in terms of search accuracy on the tested benchmarks. These observations imply that each of the proposed strategies, i.e., the two-layer framework, the ATA modules, and the ELS module, is useful for improving the search accuracy of the algorithm. Among the four compared variants, ATLPSO-ELS1 exhibits the most inferior performance because it yields the smallest average %Improve values in almost all tested problem categories. On the other hand, the complete ATLPSO-ELS exhibits the best average %Improve value of 83.910%, followed by ATLPSO-ELS2 (i.e., 81.996%) and ATLPSO-ELS3 (i.e., 77.011%). By carefully inspecting the search performances of the ATLPSO-ELS variants in each problem category, as presented in Tables 4.11 and 4.12, it is notable that ATLPSO-ELS3 exhibits superior search accuracy in solving the rotated problems by achieving more promising #BME and average %Improve values. Meanwhile, ATLPSO-ELS2 performs particularly well in solving the shifted problems, considering that this variant obtains the best average %Improve values in this problem category.
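Assuming Equation (3.5) takes the usual relative-improvement form, the %Improve entries in Table 4.11 compare each variant's Emean against the BPSO baseline as follows; the function name is illustrative, not from the thesis.

```python
def percentage_improvement(emean_baseline, emean_variant):
    """%Improve of a variant over the BPSO baseline, in percent.

    Positive values mean the variant attains a lower (better) mean error
    than the baseline; 100.0 means the variant drives the error to zero.
    """
    return 100.0 * (emean_baseline - emean_variant) / emean_baseline
```

A slightly negative entry, such as the ones for F24 in Table 4.11, therefore indicates a variant whose mean error is marginally worse than the BPSO baseline on that function.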
Based on the mentioned observations, it can be conjectured that the hybridization of the two-layer framework and the ATA modules introduces the desired rotationally invariant property to ATLPSO-ELS3, and this hybridization enables ATLPSO-ELS3 to cope effectively with fitness landscapes with non-separable characteristics. On the other hand, the combination of the two-layer framework and the ELS module in ATLPSO-ELS2 acts as a promising countermeasure against swarm stagnation, considering that these strategies endow ATLPSO-ELS2 with


Table 4.11 Comparison of ATLPSO-ELS variants with BPSO in 50-D problems. Each cell reports Emean (%Improve).

| F | BPSO | ATLPSO-ELS1 | ATLPSO-ELS2 | ATLPSO-ELS3 | ATLPSO-ELS |
|---|---|---|---|---|---|
| F1 | 4.67E+03 (–) | 0.00E+00 (100.000) | 8.15E-114 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F2 | 2.08E+04 (–) | 3.58E+00 (99.983) | 4.43E+00 (99.979) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F3 | 2.10E+02 (–) | 4.78E+01 (77.178) | 8.85E+00 (95.778) | 4.71E+01 (77.519) | 1.70E+01 (91.896) |
| F4 | 1.15E+02 (–) | 5.55E+00 (95.155) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F5 | 1.14E+02 (–) | 6.51E+00 (94.265) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F6 | 3.92E+01 (–) | 0.00E+00 (100.000) | 7.66E-03 (99.980) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F7 | 1.21E+01 (–) | 4.22E-03 (99.965) | 1.43E-14 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F8 | 8.09E+00 (–) | 6.31E+00 (21.916) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F9 | 2.57E+04 (–) | 3.90E+00 (99.985) | 2.97E+01 (99.884) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F10 | 1.08E+02 (–) | 4.84E+01 (55.324) | 2.68E+01 (75.292) | 4.81E+01 (55.616) | 2.40E+01 (77.828) |
| F11 | 1.70E+02 (–) | 1.17E+02 (31.039) | 1.47E+02 (13.597) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F12 | 2.00E+02 (–) | 1.55E+02 (22.564) | 1.86E+02 (7.113) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F13 | 2.04E+02 (–) | 0.00E+00 (100.000) | 3.87E+00 (98.104) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F14 | 5.80E+01 (–) | 5.28E+01 (9.069) | 5.60E+01 (3.448) | 4.78E+01 (17.644) | 4.21E+01 (27.506) |
| F15 | 2.31E+04 (–) | 3.00E-02 (100.000) | 2.43E-13 (100.000) | 4.91E-02 (100.000) | 2.27E-13 (100.000) |
| F16 | 7.72E+04 (–) | 7.23E+01 (99.906) | 6.25E+01 (99.919) | 7.87E+01 (99.898) | 6.08E+01 (99.921) |
| F17 | 9.59E+09 (–) | 9.11E+02 (100.000) | 2.67E-01 (100.000) | 2.77E+03 (100.000) | 2.67E-01 (100.000) |
| F18 | 2.93E+02 (–) | 3.88E+01 (86.789) | 2.10E-13 (100.000) | 7.66E+01 (73.902) | 2.10E-13 (100.000) |
| F19 | 3.14E+02 (–) | 2.82E+01 (91.027) | 1.99E-13 (100.000) | 7.08E+01 (77.448) | 1.93E-13 (100.000) |
| F20 | 7.21E+02 (–) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F21 | 1.47E+01 (–) | 7.55E-01 (94.849) | 3.74E-13 (100.000) | 5.61E+00 (61.709) | 3.32E-13 (100.000) |
| F22 | 4.99E+01 (–) | 3.32E+01 (33.537) | 3.03E-14 (100.000) | 4.12E+01 (17.529) | 2.61E-14 (100.000) |
| F23 | 7.05E+02 (–) | 3.08E-01 (99.956) | 5.00E-03 (99.999) | 8.82E-01 (99.875) | 3.86E-03 (99.999) |
| F24 | 2.10E+01 (–) | 2.11E+01 (-0.325) | 2.04E+01 (3.010) | 2.11E+01 (-0.502) | 2.07E+01 (1.467) |
| F25 | 6.11E+08 (–) | 4.90E+07 (91.986) | 2.46E+06 (99.598) | 4.64E+07 (92.413) | 1.60E+06 (99.739) |
| F26 | 2.64E+04 (–) | 2.28E+01 (99.913) | 1.67E+00 (99.994) | 3.20E+01 (99.879) | 1.11E+00 (99.996) |
| F27 | 5.35E+02 (–) | 3.35E+02 (37.433) | 2.67E+02 (50.038) | 3.52E+02 (34.264) | 2.71E+02 (49.337) |
| F28 | 5.59E+02 (–) | 3.88E+02 (30.634) | 3.37E+02 (39.783) | 3.61E+02 (35.408) | 3.22E+02 (42.337) |
| F29 | 1.07E+03 (–) | 9.56E+02 (10.596) | 9.36E+02 (12.460) | 9.56E+02 (10.614) | 9.23E+02 (13.682) |
| F30 | 1.07E+03 (–) | 9.46E+02 (11.881) | 9.41E+02 (12.355) | 9.33E+02 (13.117) | 9.24E+02 (13.950) |

Table 4.12 Summarized comparison results of ATLPSO-ELS variants with BPSO in each problem category [#BME (average %Improve)]

Problem Category                   BPSO    ATLPSO-ELS1   ATLPSO-ELS2   ATLPSO-ELS3   ATLPSO-ELS
Conventional Problems (F1 to F8)   0 (-)   2 (86.058)    4 (99.467)    7 (97.190)    7 (98.987)
Rotated Problems (F9 to F14)       0 (-)   1 (52.997)    0 (49.573)    4 (78.877)    6 (84.222)
Shifted Problems (F15 to F22)      0 (-)   1 (88.263)    3 (99.990)    1 (78.811)    8 (99.990)
Complex Problems (F23 to F26)      0 (-)   0 (72.883)    0 (75.650)    0 (72.916)    3 (75.300)
Composition Problems (F27 to F30)  0 (-)   0 (22.636)    1 (28.659)    0 (23.351)    2 (29.827)
Overall Result (F1 to F30)         0 (-)   4 (69.821)    8 (77.011)    12 (75.544)   26 (83.922)

adequate diversity to locate the shifted global optima in problems with shifted fitness landscapes. Finally, Tables 4.11 and 4.12 reveal that the complete ATLPSO-ELS achieves the best search accuracy in all problem categories as compared to the other ATLPSO-ELS variants. Specifically, the complete ATLPSO-ELS achieves the best Emean values on seven (out of eight), six (out of six), eight (out of eight), three (out of four), and two (out of four) of the conventional, rotated, shifted, complex, and composition problems, respectively. The excellent performance of ATLPSO-ELS across all tested problem categories implies that the three proposed strategies, namely, the two-layer framework, the ATA modules, and the ELS module, are integrated effectively in the ATLPSO-ELS. As shown in Table 4.12, none of the contributions of these proposed strategies is severely compromised when ATLPSO-ELS is employed to solve different types of problems.
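The %Improve values reported in Tables 4.11 and 4.12 quantify the relative reduction in mean error (Emean) achieved over the BPSO baseline. Assuming the standard relative-improvement definition (the exact formula is defined earlier in the thesis, not in this excerpt), a minimal sketch:

```python
def percent_improve(e_baseline: float, e_variant: float) -> float:
    """Relative reduction (%) of Emean against a baseline algorithm.

    Assumed definition: 100 * (Emean_baseline - Emean_variant) / Emean_baseline.
    A value of 100.000 means the variant drove the mean error to zero;
    a negative value means the variant performed worse than the baseline.
    """
    return 100.0 * (e_baseline - e_variant) / e_baseline

# F1 entries of Table 4.11: BPSO = 4.67E+03, ATLPSO-ELS = 0.00E+00
print(round(percent_improve(4.67e3, 0.0), 3))   # 100.0
```

This also explains the negative entries such as F24 in Table 4.11, where a variant's Emean slightly exceeds the BPSO baseline.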

4.3.5 Comparison with Other State-of-The-Art Metaheuristic Search Algorithms
To further verify the capability of the proposed ATLPSO-ELS, this subsection compares its search performance with that of five cutting-edge OED-based metaheuristic search (MS) algorithms, namely, OLPSO (Zhan et al., 2011), Differential Evolution with Orthogonal Crossover Operator (OXDE) (Wang et al., 2012), Biogeography-based Optimization with Orthogonal Crossover Operator (OXBBO) (Feng et al., 2013), Orthogonal Learning-based Artificial Bee Colony (OCABC) (Gao et al., 2013), and Orthogonal Teaching Learning based Optimization (OTLBO) (Satapathy et al., 2013). Some characteristics of these OED-based MS algorithms are briefly explained as follows. Both OXDE and OXBBO employ similar orthogonal crossover operators, which perform search space quantization with Q values set larger than two. Unlike the ATLPSO-ELS, which executes the OEDLS each time a particle successfully updates its self-cognitive experience, the OCABC randomly selects one individual (i.e., bee) to perform orthogonal learning in every iteration of the search process. Being an improved variant of TLBO, the OTLBO randomly selects m learners to perform orthogonal learning through a multi-parent crossover operator during the teacher and learner phases. It is noteworthy that, apart from orthogonal learning, no further modifications were made to OTLBO, given that OTLBO still uses Equations (2.11), (2.12), and (2.13) to update the learner's knowledge. In other words, the teacher and learner phases of the OTLBO are the same as in the original TLBO framework. In this subsection, the performance of ATLPSO-ELS is compared with the five MS algorithms across ten 30-D conventional problems. The Emean and SD values of all involved MS algorithms are presented in Table 4.13, and these results are summarized as w/t/l and #BME. It is noteworthy that the results of the compared MS peers were extracted from the literature (Wang et al., 2012, Feng et al., 2013, Gao et al., 2013, Satapathy et al., 2013). The Emean and SD values of an MS peer are reported as "NA" if its results on a particular benchmark are not available. Table 4.13 shows that most OED-based MS algorithms exhibit competitive search accuracy in solving the tested benchmark problems.
This observation suggests that the OED technique is indeed useful in enhancing the optimization capability of an MS algorithm. Among the six tested algorithms, the proposed ATLPSO-ELS yields the best search accuracy by successfully solving eight out of ten problems, i.e., twice as many best results as the third-ranked OCABC. Moreover, ATLPSO-ELS is also the only algorithm that successfully locates the

Table 4.13 Comparisons between ATLPSO-ELS and other OED-based MS variants in optimizing 30-D functions [Emean (SD)]

Function       OLPSO                OXDE                 OXBBO                OCABC                OTLBO                 ATLPSO-ELS
Sphere         1.55E-41 (3.32E-41)  1.58E-16 (1.41E-16)  8.84E-64 (8.92E-64)  4.32E-43 (8.16E-43)  0.00E+00 (0.00E+00)   0.00E+00 (0.00E+00)
Schwefel 2.22  5.76E-31 (5.46E-31)  4.38E-12 (1.93E-12)  5.25E-46 (3.50E-46)  1.17E-22 (7.13E-23)  2.11E-221 (0.00E+00)  0.00E+00 (0.00E+00)
Schwefel 1.2   2.17E-04 (2.72E-04)  6.41E-07 (4.98E-07)  1.81E-11 (1.77E-11)  NA                   0.00E+00 (0.00E+00)   0.00E+00 (0.00E+00)
Schwefel 2.21  2.81E+00 (1.47E+00)  1.49E+00 (9.62E-01)  2.39E-01 (2.58E-01)  5.67E-01 (2.73E-01)  4.01E-215 (0.00E+00)  0.00E+00 (0.00E+00)
Rosenbrock     4.78E-01 (1.32E+00)  1.59E-01 (7.97E-01)  2.61E+01 (9.85E-01)  7.89E-01 (6.27E-01)  NA                    4.67E+01 (1.98E+00)
Step           0.00E+00 (0.00E+00)  0.00E+00 (0.00E+00)  0.00E+00 (0.00E+00)  0.00E+00 (0.00E+00)  0.00E+00 (0.00E+00)   0.00E+00 (0.00E+00)
Quartic        1.35E-03 (1.40E-03)  2.95E-03 (1.32E-03)  9.41E-04 (3.76E-04)  4.39E-03 (2.03E-03)  1.69E-05 (1.22E-05)   8.47E+00 (4.13E-01)
Rastrigin      1.99E-01 (4.06E-01)  4.06E+00 (1.95E+00)  1.23E-01 (4.05E-01)  0.00E+00 (0.00E+00)  0.00E+00 (0.00E+00)   0.00E+00 (0.00E+00)
Ackley         3.52E-15 (1.55E-15)  2.99E-09 (1.54E-09)  2.66E-15 (0.00E+00)  5.32E-15 (1.82E-15)  1.11E-15 (1.00E-15)   0.00E+00 (0.00E+00)
Griewank       0.00E+00 (0.00E+00)  1.48E-03 (3.02E-03)  0.00E+00 (0.00E+00)  0.00E+00 (0.00E+00)  0.00E+00 (0.00E+00)   0.00E+00 (0.00E+00)
w/t/l          6/2/2                7/1/2                6/2/2                4/3/2                3/5/1                 -
#BME           2                    1                    2                    4                    6                     8

global optima of the Schwefel 2.22, Schwefel 2.21, and Ackley functions in 30-D. On the other hand, ATLPSO-ELS records the worst Emean values when tackling the Rosenbrock and Quartic functions. Although the performance of ATLPSO-ELS on these two benchmarks is not as consistent as that of the other compared algorithms, Table 4.13 shows that the optimization capability of ATLPSO-ELS remains more competitive than that of the state-of-the-art MS algorithms because the former is able to solve more problems than the latter.

4.3.6 Comparison in Real-World Problems
In this subsection, the capabilities of the proposed ATLPSO-ELS in solving (1) the gear train design problem (Sandgren, 1990), (2) the frequency-modulated (FM) sound synthesis problem (Das and Suganthan, 2010), and (3) the spread spectrum radar polyphase code design problem (Das and Suganthan, 2010) are investigated. The general descriptions and mathematical models of these engineering applications have been presented in detail in Section 2.4.2.
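For illustration, the gear train design problem (Sandgren, 1990) is commonly formulated as minimizing the squared deviation of the realized gear ratio from 1/6.931, with the four tooth counts restricted to integers in [12, 60]. A sketch under that common formulation (see Section 2.4.2 for the exact model used in this thesis):

```python
def gear_train_cost(ta: int, tb: int, tc: int, td: int) -> float:
    """Squared error between the realized gear ratio and the target 1/6.931.

    ta..td are the tooth counts of the four gears; in the usual formulation
    each is an integer in [12, 60].
    """
    return (1.0 / 6.931 - (ta * tb) / (tc * td)) ** 2

# A well-known near-optimal tooth assignment yields a cost around 2.7e-12,
# consistent with the Emean magnitudes reported in Table 4.14.
print(gear_train_cost(19, 16, 43, 49))
```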


Table 4.14 Simulation results of ATLPSO-ELS and six other PSO variants in the gear train design problem

Algorithm    Emean     SD        h   tmean
APSO         1.28E-08  1.70E-08  +   1.11E+02
FLPSO-QIW    3.34E-10  5.78E-10      1.17E+02
FPSO         7.48E-07  2.43E-06  +   9.21E+01
FIPS         5.59E-09  3.70E-08  +   9.14E+01
OLPSO-L      6.79E-09  1.25E-08  +   3.83E+01
OPSO         6.53E-07  1.97E-06  +   3.42E+01
ATLPSO-ELS   1.65E-09  4.94E-09      3.89E+01

Table 4.15 Simulation results of ATLPSO-ELS and six other PSO variants in the FM sound synthesis problem

Algorithm    Emean     SD        h   tmean
APSO         2.06E+01  5.46E+00  +   1.11E+02
FLPSO-QIW    5.23E+00  5.90E+00      1.14E+02
FPSO         1.75E+01  4.64E+00  +   7.90E+01
FIPS         1.19E+01  5.77E+00      9.01E+01
OLPSO-L      1.64E+01  5.94E+00  +   3.55E+01
OPSO         1.95E+01  5.57E+00  +   3.23E+01
ATLPSO-ELS   1.53E+01  1.06E+00      3.30E+01

Table 4.16 Simulation results of ATLPSO-ELS and six other PSO variants in the spread spectrum radar polyphase code design problem

Algorithm    Emean     SD        h   tmean
APSO         1.33E+00  1.92E-01  +   4.96E+02
FLPSO-QIW    1.02E+00  6.88E-02  +   9.64E+02
FPSO         1.13E+00  1.30E-01  +   2.48E+02
FIPS         1.04E+00  1.47E-01  +   2.75E+02
OLPSO-L      1.27E+00  1.97E-01  +   1.80E+02
OPSO         1.60E+00  2.33E-01  +   1.59E+02
ATLPSO-ELS   1.01E+00  1.95E-01      1.81E+02

All six PSO variants employed in the previous experiments (Section 4.3.3) are compared with the proposed ATLPSO-ELS on the three mentioned engineering design problems. The simulation settings for these real-world problems are summarized in the previous chapter (see Table 3.20). Meanwhile, the simulation results of all tested algorithms over 30 independent runs on the gear train design, FM sound synthesis, and spread spectrum radar polyphase code design problems are reported in Tables 4.14, 4.15, and 4.16, respectively. Specifically, the values of Emean, SD, h, and the mean computational time (tmean) obtained by each compared algorithm are summarized in these tables. As shown in Table 4.14, all involved PSO variants exhibit excellent search accuracy in solving the gear train design problem. Specifically, the Emean values produced by all tested algorithms are less than 10^-6. Among the seven tested algorithms, the proposed ATLPSO-ELS achieves the second best Emean value. The Wilcoxon test results (h) reported in Table

4.14 show that the search accuracy of ATLPSO-ELS in solving the gear train design problem significantly outperforms that of all compared peers, except for the first-ranked FLPSO-QIW. It is noteworthy that although the FLPSO-QIW produces more desirable Emean values than ATLPSO-ELS, the former's excellent search accuracy in solving the gear train design problem comes at the cost of a huge computational overhead (represented by tmean). Specifically, the tmean value required by the FLPSO-QIW to solve the gear train design problem is three times higher than that of ATLPSO-ELS. Meanwhile, Table 4.15 reveals that the ATLPSO-ELS is the third best optimizer in solving the FM sound synthesis problem. Specifically, the Emean value obtained by the ATLPSO-ELS is significantly better (i.e., lower) than those of APSO, FPSO, OLPSO-L, and OPSO, suggesting that the former exhibits superior search accuracy over these four algorithms on this problem. Although the Emean value produced by the ATLPSO-ELS in the FM sound synthesis problem is inferior to those of FLPSO-QIW and FIPS, the former is at least twice as fast as the latter two in terms of computational overhead. Finally, the simulation results in Table 4.16 reveal that all seven tested algorithms have similar search accuracies on the spread spectrum radar polyphase code design problem because the Emean values they produce are relatively similar. Among the seven tested algorithms, the proposed ATLPSO-ELS achieves the best Emean value, implying that it has the best search accuracy in solving the radar polyphase code design problem. The competitive search performance of ATLPSO-ELS on this problem is further verified by the Wilcoxon test, considering that the h values in Table 4.16 reveal that the search accuracy of the ATLPSO-ELS is statistically better than that of all compared peers.
Meanwhile, the computational overhead required by the ATLPSO-ELS to solve the spread spectrum radar polyphase code design problem is also promising because the tmean value consumed by this algorithm is comparable with that of the second-ranked OLPSO-L. From the simulation results presented in Tables 4.14 to 4.16, it is observed that most compared PSO variants are unable to balance the performance improvement in terms of search accuracy against the additional computational overhead incurred. For example, although

the FLPSO-QIW can generally solve the three engineering design problems with promising Emean values, it might be less feasible for some real-world applications because of its high computational overhead. Conversely, the OLPSO-L and OPSO, which generally produce lower tmean values, have relatively poor optimization capabilities, given that the Emean values of these two algorithms are relatively inferior. It is also worth mentioning that although the OLPSO-L performs well in solving the benchmark problems, it fails to maintain its excellent search accuracy in solving the real-world problems. Compared with these PSO variants, the proposed ATLPSO-ELS is able to tackle the three engineering design problems with satisfactory search accuracy without consuming excessive computational overhead. These experimental findings suggest that the proposed ATLPSO-ELS is a feasible candidate for tackling real-world optimization problems.
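The h values in Tables 4.14 to 4.16 come from pairwise Wilcoxon tests against ATLPSO-ELS over the 30 runs. A self-contained sketch of how such a significance flag can be derived with the rank-sum normal approximation; the '+'/'-'/'=' encoding here is an assumption mirroring the tables (the thesis defines its exact test settings elsewhere):

```python
import math

def wilcoxon_flag(errors_peer, errors_ref, alpha=0.05):
    """Rank-sum significance flag comparing two samples of final errors.

    Returns '+' when the reference algorithm is significantly better
    (lower errors), '-' when the peer is significantly better, and '='
    otherwise. Uses the normal approximation of the rank-sum statistic,
    which is adequate for samples of roughly 30 independent runs.
    """
    labeled = [(v, 0) for v in errors_peer] + [(v, 1) for v in errors_ref]
    labeled.sort()
    ranks = [0.0] * len(labeled)
    i = 0
    while i < len(labeled):                      # assign average ranks to ties
        j = i
        while j < len(labeled) and labeled[j][0] == labeled[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2.0         # 1-based average rank
        i = j
    w = sum(r for r, (v, s) in zip(ranks, labeled) if s == 0)
    n1, n2 = len(errors_peer), len(errors_ref)
    mu = n1 * (n1 + n2 + 1) / 2.0                # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    p = math.erfc(abs(w - mu) / sigma / math.sqrt(2))   # two-sided p-value
    if p >= alpha:
        return '='
    return '+' if sum(errors_peer) / n1 > sum(errors_ref) / n2 else '-'
```

For production use, a library routine such as SciPy's rank-sum test (with exact tie handling) would normally replace this hand-rolled approximation.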

4.3.7 Discussion
The previous subsections have shown that the proposed ATLPSO-ELS achieves superior search accuracy, search reliability, and search efficiency as compared to the well-established PSO variants and state-of-the-art MS algorithms. The competitive search performance of ATLPSO-ELS against its competitors is attributed to the three proposed strategies, namely (1) the two-layer evolution framework, (2) the ATAcurrent and ATAmemory modules, and (3) the ELS module. Focusing on the two-layer evolution framework of ATLPSO-ELS, this framework alternately evolves the current swarm and the memory swarm, based on the evolution statuses of the swarm members, during the search process. The selective execution of the current swarm evolution and the memory swarm evolution allows the algorithm to focus on one type of swarm evolution in each iteration. Specifically, the ATLPSO-ELS switches from one swarm evolution to the other only if the previous swarm evolution is no longer able to improve the fitness of the global best particle. Such an implementation enables the ATLPSO-ELS to rapidly locate the global optima of the tested benchmarks without incurring an unnecessary number of fitness evaluations (FEs). As

shown in Table 4.9 and Figure 4.7, the search efficiency of ATLPSO-ELS is proven to be better than that of the six other compared peers. Another innovation introduced into the ATLPSO-ELS is the development of the ATAcurrent and ATAmemory modules for the current swarm evolution and the memory swarm evolution, respectively. Based on the heuristic developed in this research work, these two modules offer systematic and adaptive task allocation mechanisms for balancing the exploration/exploitation searches of the current swarm members and the memory swarm members. For instance, the ATAmemory module assigns the memory swarm members with higher diversity values into the exploitation section, whereas the members with lower diversities are likely to be assigned to the exploration search. Unlike the peer-learning phase of TPLPSO described in the previous chapter, the systematic and adaptive mechanism of the ATA modules ensures that a certain number of ATLPSO-ELS particles always perform each of the exploration and exploitation searches. As reported in Tables 4.11 and 4.12, the combination of the two-layer framework and the ATA modules introduces the desired rotationally invariant property to ATLPSO-ELS. This hybridization allows the proposed algorithm to effectively solve the rotated problems with non-separable characteristics. Finally, the ELS module of ATLPSO-ELS consists of two unique learning strategies, namely, the OEDLS and SPLS. It is noteworthy that these two strategies are specifically designed to evolve the global best particle, and they play significantly different roles during the search process of ATLPSO-ELS. Being an important component in modeling the bidirectional learning mechanism between the global best particle and the particles with improved self-cognitive experiences, the OEDLS encourages the exploitation capability of the global best particle by extracting the useful information contained in the newly improved particles.
This information is subsequently transferred to the global best particle to further improve the quality of this solution. On the other hand, the SPLS serves as an effective countermeasure against stagnation because this strategy introduces additional momentum (i.e., via the perturbation process) to the global best particle when it is trapped in the inferior

regions of the search space for a long time. In other words, the SPLS promotes the exploration capability of the global best particle in ATLPSO-ELS. The experimental findings presented in Tables 4.11 and 4.12 show that the combination of the two-layer framework and the ELS module is effective in tackling the problems with shifted fitness landscapes. It is conjectured that this hybridization strategy provides the ATLPSO-ELS with adequate diversity to locate the shifted global optima during the search process. Despite its impressive search performance, the proposed ATLPSO-ELS suffers from some drawbacks. As shown in Table 4.10, the computational complexity of ATLPSO-ELS is relatively high as compared to that of its peer algorithms. This is attributed to the fact that too many metrics (i.e., PSD, PFSD, Pc, and Tsize) and parameters (i.e., K1, K2, Z, and m) are introduced into the ATLPSO-ELS to perform the adaptive task allocations on both the current and memory swarms. Some metrics, such as the PSD and PFSD, are computationally intensive to evaluate. Moreover, the parameter tuning process used to achieve reasonably good performance of ATLPSO-ELS is tedious and time consuming when so many parameters are involved. Another major concern of ATLPSO-ELS is that the proposed ATAmemory module restricts the particle's search strategy to two types, i.e., exploration and exploitation. In other words, the learning strategies employed in this module can only offer the ATLPSO-ELS particles limited choices of exploration/exploitation strengths during the search process. This limitation might restrict the overall optimization capability of the algorithm in dealing with different types of problems. This is justified by the fact that different types of optimization problems might exhibit different characteristics because of the different shapes of their fitness landscapes (Li et al., 2012).
This condition also applies within a specific problem, given that different sub-regions of a particular problem can be significantly different (Li et al., 2012). To effectively solve a given problem, each particle needs to be able to adaptively vary its exploration/exploitation strengths according to its search performance and its location in the search space.
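For concreteness, the two-type allocation performed by the ATAmemory module can be sketched as a simple partition. The diversity values and the median cut-off used here are hypothetical placeholders for the thesis's PSD/PFSD-based heuristic:

```python
import statistics

def allocate_memory_swarm(diversities):
    """Split memory swarm members into exploitation and exploration sections.

    Follows the rule described in the text: members with higher diversity
    are assigned to the exploitation section, those with lower diversity
    to the exploration section. The median cut-off is a placeholder for
    the actual adaptive threshold used by the ATAmemory module.
    """
    cut = statistics.median(diversities)
    exploitation = [i for i, d in enumerate(diversities) if d >= cut]
    exploration = [i for i, d in enumerate(diversities) if d < cut]
    return exploitation, exploration

# Hypothetical diversity values for a four-member memory swarm:
print(allocate_memory_swarm([0.9, 0.1, 0.5, 0.7]))   # ([0, 3], [1, 2])
```

The sketch makes the criticized limitation visible: every member receives one of exactly two search strengths, with no graded levels in between.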


To alleviate the drawbacks of ATLPSO-ELS, the following chapter introduces an innovative yet efficient mechanism to further improve the algorithm's search performance. Specifically, the mentioned mechanism is capable of introducing learning strategies that offer different degrees of exploration/exploitation strengths to the particles during the search process. This innovation is expected to improve the search performance of PSO, especially on problems with complicated fitness landscapes, without incurring excessive computational complexity in the algorithmic framework.

4.4 Summary
In this chapter, a total of three strategies, namely the two-layer evolution framework, the ATA modules, and the ELS module, are proposed and incorporated into the PSO. To this end, an ATLPSO-ELS algorithm is developed to evolve the current swarm and the memory swarm of PSO during the optimization process. During the current swarm evolution and the memory swarm evolution of ATLPSO-ELS, two ATA modules, namely ATAcurrent and ATAmemory, are employed to adaptively divide the respective swarm members into the exploration and exploitation sections. In other words, these ATA modules are employed by the ATLPSO-ELS to adaptively regulate the exploration/exploitation searches of the swarm members during the optimization process. Finally, the ELS module is specifically developed to evolve the global best particle when the predefined conditions are met. This module consists of two learning strategies called OEDLS and SPLS. The former aims to enhance the search accuracy of ATLPSO-ELS, whereas the latter attempts to alleviate the swarm stagnation issue. The simulation results obtained from the extensive comparative studies indicate that the proposed ATLPSO-ELS has superior search accuracy, search reliability, and search efficiency as compared to other state-of-the-art PSO and OED-based MS variants. Furthermore, an experimental study is also performed to investigate the contribution of each proposed strategy to the enhancement of the search performance of ATLPSO-ELS. The experimental results show that the different proposed strategies are useful in improving the search

accuracy of ATLPSO-ELS in dealing with different types of problems. Moreover, this experimental study also verifies that the integration of the three proposed strategies into the ATLPSO-ELS does not severely compromise the contribution of each strategy during the optimization process.


CHAPTER 5 PARTICLE SWARM OPTIMIZATION WITH ADAPTIVE TIME-VARYING TOPOLOGY CONNECTIVITY

5.1 Introduction
A new PSO variant called ATLPSO-ELS was proposed in the previous chapter. This algorithm is developed by capitalizing on the different dynamic behaviors exhibited by the current swarm and the memory swarm of PSO during the search process. Two adaptive task allocation (ATA) mechanisms are developed in the ATLPSO-ELS to adaptively balance the exploration and exploitation searches of the swarm members during the current swarm evolution and the memory swarm evolution. Furthermore, an elitist-based learning strategy (ELS) module, which consists of the OED-based learning strategy (OEDLS) and the stochastic perturbation-based learning strategy (SPLS), is proposed to specifically guide the global best particle of ATLPSO-ELS to seek the optimum solution of a given optimization problem. Specifically, the OEDLS and SPLS govern the exploitation and exploration capabilities of the global best particle of ATLPSO-ELS, respectively. The simulation results obtained from the thorough experimental studies reveal that these proposed modifications indeed contribute to the performance enhancement of ATLPSO-ELS. Despite its competitive search performance, two notable demerits can be identified in the algorithmic framework of ATLPSO-ELS. First, a considerable number of algorithmic metrics and parameters have been introduced to realize the ATA mechanisms of ATLPSO-ELS during the current swarm evolution and the memory swarm evolution. Some of these metrics are computationally intensive, and they tend to add complexity to the ATLPSO-ELS. Moreover, a series of labor-intensive and time-consuming parameter tuning processes is required to obtain reasonably promising search performance from ATLPSO-ELS, given the excessive number of parameters introduced into this algorithm. The second drawback of ATLPSO-ELS is that the swarm members of this algorithm are assigned learning strategies with limited choices of

exploration/exploitation strengths. This deficiency might lead to questionable performance of ATLPSO-ELS in dealing with different types of optimization problems, especially those with complicated fitness landscapes. To effectively alleviate the mentioned shortcomings of ATLPSO-ELS, this chapter proposes a new PSO variant known as PSO with Adaptive Time-Varying Topology Connectivity (PSO-ATVTC). Unlike the ATLPSO-ELS, a more innovative and yet efficient mechanism called the ATVTC module has been developed in PSO-ATVTC to adaptively vary the topology connection of each particle with respect to the other population members. This module introduces learning strategies that offer different degrees of exploration/exploitation strengths to the PSO-ATVTC particles during the search process. Apart from this, a new learning framework is also developed to effectively guide the search directions of the PSO-ATVTC particles based on their respective topological information obtained from the proposed ATVTC module. The remainder of this chapter is organized as follows. First, the methodology of the proposed PSO-ATVTC is thoroughly explained. Next, comprehensive experimental studies are conducted to evaluate the search performance of the proposed algorithm. The final section concludes the research work introduced in this chapter.

5.2 PSO with Adaptive Time-Varying Topology Connectivity This section begins with the research ideas that inspire the development of the proposed PSO-ATVTC. The general description of the PSO-ATVTC is provided in the subsequent subsections, followed by the implementation details of each algorithmic module employed by the proposed algorithm. The complete framework of the PSO-ATVTC is presented in the final subsection. Some important remarks are also provided to highlight the differences between the proposed work and the existing PSO variants.


5.2.1 Research Ideas of PSO-ATVTC
This subsection discusses the limitations encountered by the previously proposed ATLPSO-ELS. Some previous research findings that motivate the development of PSO-ATVTC, especially the ATVTC module, are also elaborated to highlight the contribution of the research work proposed in this chapter. Early studies revealed that different types of optimization problems have different characteristics, attributed to their different shapes of fitness landscape (Li et al., 2012). This scenario might also apply within certain problem categories (e.g., composition problems), considering that different sub-regions of these problems can be significantly different (Li et al., 2012). To enhance the robustness of PSO in tackling a diverse set of global optimization problems with different characteristics, each PSO particle needs to learn the information obtained from its current sub-region of the fitness landscape, as well as its search status. This information is useful for adaptively adjusting the exploration/exploitation strengths of each particle during the search process, and such a systematic mechanism could help the algorithm to effectively solve a given optimization problem. Revisiting the algorithmic framework of ATLPSO-ELS as described in the previous chapter, it is notable that the ATAmemory module tends to restrict the particle's search strategy to two types, i.e., exploration and exploitation, during the memory swarm evolution of ATLPSO-ELS. This observation suggests that the learning strategies assigned to the memory swarm members of ATLPSO-ELS via the ATAmemory module have limited choices of exploration/exploitation strengths in performing the search process. This limitation might restrict the overall optimization capability of ATLPSO-ELS in dealing with different types of problems.
On the other hand, it can be observed that the ATAcurrent module of ATLPSO-ELS possesses the ability to tune the exploration/exploitation strengths of the current swarm members based on the tournament size of each particle i (i.e., Tsize,i). Nevertheless, this mechanism tends to incur relatively high computational complexity because the derivation of Tsize,i relies on the population's fitness spatial diversity (PFSD) metric, which is computation-intensive. Based on these drawbacks, a more

innovative and less complicated alternative approach is required to adaptively vary the exploration/exploitation strengths of the particles, in order for the PSO to effectively solve different types of optimization problems. Kennedy (1999) investigated the impact of population topology on the transmission rate of the best solution within the PSO swarm. The experimental results revealed that the global version of PSO with larger topology connectivity [Figure 2.1(a)] is more exploitative, considering that this PSO variant performs well on simple problems. On the other hand, the local version of PSO with smaller topology connectivity [Figure 2.1(b)] favors complex problems, suggesting that this PSO variant is more explorative. These experimental findings imply that if the topology connectivity of each particle in the PSO swarm can be varied via certain topology connectivity modification (TCM) strategies, it is possible to introduce different degrees of exploration/exploitation strengths to each particle during the search process. This suggested approach immediately raises the following major concerns: (1) which TCM strategies are required to change a particle's topology connectivity, and (2) which heuristics are required by the particle to detect local environmental changes and then execute the appropriate TCM strategy to adapt its topology connectivity during the search process. To answer these questions, this chapter revisits the previous works of Hsieh et al. (2009) and Zhu et al. (2013). These researchers advocated that the population size of PSO and DE could indirectly influence the algorithm's effectiveness and efficiency. Following this line of thinking, Hsieh et al. (2009) and Zhu et al. (2013) developed a population manager with an efficient population utilization strategy (EPUS) and an adaptive population tuning scheme (APTS) to improve the search performance of PSO and DE, respectively.
Accordingly, these two strategies were able to adaptively adjust the algorithm's population size based on the online solution search status and the desired population distribution (Hsieh et al., 2009, Zhu et al., 2013). Specifically, both the EPUS and APTS strategies introduced a new individual into the population when a better solution could not be found for some generations. The new individual was expected

to provide some useful information to help the population escape the inferior regions of the search space. On the other hand, when the algorithm was able to find one or more superior solutions in the evolutionary process, this scenario implied that the existing individuals could be too many for the current solution search procedure. Both the EPUS and APTS strategies therefore expel the redundant individuals with inferior performance to conserve the computational load and speed up the solution search progress. Extensive experimental results verify the effectiveness of these strategies in enhancing the search performances of PSO and DE. The competitive performances demonstrated by the EPUS and APTS strategies have provided this chapter with some insights into adaptively controlling the topology connectivity of a PSO particle. Specifically, inspired by the mechanisms of the EPUS and APTS strategies in tuning the algorithm's population size, this chapter develops an ATVTC module to adaptively adjust the exploration/exploitation strengths of a particle. It is important to mention that, unlike the EPUS and APTS strategies, the subject to be varied in the proposed ATVTC module is the particle's topology connectivity instead of the algorithm's population size. A total of three TCM strategies, namely Increase, Decrease, and Shuffle, are introduced into the ATVTC module to adaptively tune the exploration/exploitation strengths of each particle during the search process. In the subsequent subsections, the general description of the proposed PSO-ATVTC is presented, followed by the implementation details of all essential components utilized in this algorithm.
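To make the three TCM strategies concrete, they can be sketched as set operations on a particle's neighbor list. The function names, trigger conditions, and connectivity bounds below are illustrative stand-ins for the actual ATVTC rules presented later in this chapter:

```python
import random

def increase_connectivity(neighbors, swarm_size, i):
    """Increase: add one new random neighbor, pushing particle i toward a
    more exploitative (gbest-like) search."""
    candidates = [j for j in range(swarm_size)
                  if j != i and j not in neighbors]
    if candidates:
        neighbors.add(random.choice(candidates))

def decrease_connectivity(neighbors, min_connectivity=2):
    """Decrease: drop one random neighbor, pushing the particle toward a
    more explorative (lbest-like) search. min_connectivity is a
    hypothetical lower bound."""
    if len(neighbors) > min_connectivity:
        neighbors.remove(random.choice(sorted(neighbors)))

def shuffle_connectivity(neighbors, swarm_size, i):
    """Shuffle: keep the connectivity level but redraw the neighbor set,
    refreshing the information sources without changing search strength."""
    k = len(neighbors)
    pool = [j for j in range(swarm_size) if j != i]
    neighbors.clear()
    neighbors.update(random.sample(pool, k))
```

In this sketch, only the neighbor sets change; the swarm size stays fixed, which is the key difference from the EPUS and APTS population-resizing mechanisms discussed above.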

5.2.2 General Description of PSO-ATVTC

The general description of the proposed PSO-ATVTC is provided in this subsection. As explained earlier, the key benefit of PSO-ATVTC is the synergy of a novel ATVTC module and a new learning framework within the PSO, offering better search performance. More particularly, these innovative strategies aim to improve the algorithm's search performance in tackling problems with complicated fitness landscapes, without incurring excessive computational complexity on the algorithmic framework. The general descriptions

of the two main algorithmic components of PSO-ATVTC, i.e., the ATVTC module and the new learning framework, are provided as follows. As explained in the earlier subsection, one of the merits offered by PSO-ATVTC is its capability to adaptively tune the exploration/exploitation strengths of different particles in different locations of the fitness landscape and in different search stages of the optimization process. This appealing feature of PSO-ATVTC is realized by adaptively varying the topology connectivity of each particle during the search process via the proposed ATVTC module. Specifically, a total of three TCM strategies, known as the Increase, Decrease, and Shuffle strategies, are introduced into the ATVTC module to adjust the particle's topology connectivity. It is noteworthy that each of these TCM strategies has a different impact on the particle's exploration/exploitation strengths, as will be further elaborated in the next subsection. Considering that the proposed ATVTC module adaptively adjusts the topology connectivity of each PSO-ATVTC particle, it can be anticipated that different PSO-ATVTC particles are assigned different neighborhood members. A new learning framework is therefore incorporated into PSO-ATVTC to effectively guide the search direction of each PSO-ATVTC particle based on the information acquired from its respective neighborhood members. Similar to the TPLPSO and ATLPSO-ELS proposed in the previous chapters, the new learning framework employed by PSO-ATVTC evolves the particles via two learning phases, which comprise a new velocity update mechanism and a new neighborhood search (NS) operator. It is noteworthy that the NS operator acts as the alternative learning phase of PSO-ATVTC, considering that it is only triggered to further evolve a particle if the latter fails to improve its personal best fitness via the new velocity update mechanism.

5.2.3 ATVTC Module

The ATVTC module is one of the key factors that determine the search performance of PSO-ATVTC, considering that this module is able to adaptively change the particle's

topology connectivity with time during the search process. Specifically, the ATVTC module aims to adaptively assign different exploration/exploitation strengths to different PSO-ATVTC particles via three TCM strategies (i.e., Increase, Decrease, and Shuffle) based on the search feedback status. As mentioned in the previous subsection, the idea of the ATVTC module is in fact inspired by the research works of Hsieh et al. (2009) and Zhu et al. (2013). Unlike these previous works, the ATVTC module manipulates the particle's topology connectivity instead of the population size. Moreover, this proposed module also employs a Shuffle strategy that is absent in the EPUS and APTS modules developed by Hsieh et al. (2009) and Zhu et al. (2013), respectively. The idea of employing the ATVTC module to adaptively vary the particle's exploration/exploitation strengths is explained as follows. In the initial stage of PSO-ATVTC, the topology connectivity of each particle i (i.e., TCi) is randomly initialized in the range of [TCmin, TCmax], where TCmin = 1 and TCmax = S - 1 represent the minimum and maximum topology connectivity, respectively, and S denotes the algorithm's population size. It is important to mention that the topology connectivity of different particles can be randomly initialized to different values, ensuring that different exploration/exploitation strengths are introduced to the PSO-ATVTC particles during the initial stage of searching. For instance, particle i may have TCi = 1 (i.e., lower topology connectivity) and tends to be more explorative, whereas particle j is initialized with TCj = 3 (i.e., higher topology connectivity) to have a stronger exploitation strength. Based on the value of TCi obtained, particle i randomly selects TCi members from the PSO-ATVTC population as its initial neighborhood members (i.e., neighbori).
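The initialization step described above can be sketched in Python as follows; the function name is hypothetical, and the sketch only covers the drawing of TCi and the random (unidirectional) neighbor selection:

```python
import random

def init_topology(swarm_size, rng=random):
    """Sketch of ATVTC initialization: each particle i draws a connectivity
    TC_i in [TC_min, TC_max] = [1, S-1], then picks TC_i distinct neighbors
    uniformly at random.  Links need not be bidirectional: j may appear in
    particle i's list without i appearing in particle j's list."""
    tc_min, tc_max = 1, swarm_size - 1
    tc = [rng.randint(tc_min, tc_max) for _ in range(swarm_size)]
    neighbours = []
    for i in range(swarm_size):
        candidates = [j for j in range(swarm_size) if j != i]
        neighbours.append(rng.sample(candidates, tc[i]))
    return tc, neighbours
```
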
Unlike the neighborhood structure adopted by the previously proposed DMS-PSO (Liang and Suganthan, 2005) and DNLPSO (Nasir et al., 2012), the particles of the proposed PSO-ATVTC are not connected in a bidirectional manner. For example, in the case of TCi = 1, if particle i has chosen particle j with TCj = 3 as its neighbor, it is not necessary for particle j to select particle i as one of its neighbors. Particle j could select other particles (e.g., particles k, m, and l) as its neighborhood members (i.e., neighborj) instead. Figure 5.1 illustrates the possible

Figure 5.1: Possible topology connectivity of each PSO-ATVTC particle during the initialization of ATVTC module.

topology connectivity of each PSO-ATVTC particle during the initialization of the ATVTC module. During the search process, the ATVTC module keeps track of the search performance of each PSO-ATVTC particle and then makes appropriate adjustments to the particle's connectivity. This strategy enables the ATVTC module to adaptively assign different exploration/exploitation strengths to different particles in different locations of the search space and in different search stages of the optimization process. A total of three TCM strategies, namely the Increase strategy, the Decrease strategy, and the Shuffle strategy, are adopted in the ATVTC module to achieve this objective. Each of these TCM strategies is triggered by a different scenario encountered by PSO-ATVTC during the search process, and these scenarios are described as follows. Scenario 1, where particle i fails to improve the objective function value (ObjV) of the global best particle Pg [i.e., ObjV(Pg)] for Z successive fitness evaluations (FEs), signifies that particle i could be trapped in the inferior regions of the search space. In other words, the current information provided by the existing neighborhood members of particle i might be too little to handle its current solution searching. To address this scenario, the Increase strategy is triggered by the ATVTC module. Specifically, the ATVTC module increases the TCi of particle i by one, allowing particle i to randomly select a new neighbor from the population. The selected new neighbor shares its updated information with particle i to help the

Figure 5.2: Graphical illustration of the Increase strategy performed by the ATVTC module when scenario 1 is met.

Figure 5.3: Graphical illustration of the Decrease strategy performed by the ATVTC module when scenario 2 is met.

latter to escape from the mentioned inferior regions. Figure 5.2 illustrates the execution of the Increase strategy in scenario 1, where the ATVTC module increases the TCi of particle i from two to three. Particle i selects particle m as its new neighbor, and neighbori now consists of three members (i.e., particles j, k, and m). Notably, only the neighborhood structure of particle i is changed in this process; the topology connectivity of the remaining particles remains unaltered. Scenario 2, where particle i successfully updates ObjV(Pg) for Z successive FEs, implies that the existing members of neighbori could be too redundant for the current solution searching. The Decrease strategy is therefore executed by the ATVTC module to deal with this scenario. Specifically, the ATVTC module reduces the TCi of particle i by one and randomly expels one member from neighbori. Figure 5.3 illustrates the

execution of the Decrease strategy in scenario 2, where the ATVTC module decreases the TCm of particle m from two to one. Particle m randomly selects particle i and then expels it from neighborm. The updated neighborm now consists of one member (i.e., particle l). Similar to scenario 1, the neighborhood structure of the other particles remains unchanged. It is important to mention that, in the proposed ATVTC module, both the Increase and Decrease strategies are applied to vary particle i's topology connectivity only if its TCi lies between TCmin and TCmax (i.e., TCmin < TCi < TCmax). In the extreme cases of TCmin and TCmax, it is impossible to further decrease or increase particle i's topology connectivity, respectively. Thus, in scenario 3, where the TCi of particle i is trapped at the upper (i.e., TCmax = S - 1) or lower (i.e., TCmin = 1) boundary for Z successive FEs, the Shuffle strategy is introduced into the ATVTC module to randomly generate a new TCi for particle i. New neighborhood members are then assigned to particle i by randomly selecting

Figure 5.4: Graphical illustration of the Shuffle strategy performed by the ATVTC module in scenario 3 when (a) TCk = TCmin and (b) TCn = TCmax.

TCi members from the population. The Shuffle strategy can be interpreted as a random perturbation process because it provides new topological information for particle i and allows it to search in the new direction provided by the new neighbori. Figures 5.4(a) and 5.4(b) illustrate two possible examples of scenario 3. In Figure 5.4(a), where particle k's topology connectivity is trapped at the lower boundary, the Shuffle strategy randomly changes the TCk of particle k from one (TCmin) to three. This action allows particle k to abandon its original neighbor (i.e., particle l) and randomly select three population members (i.e., particles j, m, and n) as its new neighbork. In Figure 5.4(b), where particle n's connectivity reaches the upper boundary, the Shuffle strategy randomly changes the TCn of particle n from five (TCmax) to two. Thus, particle n resets its original neighborn and randomly selects two population members (i.e., particles i and j) as its new neighbors. Figure 5.5 illustrates the overall mechanism of the proposed ATVTC module. Initially, the topology connectivity of all particles in the population is randomly initialized, and each particle selects TC population members as its neighbors. For example, particle i with TCi = 2 selects particles j and k as its neighbors. After a certain number of FEs, particle m meets scenario 1, and the Increase strategy is triggered by the ATVTC module to increase TCm from two to three. Particle m then selects particle k as its new neighbor, and neighborm now consists of particles i, k, and l. Next, particle j with TCj = 1 meets scenario 3 because its TCj stagnates at the lower boundary for Z successive FEs. Thus, the Shuffle strategy is executed by the ATVTC module to randomly generate a new TCj for particle j (e.g., TCj = 3).
Once the new TCj is obtained, particle j immediately discards its original neighbor (i.e., particle k) and then randomly selects three particles (i.e., particles l, m, and n) as its new neighbors. This is then followed by particle l, which encounters scenario 2, and the Decrease strategy of the ATVTC module is employed to reduce TCl from two to one. Thus, particle l randomly selects particle k and expels it from its neighborl. Next, particle n meets scenario 3, given that its TCn stagnates at the upper boundary for Z successive FEs. Thus, the ATVTC module performs the Shuffle strategy by randomly assigning a new


Figure 5.5: Graphical illustration of the ATVTC module mechanism.

TCn for particle n (e.g., TCn = 2). Particle n then resets its neighborn and randomly selects two particles (i.e., particles k and m) as its new neighbors. These processes are repeated until PSO-ATVTC meets its termination criterion. The implementation of the proposed ATVTC module is illustrated in Figure 5.6. As shown in this figure, a total of four variables, i.e., IMi, DMi, UMi, and LMi, are employed by the ATVTC module to monitor the search status of each particle i. Specifically, the variables IMi and DMi record the numbers of consecutive FEs in which particle i successfully updates and fails to update ObjV(Pg), respectively. Meanwhile, the variables UMi and LMi document the numbers of consecutive FEs in which the TCi of particle i stays at the upper (TCmax) and lower (TCmin) boundaries of topology connectivity, respectively. When the values stored in these variables exceed the predefined threshold Z, the ATVTC module adaptively regulates the exploration/exploitation strengths of the involved particle by systematically tuning its topology connectivity via the appropriate TCM strategy. Apart from these four variables (i.e., IMi, DMi, UMi, and LMi), two exemplars that play major roles in guiding the search direction of particle i through the proposed learning framework are also observed in Figure 5.6. These


ATVTC_module(IMi, DMi, UMi, LMi, TCi, neighbori, fes, Pg, ObjV(Pg))
Input: Variables used to monitor the search status of particle i (IMi, DMi, UMi, LMi), topology connectivity of particle i (TCi), neighborhood members of particle i (neighbori), number of fitness evaluations consumed (fes), global best position (Pg), ObjV of the global best position [ObjV(Pg)]
1: if (DMi > Z) and (TCi < TCmax) then /*Scenario 1, Increase strategy*/
2:   TCi = TCi + 1;
3:   Randomly select one member from the population and update neighbori;
4:   [cexp,i, sexp,i] = Generate_Exemplars(neighbori);
5:   Perform fitness evaluation on cexp,i and sexp,i;
6:   Update Pg and ObjV(Pg) if cexp,i or sexp,i has better fitness;
7:   fes = fes + 2;
8:   DMi = UMi = LMi = 0;
9: else if (IMi > Z) and (TCi > TCmin) then /*Scenario 2, Decrease strategy*/
10:  TCi = TCi - 1;
11:  Randomly select and expel one member from neighbori;
12:  [cexp,i, sexp,i] = Generate_Exemplars(neighbori);
13:  Perform fitness evaluation on cexp,i and sexp,i;
14:  Update Pg and ObjV(Pg) if cexp,i or sexp,i has better fitness;
15:  fes = fes + 2;
16:  IMi = UMi = LMi = 0;
17: else if (LMi > Z) or (UMi > Z) then /*Scenario 3, Shuffle strategy*/
18:  Randomly generate a new TCi in the range of [TCmin, TCmax];
19:  Reset neighbori. Randomly select TCi members from the population and update neighbori;
20:  [cexp,i, sexp,i] = Generate_Exemplars(neighbori);
21:  Perform fitness evaluation on cexp,i and sexp,i;
22:  Update Pg and ObjV(Pg) if cexp,i or sexp,i has better fitness;
23:  fes = fes + 2;
24:  IMi = DMi = UMi = LMi = 0;
25: end if
Output: Updated IMi, DMi, UMi, LMi, TCi, neighbori, fes, Pg, ObjV(Pg)
Figure 5.6: ATVTC module of the PSO-ATVTC.

exemplars are known as the cognitive exemplar (cexp,i) and the social exemplar (sexp,i) of particle i. Specifically, the sexp,i exemplar is derived from the elite members of particle i's neighborhood via roulette wheel selection, whereas the cexp,i exemplar is computed from the non-elite members through the search strategy inspired by Mendes et al. (2004). From Figure 5.6, it is notable that the sexp,i and cexp,i exemplars of particle i are updated by the ATVTC module each time the topology connectivity of particle i is changed via the Increase, Decrease, or Shuffle strategy (see lines 4, 12, and 20). This allows the PSO-ATVTC particles to perform the search process based on the latest information provided by


their respective neighborhood members. The formulations of the sexp,i and cexp,i exemplars will be further elaborated in Section 5.2.4(a).
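The three TCM strategies can be sketched in Python as follows. This is a hedged, simplified rendering of the connectivity-adjustment logic only: the exemplar refresh and fitness bookkeeping performed by the full module are omitted, and the function name and counter arguments are illustrative:

```python
import random

def tcm_update(tc_i, neighbours_i, all_ids, i, DM, IM, UM, LM, Z, rng=random):
    """Simplified sketch of the Increase/Decrease/Shuffle strategies.
    all_ids lists the particle indices; DM/IM/UM/LM are the stagnation
    counters described in the text, compared against the threshold Z."""
    tc_min, tc_max = 1, len(all_ids) - 1
    others = [j for j in all_ids if j != i]
    if DM > Z and tc_i < tc_max:                      # Scenario 1: Increase
        tc_i += 1
        pool = [j for j in others if j not in neighbours_i]
        neighbours_i.append(rng.choice(pool))         # add one new neighbor
    elif IM > Z and tc_i > tc_min:                    # Scenario 2: Decrease
        tc_i -= 1
        neighbours_i.remove(rng.choice(neighbours_i)) # expel one neighbor
    elif LM > Z or UM > Z:                            # Scenario 3: Shuffle
        tc_i = rng.randint(tc_min, tc_max)            # new random TC_i
        neighbours_i = rng.sample(others, tc_i)       # fresh neighborhood
    return tc_i, neighbours_i
```
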

5.2.4 Proposed Learning Framework

As mentioned earlier, each PSO-ATVTC particle has its own unique set of neighborhood members because its topology connectivity is adaptively tuned by the proposed ATVTC module based on its search status. Intuitively, the information acquired from the unique neighborhood structure of each particle needs to be fully utilized in order to effectively guide the particle's search direction. To achieve this purpose, this research work develops a new learning framework, consisting of a new velocity update mechanism and a new neighborhood search (NS) operator, for the proposed PSO-ATVTC. This section begins by explaining the methodologies used to generate the sexp,i and cexp,i exemplars of each PSO-ATVTC particle. Notably, these two exemplars are crucial in guiding particle i via the proposed learning framework. In the following subsections, detailed descriptions of (1) the new velocity update mechanism and (2) the new NS operator adopted in the proposed learning strategy of PSO-ATVTC are provided.

5.2.4(a) Derivation of the Cognitive Exemplar and the Social Exemplar

As shown in Equation (2.1), the self-cognitive component and the social component of the PSO are represented by the second component, (Pi,d - Xi,d), and the third component, (Pn,d - Xi,d), respectively. In general, the exemplar of the social component, i.e., Pn,d = Pg,d, has better fitness (or lower ObjV) than the exemplar of the self-cognitive component, i.e., Pi,d. Based on these observations, two exemplars, called the cognitive exemplar (cexp,i) and the social exemplar (sexp,i), are generated from the neighborhood members of particle i (neighbori) to guide particle i during the search process of PSO-ATVTC. The derivation of these exemplars starts by sorting neighbori according to the personal best fitness of each member. The fitter members, with personal best fitness ranked in the first quartile range (n_upperi), are used to generate the sexp,i exemplar, whereas the members

in the remaining three quartiles (n_loweri) are used to construct the cexp,i exemplar. The two different approaches used to generate the sexp,i and cexp,i exemplars are explained as follows. The sexp,i exemplar of particle i is generated from n_upperi via the roulette wheel selection technique. Specifically, each member k in n_upperi is assigned a weight Wk computed as:

$$W_k = \frac{ObjV_{\max} - ObjV(P_k)}{ObjV_{\max} - ObjV_{\min}}, \quad k \in [1, K] \qquad (5.1)$$

where ObjVmax and ObjVmin represent the maximum (worst) and minimum (best) ObjV values of the members in n_upperi; ObjV(Pk) denotes the ObjV of member k in n_upperi; and K represents the number of members in n_upperi. Equation (5.1) reveals that members with better personal best fitness [i.e., lower ObjV(Pk)] are always assigned larger Wk values. Therefore, they have a greater chance of being selected to construct the sexp,i exemplar. To prevent the sexp,i exemplar from being contributed solely by the best member of n_upperi, one dimensional component dr of the sexp,i exemplar, i.e., sexp,i(dr), is randomly selected. The value of sexp,i(dr) is then replaced with the dr-th component of a randomly selected member from n_upperi. On the other hand, the idea of deriving the cexp,i exemplar is inspired by Mendes et al. (2004), and it is calculated as follows:

$$c_{\exp,i} = \frac{\sum_{P_k \in n\_lower_i} c_k r_k P_k}{\sum_{P_k \in n\_lower_i} c_k r_k} \qquad (5.2)$$

where Pk refers to the neighborhood members of particle i stored in n_loweri; rk is a random number in the range of [0, 1]; and ck is the acceleration coefficient equally distributed among the Ni members of n_loweri, calculated as ck = call / Ni, where call = 4.1 (Mendes et al., 2004). Unlike the former case, where only the fitter members of n_upperi have a higher likelihood of participating in constructing the sexp,i exemplar, Equation (5.2) ensures that all members in n_loweri have equal chances to contribute during


[cexp,i, sexp,i] = Generate_Exemplars(neighbori)
Input: Neighborhood members of particle i (neighbori)
1: Sort the members in neighbori according to their personal best fitness;
2: Assign the members with better personal best fitness or lower ObjV values (in the first quartile range) into n_upperi;
3: Assign the remaining members with worse personal best fitness or higher ObjV values into n_loweri;
4: /*Generate the sexp,i exemplar*/
5: for each member k in n_upperi do
6:   Calculate Wk for each n_upperi member k using Equation (5.1);
7: end for
8: Randomly select a dimension dr;
9: for each dimension d do
10:  if d ≠ dr then
11:    Perform the roulette wheel selection based on the Wk of each member of n_upperi;
12:    sexp,i(d) ← d-th component of the selected member from n_upperi;
13:  else /*d = dr*/
14:    Randomly select one member from n_upperi;
15:    sexp,i(dr) ← dr-th component of the selected member from n_upperi;
16:  end if
17: end for
18: /*Generate the cexp,i exemplar*/
19: Calculate cexp,i from n_loweri using Equation (5.2);
Output: The cognitive exemplar (cexp,i) and the social exemplar (sexp,i) of particle i
Figure 5.7: Derivation of the cexp,i and sexp,i exemplars in the PSO-ATVTC.

the derivation of the exemplar. Figure 5.7 demonstrates the mechanisms of generating the sexp,i and cexp,i exemplars of particle i.
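A Python sketch of the exemplar derivation follows. It mirrors Figure 5.7 and Equations (5.1)-(5.2) under stated assumptions: the function name is illustrative, the fallback for neighborhoods too small to split into quartiles is an added guard not discussed in the text, and call = 4.1 is split equally among the n_loweri members:

```python
import random

def generate_exemplars(neighbour_pbests, neighbour_objv, rng=random):
    """Illustrative sketch: roulette-wheel s_exp from the first-quartile
    members (Eq. 5.1) and a stochastic weighted mean c_exp from the
    remaining members (Eq. 5.2).  Minimization is assumed."""
    order = sorted(range(len(neighbour_objv)), key=lambda k: neighbour_objv[k])
    q = max(1, len(order) // 4)
    upper, lower = order[:q], order[q:] or order[:]   # guard tiny neighborhoods

    dim = len(neighbour_pbests[0])
    # --- s_exp: fitness-proportionate selection per dimension (Eq. 5.1) ---
    objs = [neighbour_objv[k] for k in upper]
    omax, omin = max(objs), min(objs)
    span = (omax - omin) or 1.0
    weights = [(omax - o) / span for o in objs]
    if sum(weights) == 0:                             # all weights zero: uniform
        weights = [1.0] * len(upper)
    dr = rng.randrange(dim)  # one dimension taken from a uniformly random member
    s_exp = []
    for d in range(dim):
        k = rng.choice(upper) if d == dr else rng.choices(upper, weights)[0]
        s_exp.append(neighbour_pbests[k][d])

    # --- c_exp: weighted mean of the non-elite members (Eq. 5.2) ---
    ck = 4.1 / len(lower)                             # c_all = 4.1 split equally
    c_exp, total = [0.0] * dim, 0.0
    for k in lower:
        rk = rng.random()
        total += ck * rk
        for d in range(dim):
            c_exp[d] += ck * rk * neighbour_pbests[k][d]
    c_exp = [v / total for v in c_exp]
    return c_exp, s_exp
```
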

5.2.4(b) Proposed Velocity Update Mechanism

In this subsection, the new velocity update mechanism employed by the proposed learning framework of PSO-ATVTC to guide the particle's search direction is described. Specifically, this velocity update mechanism utilizes the cexp,i exemplar and the global best particle Pg to adjust the velocity of particle i. Given that the derivation of the cexp,i exemplar involves the probabilistic mechanism shown in Equation (5.2), two possible cases might be encountered: (1) the cexp,i exemplar has a better fitness (i.e., lower ObjV) than the personal best fitness of particle i [i.e., ObjV(cexp,i) < ObjV(Pi)], or (2) the cexp,i exemplar has a worse fitness (i.e., higher ObjV) than the personal best fitness of particle i [i.e., ObjV(cexp,i) ≥ ObjV(Pi)]. For case 1, particle i


is attracted towards the fitter cexp,i exemplar, considering that the latter has better fitness and hence is more likely to offer a promising search direction to guide the former. For case 2, particle i is driven away from the inferior cexp,i exemplar because the latter is unlikely to benefit the former's fitness improvement. This repelling mechanism provides more exploratory moves to particle i, allowing it to wander around the unexplored subregions of the search space. The new velocity update mechanism used to update the velocity of each PSO-ATVTC particle i (i.e., Vi) is mathematically described as follows:

$$V_i = \begin{cases} V_i + c_1 r_1 (c_{\exp,i} - X_i) + c_2 r_2 (P_g - X_i), & ObjV(c_{\exp,i}) < ObjV(P_i) \\ V_i - c_1 r_3 (c_{\exp,i} - X_i) + c_2 r_4 (P_g - X_i), & \text{otherwise} \end{cases} \qquad (5.3)$$

where r1, r2, r3, and r4 are random numbers in the range of [0, 1]. Once the new velocity of particle i is obtained from Equation (5.3), the new position of particle i (i.e., Xi) is calculated using Equation (2.2). The ObjV of particle i is then evaluated and compared with those of the personal best position of particle i (i.e., Pi) and the global best particle (i.e., Pg). If the new position of particle i has a better fitness than those of Pi and Pg [i.e., ObjV(Xi) < ObjV(Pi) and ObjV(Xi) < ObjV(Pg)], the ObjV values and position vectors of Pi and Pg are replaced by those of the new position. According to the "two step forward, one step back" scenario described by van den Bergh and Engelbrecht (2004), a particle i that successfully updates its personal best position Pi may provide some useful information in certain components of the improved Pi. To extract such information for the enhancement of the existing global best particle Pg, an elitist-based knowledge extraction (EKE) module is proposed and applied to any particle i that successfully updates its Pi. Specifically, the EKE iteratively checks each dimension of Pg, replacing that dimension with the corresponding dimensional value of Pi if Pg is improved by doing so. Figure 5.8 shows the implementation of the EKE module. It is noteworthy that the EKE module serves a similar purpose to the OEDLS module of ATLPSO-ELS [see Section 4.2.6(b)] because the former allows the Pg particle to learn useful information from the improved dimensions

EKE(Pi, Pg, ObjV(Pg), fes)
Input: Particle i's personal best position (Pi), global best position (Pg) and the associated ObjV [ObjV(Pg)], number of fitness evaluations consumed (fes)
1: for each dimension d do
2:   Pgtemp = Pg and ObjV(Pgtemp) = ObjV(Pg); /*where Pgtemp is a temporary particle*/
3:   Pgtemp(d) = Pi(d);
4:   Perform fitness evaluation on the updated Pgtemp;
5:   if ObjV(Pgtemp) < ObjV(Pg) then
6:     Pg(d) = Pgtemp(d);
7:   end if
8:   fes = fes + 1;
9: end for
Output: Updated Pg, ObjV(Pg), fes
Figure 5.8: EKE of the PSO-ATVTC.

of Pi. Compared with the OEDLS, the EKE module requires a less complicated implementation to identify the useful components of the improved Pi because it does not generate the orthogonal array (OA) or perform the factor analysis (FA). Moreover, unlike the SPLS module of ATLPSO-ELS [see Section 4.2.6(b)], the EKE module does not apply any random perturbation to the Pg particle. This is because a similar mechanism, i.e., the Shuffle strategy, has already been introduced in the ATVTC module (see Section 5.2.3). In contrast to the SPLS of ATLPSO-ELS, which only perturbs the Pg particle, the Shuffle strategy of PSO-ATVTC perturbs the topology connectivity of all population members. The latter strategy is expected to provide more diversity to the PSO-ATVTC swarm and hence improve the algorithm's robustness against premature convergence.
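The sign-flip rule of Equation (5.3) and the greedy per-dimension EKE scan of Figure 5.8 can be sketched as follows. The function names are illustrative, and c1 = c2 = 2.0 is an assumed setting rather than one stated for this equation in the text:

```python
import random

def update_velocity(v, x, c_exp, pg, objv_cexp, objv_pi, c1=2.0, c2=2.0, rng=random):
    """Sketch of Eq. (5.3): attract towards c_exp when it beats the personal
    best ObjV, repel from it otherwise; the pull towards P_g is kept in both
    cases.  Minimization is assumed."""
    ra, rb = rng.random(), rng.random()
    sign = 1.0 if objv_cexp < objv_pi else -1.0
    return [vd + sign * c1 * ra * (cd - xd) + c2 * rb * (gd - xd)
            for vd, xd, cd, gd in zip(v, x, c_exp, pg)]

def eke(p_i, p_g, objv_g, evaluate):
    """Sketch of the EKE module (Figure 5.8): greedily copy each dimension of
    the improved P_i into P_g whenever doing so lowers ObjV(P_g); returns the
    refined P_g, its ObjV, and the fitness evaluations spent."""
    fes = 0
    for d in range(len(p_g)):
        trial = list(p_g)
        trial[d] = p_i[d]
        f = evaluate(trial)
        fes += 1
        if f < objv_g:
            p_g, objv_g = trial, f   # keep the improved dimension
    return p_g, objv_g, fes
```
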

5.2.4(c) Proposed Neighborhood Search Operator

Considering that particle i is not guaranteed to always improve its fitness when evolved through the proposed velocity update mechanism, it is necessary to develop an alternative learning phase within the proposed learning framework of PSO-ATVTC. To this end, a neighborhood search (NS) operator is introduced to offer an alternative search direction to particle i when it fails to achieve a fitness improvement in the earlier learning phase.

oexp,i = NS_Generate_Exemplars(cexp,i, sexp,i, Pg, ObjV(Pg), fes)
Input: The cognitive exemplar (cexp,i) and the social exemplar (sexp,i) of particle i, global best position (Pg) and the associated ObjV [ObjV(Pg)], number of fitness evaluations consumed (fes)
1: for each dimension d do
2:   if rand < 0.5 then
3:     oexp,i(d) = sexp,i(d);
4:   else
5:     oexp,i(d) = cexp,i(d);
6:   end if
7: end for
8: Perform fitness evaluation on oexp,i;
9: Update Pg and ObjV(Pg) if oexp,i has better fitness;
10: fes = fes + 1;
Output: The oexp,i exemplar of particle i, the updated Pg, ObjV(Pg), fes
Figure 5.9: Derivation of the oexp,i exemplar in the PSO-ATVTC.

Prior to the execution of the NS operator, another exemplar, called the oexp,i exemplar, is constructed by the operator to guide particle i. The oexp,i exemplar is derived from the previously described cexp,i and sexp,i exemplars via a simple crossover process. Specifically, if a randomly generated number is smaller than 0.5, the d-th dimensional component of oexp,i, i.e., oexp,i(d), is donated by sexp,i(d). Otherwise, it is obtained from the d-th dimensional component of cexp,i. The procedure to generate the oexp,i exemplar is summarized in Figure 5.9. Similar to the cexp,i exemplar, the fitness of the oexp,i exemplar can either outperform or underperform the personal best fitness of particle i. Therefore, a similar strategy is employed by the NS operator to handle these two cases, i.e., (1) if ObjV(oexp,i) < ObjV(Pi), particle i is attracted toward the oexp,i exemplar, and (2) if ObjV(oexp,i) ≥ ObjV(Pi), particle i is driven away from the oexp,i exemplar. Mathematically, the NS operator adjusts the Pi position of particle i as follows:

$$P_i^{temp} = \begin{cases} P_i + c\,r_5 (o_{\exp,i} - P_i), & ObjV(o_{\exp,i}) < ObjV(P_i) \\ P_i - c\,r_6 (o_{\exp,i} - P_i), & \text{otherwise} \end{cases} \qquad (5.4)$$

where Pitemp is the adjusted self-cognitive experience of particle i; c is the acceleration coefficient, set to 2.0 (Liang et al., 2006); r5 and r6 are random numbers in the range


NS_Operator(cexp,i, sexp,i, Pi, Pg, ObjV(Pg), fes)
Input: The cognitive exemplar (cexp,i) and the social exemplar (sexp,i) of particle i, personal best position of particle i (Pi) and the associated ObjV [ObjV(Pi)], global best position (Pg) and the associated ObjV [ObjV(Pg)], number of fitness evaluations consumed (fes)
1: oexp,i = NS_Generate_Exemplars(cexp,i, sexp,i, Pg, ObjV(Pg), fes);
2: Calculate the Pitemp of particle i using Equation (5.4);
3: Perform fitness evaluation on Pitemp;
4: fes = fes + 1;
5: previous_positioni = Pi, previous_ObjVi = ObjV(Pi);
6: Update Pi, Pg, ObjV(Pi), and ObjV(Pg) if Pitemp is fitter than Pi and Pg;
7: if ObjV(Pi) < previous_ObjVi and Pg ≠ Pi then
8:   Perform the EKE(Pi, Pg, ObjV(Pg), fes);
9: end if
Output: Updated Pi, ObjV(Pi), Pg, ObjV(Pg), fes
Figure 5.10: NS operator of the PSO-ATVTC.

of [0, 1]. The ObjV of Pitemp is evaluated and then compared with those of Pi and Pg [i.e., ObjV (Pi) and ObjV (Pg)]. If Pitemp has a better fitness than Pi and Pg [i.e., ObjV (Pitemp) < ObjV (Pi) and ObjV (Pitemp) < ObjV (Pg)], the updated Pitemp replaces both Pi and Pg. Moreover, if the newly updated Pitemp has better fitness than the original Pi, the EKE module is applied to extract the useful information from Pitemp to refine the Pg particle. Figure 5.10 shows the implementation of the NS operator.
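The crossover of Figure 5.9 and the attraction/repulsion move of Equation (5.4) can be sketched together in Python. This is a simplified illustration: the function name is hypothetical, the follow-up EKE refinement and the Pg update are omitted, and c = 2.0 follows the setting quoted above:

```python
import random

def ns_operator(p_i, objv_pi, c_exp, s_exp, evaluate, c=2.0, rng=random):
    """Simplified sketch of the NS operator: build o_exp by uniform crossover
    of s_exp and c_exp, then move the personal best towards o_exp if o_exp is
    fitter, away from it otherwise (Eq. 5.4).  Minimization is assumed."""
    # Uniform crossover: each dimension comes from s_exp or c_exp with p = 0.5.
    o_exp = [s if rng.random() < 0.5 else cv for s, cv in zip(s_exp, c_exp)]
    objv_oexp = evaluate(o_exp)
    r = rng.random()
    sign = 1.0 if objv_oexp < objv_pi else -1.0      # attract or repel
    p_temp = [p + sign * c * r * (o - p) for p, o in zip(p_i, o_exp)]
    objv_temp = evaluate(p_temp)
    if objv_temp < objv_pi:                          # accept only improvements
        return p_temp, objv_temp
    return p_i, objv_pi
```
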

Some remarks about the proposed learning framework of PSO-ATVTC

By inspecting Equations (5.3) and (5.4), it can be observed that the new velocity update mechanism introduced in Section 5.2.4(b) is employed to update the current velocity and position values of particle i, whereas the NS operator proposed in Section 5.2.4(c) focuses on evolving the self-cognitive experience of particle i. Based on these observations, it can be deduced that the proposed learning framework of PSO-ATVTC is in fact inherited from the two-layer evolution framework of ATLPSO-ELS (see Section 4.2.2). Specifically, Equations (5.3) and (5.4) of PSO-ATVTC can be considered as the current swarm evolution and the memory swarm evolution of the algorithm, respectively.


5.2.5 Complete Framework of PSO-ATVTC

The complete framework of the proposed PSO-ATVTC is implemented by integrating the previously explained ATVTC module and the proposed learning framework, as illustrated in Figure 5.11. From this figure, it is notable that the search status of each particle is constantly

PSO-ATVTC
Input: Population size (S), dimensionality of problem space (D), objective function (F), the initialization domain (RG), problem's accuracy level (ε), maximum number of fitness evaluations (FEmax)
1: Generate initial swarm and set up parameters for each particle;
2: Reset fes = 0;
3: for each particle i do
4:   Randomly generate a new TCi in the range of [TCmin, TCmax];
5:   Initialize neighbori by randomly selecting TCi members from the population as neighborhood members;
6:   Reset IMi = DMi = UMi = LMi = 0;
7: end for
8: while fes < FEmax do
9:   for each particle i do
10:    Perform the ATVTC_module(IMi, DMi, UMi, LMi, TCi, neighbori, fes, Pg, ObjV(Pg));
11:    /*Perform the proposed learning framework*/
12:    Update velocity Vi and position Xi of particle i using Equations (5.3) and (2.2), respectively;
13:    Perform fitness evaluation on the updated Xi;
14:    fes = fes + 1;
15:    previous_positioni = Pi, previous_ObjVi = ObjV(Pi);
16:    Update Pi, Pg, ObjV(Pi), and ObjV(Pg) if the updated Xi is fitter than Pi and Pg;
17:    if Pg is improved then
18:      IMi = IMi + 1, DMi = 0;
19:    else /*Pg is not improved*/
20:      DMi = DMi + 1, IMi = 0;
21:    end if
22:    if TCi = TCmax then
23:      UMi = UMi + 1, LMi = 0;
24:    else if TCi = TCmin then
25:      LMi = LMi + 1, UMi = 0;
26:    end if
27:    if ObjV(Pi) < previous_ObjVi and Pg ≠ Pi then
28:      Perform the EKE(Pi, Pg, ObjV(Pg), fes);
29:    else /*Particle i fails to improve its personal best fitness and needs to perform the NS operator*/
30:      Perform the NS_Operator(cexp,i, sexp,i, Pi, Pg, ObjV(Pg), fes);
31:    end if
32:  end for
33: end while
Output: The best found solution, i.e., the global best particle (Pg)
Figure 5.11: Complete framework of the PSO-ATVTC.


monitored via the variables IMi, DMi, UMi, and LMi during the search process. Based on the feedback from these search statuses, the exploration/exploitation strength of each PSO-ATVTC particle is adaptively regulated by systematically varying the particle's topology connectivity via the appropriate TCM strategy. Although the idea of varying the algorithm's exploration/exploitation strengths through a dynamic neighborhood topology structure has been introduced earlier, most of the existing PSO variants do not vary the particle's topology connectivity in an adaptive manner. For example, both the DMS-PSO (Liang and Suganthan, 2005) and the DNLPSO (Nasir et al., 2012) randomly change the neighborhood members of the particle at a predetermined time interval, without considering the particle's search history. On the other hand, the FPSO and the PSO variant proposed by Montes de Oca et al. (2009) and Suganthan (1999) linearly decrease and increase the particle's topology connectivity with time, respectively. The lack of mechanisms in these PSO variants to adaptively vary the particle's topology connectivity tends to restrict their search performance. As mentioned earlier, each PSO-ATVTC particle has a different neighborhood structure because its topology connectivity is adaptively varied by the ATVTC module. According to Li et al. (2012), particles with different neighborhood structures can be interpreted as particles that perform the search via learning strategies with different exploration/exploitation strengths. Unlike the existing PSO variants equipped with multiple learning strategy frameworks [e.g., SALPSO (Wang et al., 2011) and SLPSO (Li et al., 2012)], the proposed PSO-ATVTC is more flexible in adjusting the particle's exploration/exploitation strengths.
This is justified by the fact that the SALPSO and SLPSO consist of a predetermined number (i.e., four) of learning strategies, which implies that only limited choices of exploration/exploitation strengths are available to these PSO variants. In contrast, the PSO-ATVTC particle possesses more types of communication structures because its topology connectivity is allowed to vary from TCmin = 1 to TCmax = S - 1. In other words, the proposed PSO-ATVTC offers its particles more choices of exploration/exploitation strengths during the optimization process.
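The range of connectivity values, TCmin = 1 to TCmax = S - 1, can be made concrete with a short sketch. The helper below is a hypothetical Python illustration rather than the thesis's actual implementation: it draws a neighborhood of a given connectivity for one particle, mirroring how Figure 5.11 initializes neighbori by randomly selecting TCi members from the population.

```python
import random

def build_neighborhood(i, tc, swarm_size, rng=random):
    """Return a neighborhood of `tc` distinct members for particle `i`.

    Illustrative only: the topology connectivity tc may range from
    TCmin = 1 (minimal, exploration-oriented) up to TCmax = S - 1
    (fully connected, exploitation-oriented).
    """
    if not 1 <= tc <= swarm_size - 1:
        raise ValueError("tc must lie in [1, S - 1]")
    # Every other particle is a candidate neighbor; sample tc of them.
    candidates = [j for j in range(swarm_size) if j != i]
    return rng.sample(candidates, tc)
```

At tc = S - 1 the sampled neighborhood degenerates to the fully connected topology, which is how a single mechanism can span the whole exploration/exploitation spectrum.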

5.3 Simulation Results and Discussions

This section evaluates the optimization capability of the proposed PSO-ATVTC on the benchmark problems and the real-world problems introduced in Section 2.4.1. The experimental setups of the performance evaluations are first provided. This is followed by a parameter sensitivity analysis, which investigates the effects of the parameter Z on the search performance of PSO-ATVTC. Subsequently, comparative studies between the proposed PSO-ATVTC and the peer algorithms in solving the tested benchmark problems and the real-world problems are performed to investigate the effectiveness of the proposed work.

5.3.1 Experimental Setup

In this research work, a total of six well-established PSO variants are compared extensively with the proposed PSO-ATVTC in solving the 30 benchmark problems (see Table 2.6) in 50 dimensions (50-D). SLPSO is chosen for the comparison because this PSO variant is capable of offering different exploration/exploitation strengths to its particles via its multiple learning frameworks. Meanwhile, the FLPSO-QIW and OLPSO are chosen for the comparison considering that their learning strategies share specific similarities with the proposed learning framework of PSO-ATVTC, i.e., these strategies employ non-global best solutions to guide the swarm. PSO-ATVTC is compared with the APSO because the latter is a representative PSO variant developed with the parameter adaptation approach. Considering that the PSO-ATVTC adjusts the particle's exploration/exploitation strengths by adaptively varying the particle's neighborhood structure, two PSO variants with modified topology structures, i.e., FlexiPSO and FPSO, are employed in this comparative study to investigate the effectiveness of the proposed algorithm. The parameter settings of all tested algorithms are extracted from their respective literature and summarized in Table 5.1. It is noteworthy that the search performances of the APSO, FLPSO-QIW, and OLPSO-L in solving the problems with (1) a higher dimensional search space (i.e., 50-D) and (2) different fitness landscapes (i.e., shifted, complex, and composition problems) remain competitive under the parameter settings recommended

Table 5.1 Parameter settings of the involved PSO variants

Algorithm | Population topology | Parameter settings
APSO (Zhan et al., 2009) | Fully-connected | ω: 0.9 → 0.4; c1 + c2: [3.0, 4.0]; σ: 1.0 → 0.1; δ ∈ [0.05, 0.1]
FLPSO-QIW (Tang et al., 2011) | Comprehensive learning | ω1 = 0.9, ω2 = 0.2; č1 = č2 = 1.5, ĉ1 = 2.0, ĉ2 = 1.0; m = 1; Pi ∈ [0.1, 1]; K1 = 0.1, K2 = 0.001; σ1 = 1, σ2 = 0
FlexiPSO (Kathrada, 2009) | Fully-connected and local ring | ω: 0.5 → 0.0; c1, c2, c3: [0.0, 2.0]; ε = 0.1; α = 0.01%
FPSO (Montes de Oca et al., 2009) | Time-varying | ω: 0.9 → 0.4; Σci = 4.1
OLPSO-L (Zhan et al., 2011) | Orthogonal learning | ω: 0.9 → 0.4; c = 2.0; G = 5
SLPSO (Li et al., 2012) | Adaptive | ω: 0.5 → 0.0; c = 1.496; [0, 1]; R = 4
PSO-ATVTC | Adaptive | ω: 0.9 → 0.4; c1 = c2 = c = 2.0; call = 4.1; Z = 5; TCmin = 1; TCmax = S - 1

by their respective authors, as proven by the experimental results tabulated in Tables A1, A2, and A4 to A6. For the proposed PSO-ATVTC, the values of ω, c, and call are set according to the recommendations of the previous studies by Mendes et al. (2004) and Liang et al. (2006). A parameter sensitivity analysis is also conducted in the following subsection to investigate the effects of the parameter Z on the search performance of PSO-ATVTC. To reduce random discrepancy, all tested algorithms were independently run 30 times in the following performance evaluations. The maximum number of fitness evaluations (FEmax) and the population size (S) of all tested algorithms are set to 3.00E+05 and 30, respectively. Please refer to Section 3.3.1 for the justification of these settings.
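The entry ω: 0.9 → 0.4 in Table 5.1 denotes an inertia weight that decreases linearly as the evaluation budget is consumed. A minimal sketch of such a schedule (assuming a linear decrease over FEmax, which is the common convention; the function name is illustrative):

```python
def inertia_weight(fes, fe_max, w_start=0.9, w_end=0.4):
    """Linearly decrease the inertia weight from w_start down to w_end
    as the number of consumed fitness evaluations (fes) approaches fe_max."""
    frac = min(fes / fe_max, 1.0)  # fraction of the evaluation budget used
    return w_start - (w_start - w_end) * frac
```

A large early ω preserves velocity and favours exploration; the small final ω damps the particles and favours exploitation.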

5.3.2 Parameter Sensitivity Analysis

As explained earlier, the proposed ATVTC module changes the topology connectivity of particle i when the variables IMi, DMi, UMi, and LMi exceed the predefined threshold Z. In other words, the parameter Z of the proposed PSO-ATVTC determines how


frequently the ATVTC module is triggered to vary the topology connectivity of a particle via the selected TCM strategy (i.e., Increase, Decrease, or Shuffle). The dependence of the ATVTC module on Z implies that different values of Z may affect the search performance of PSO-ATVTC. In this subsection, a parameter sensitivity analysis is conducted on ten selected benchmark problems with different characteristics, i.e., the functions F1, F6, F12, F14, F16, F21, F23, F26, F27, and F29. This experiment aims (1) to investigate the influence of the parameter Z on the search performance of PSO-ATVTC and (2) to determine the optimal value of Z that ensures competitive search performance of PSO-ATVTC in solving various types of benchmark problems. Specifically, the ten mentioned benchmark problems are solved by PSO-ATVTC using integer values of Z from 1 to 10. Each Z value is run 30 times, and the simulations are performed in three different dimensions (i.e., 10-D, 30-D, and 50-D) to investigate whether the optimal value of Z is sensitive to changes in the dimensionality of the search space. The search accuracies of PSO-ATVTC (represented by the mean error value Emean) with different values of Z in 10-D, 30-D, and 50-D are tabulated in Tables 5.2, 5.3, and 5.4, respectively. The best result for each tested problem is indicated in boldface text. It is notable that the simulation results of the functions F1, F6, F12, and F14 are omitted from Tables 5.2 to 5.4. This is because the proposed PSO-ATVTC successfully locates the global optima of these tested functions (i.e., Emean = 0) regardless of the value of Z. This implies that the search accuracies of PSO-ATVTC in solving the majority of the conventional and rotated functions are insensitive to the parameter Z. On the other hand, Tables 5.2 to 5.4 report that PSO-ATVTC produces Emean values that change along with Z in solving the functions F16, F21, F23, F26, F27, and F29.
This observation further implies that the search accuracies of the proposed algorithm in solving most of the shifted, complex, and composition functions depend on the value of Z. From Tables 5.2 to 5.4, it is notable that the search accuracy of PSO-ATVTC tends to be compromised when the parameter Z is set too high (i.e., Z = 8, 9, or 10) or too low (i.e., Z = 1 or 2).

Table 5.2 Effects of the parameter Z on PSO-ATVTC in 10-D

Value of Z | F16 | F21 | F23 | F26 | F27 | F29
1  | 2.54E-10 | 4.54E-14 | 1.24E+00 | 3.21E-01 | 2.45E+02 | 9.75E+02
2  | 3.79E-11 | 3.97E-14 | 5.33E-01 | 3.61E-01 | 1.82E+02 | 9.49E+02
3  | 2.24E-10 | 3.41E-14 | 3.05E-01 | 2.59E-01 | 1.01E+02 | 9.00E+02
4  | 3.34E-10 | 4.54E-14 | 9.37E-01 | 3.54E-01 | 1.66E+02 | 1.07E+03
5  | 2.14E-09 | 4.55E-14 | 9.18E-01 | 4.06E-01 | 1.13E+02 | 9.16E+02
6  | 4.07E-09 | 3.42E-14 | 4.73E-01 | 3.17E-01 | 1.82E+02 | 9.27E+02
7  | 4.35E-09 | 4.54E-14 | 3.61E-01 | 3.34E-01 | 2.13E+02 | 9.05E+02
8  | 1.19E-07 | 3.97E-14 | 5.23E-01 | 4.14E-01 | 1.98E+02 | 9.05E+02
9  | 1.09E-08 | 4.55E-14 | 5.02E-01 | 5.38E-01 | 2.07E+02 | 9.00E+02
10 | 8.69E-08 | 5.68E-14 | 4.72E-01 | 5.89E-01 | 2.77E+02 | 9.00E+02

Table 5.3 Effects of the parameter Z on PSO-ATVTC in 30-D

Value of Z | F16 | F21 | F23 | F26 | F27 | F29
1  | 1.51E-01 | 1.87E-13 | 2.21E-02 | 1.12E+00 | 3.91E+02 | 9.00E+02
2  | 1.39E-01 | 1.82E-13 | 1.87E-02 | 1.28E+00 | 3.06E+02 | 9.01E+02
3  | 1.13E-01 | 2.04E-13 | 7.40E-03 | 9.88E-01 | 2.85E+02 | 9.00E+02
4  | 1.23E-01 | 2.61E-13 | 1.53E-02 | 1.15E+00 | 2.97E+02 | 9.02E+02
5  | 1.41E-01 | 2.67E-13 | 1.38E-02 | 1.42E+00 | 2.62E+02 | 9.00E+02
6  | 1.53E-01 | 3.01E-13 | 1.67E-02 | 1.32E+00 | 2.90E+02 | 9.00E+02
7  | 2.57E-01 | 2.89E-13 | 1.92E-02 | 1.27E+00 | 3.09E+02 | 9.02E+02
8  | 3.24E-01 | 3.18E-13 | 2.46E-02 | 1.45E+00 | 3.17E+02 | 9.00E+02
9  | 3.27E-01 | 3.06E-13 | 2.90E-02 | 1.47E+00 | 3.53E+02 | 9.00E+02
10 | 4.63E-01 | 2.89E-13 | 1.77E-02 | 1.49E+00 | 3.59E+02 | 9.02E+02

Table 5.4 Effects of the parameter Z on PSO-ATVTC in 50-D

Value of Z | F16 | F21 | F23 | F26 | F27 | F29
1  | 1.04E-01 | 2.95E-13 | 1.72E-02 | 1.29E+00 | 3.44E+02 | 9.00E+02
2  | 7.37E-02 | 2.95E-13 | 1.08E-02 | 1.03E+00 | 2.68E+02 | 9.00E+02
3  | 5.92E-02 | 2.78E-13 | 8.32E-11 | 9.92E-01 | 2.51E+02 | 9.00E+02
4  | 5.95E-02 | 3.35E-13 | 1.08E-02 | 1.31E+00 | 2.96E+02 | 9.00E+02
5  | 9.01E-02 | 3.44E-13 | 8.04E-03 | 1.71E+00 | 2.74E+02 | 9.00E+02
6  | 1.49E-01 | 3.52E-13 | 1.87E-02 | 1.80E+00 | 3.07E+02 | 9.00E+02
7  | 9.17E-02 | 3.69E-13 | 7.40E-03 | 1.91E+00 | 3.30E+02 | 9.00E+02
8  | 1.66E-01 | 3.80E-13 | 3.90E-03 | 2.13E+00 | 2.95E+02 | 9.00E+02
9  | 1.61E-01 | 4.14E-13 | 1.03E-02 | 2.41E+00 | 3.45E+02 | 9.00E+02
10 | 2.36E-01 | 4.03E-13 | 4.40E-03 | 2.46E+00 | 3.44E+02 | 9.00E+02

Specifically, when Z is set too high, the TCM strategies in the ATVTC module are infrequently triggered, which lessens the sensitivity of the ATVTC module to the search status of a particle and subsequently prevents it from applying the appropriate adjustments to the exploration/exploitation strengths of the particle during the search process. Such undesirable parameter settings tend to entrap the PSO-ATVTC swarm in the inferior regions of the search space for a long time, and this eventually leads to poor optimization outcomes. When the value of the parameter Z is set too low, the TCM strategies in the ATVTC module tend to be overemphasized. In this scenario, the exploration/exploitation strengths of the particles are

frequently forced to change, given the oversensitivity of the ATVTC module to slight changes in the search status of a particle. The search accuracy of PSO-ATVTC is subsequently exacerbated, given that low Z values tend to disturb the convergence of the algorithm toward the promising regions of the search space. Finally, the simulation results obtained from the parameter sensitivity analysis (Tables 5.2 to 5.4) reveal that PSO-ATVTC solves most of the tested benchmarks with the best accuracy (i.e., the lowest Emean) when the parameter Z is set to 3 in 10-D, 30-D, and 50-D. Based on these experimental findings, it can be concluded that the optimal setting of the parameter Z is robust toward changes in (1) the dimensionality of the search space and (2) the characteristics of the fitness landscape. The outcomes of the parameter sensitivity analysis suggest that the parameter Z of PSO-ATVTC can be set to 3 in the following performance evaluations.
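The relationship between Z and the trigger frequency can be illustrated with a toy monitor. The sketch below tracks only one of the four counters (a consecutive-failure counter analogous to DMi) and fires a topology change once the counter exceeds Z; the class name and the exact trigger rule are illustrative simplifications of the ATVTC module, not its actual implementation:

```python
class StagnationMonitor:
    """Toy model of threshold-triggered topology change.

    A small Z makes the trigger fire often (oversensitive, disturbs
    convergence); a large Z makes it fire rarely (sluggish adjustment).
    """

    def __init__(self, z):
        self.z = z
        self.dm = 0          # consecutive failures to improve the global best
        self.triggers = 0    # how many times a TCM strategy has fired

    def observe(self, improved):
        if improved:
            self.dm = 0      # any improvement resets the failure counter
        else:
            self.dm += 1
        if self.dm > self.z:  # assumed trigger condition: counter exceeds Z
            self.triggers += 1
            self.dm = 0       # reset after the strategy is applied
```

Feeding the same stagnating run to monitors with different Z values shows directly that smaller Z values produce more frequent topology changes.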

5.3.3 Comparison of PSO-ATVTC with Other Well-Established PSO Variants

The experimental results obtained by all involved PSO variants are reported in this subsection. The mean error (Emean), standard deviation (SD), and Wilcoxon test results produced by the seven tested PSO variants are listed in Table 5.5. Table 5.9 summarizes the search reliability and search efficiency of all PSO algorithms in terms of the success rate (SR) and success performance (SP) values, respectively. The best results among the algorithms are bolded in the tables. Notably, the SR and SP values of the functions F10, F16, and F24 to F30 are omitted from Table 5.9 because none of the tested algorithms is able to solve these functions within the predefined accuracy level ε in at least one run. Please refer to Section 3.3.3 for the explanations of w/t/l, #BME, +/=/-, #S/#PS/#NS, and #BSP, as presented in Tables 4.5 and 4.9.
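The Emean and SD statistics reported below are computed over the 30 independent runs of each algorithm. A straightforward sketch, where the error of one run is the absolute gap between the fitness of the best found solution and that of the known global optimum:

```python
from statistics import mean, stdev

def error_stats(best_fitnesses, optimum_fitness):
    """Mean error (Emean) and standard deviation (SD) over independent runs.

    best_fitnesses: final objective value f(Pg) of each run.
    optimum_fitness: objective value of the known global optimum.
    """
    errors = [abs(f - optimum_fitness) for f in best_fitnesses]
    return mean(errors), stdev(errors)
```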

5.3.3(a) Comparison of the Mean Error Results

Table 5.5 shows that the PSO-ATVTC outperforms its peer algorithms by a large margin in the majority of the tested functions, which implies that the proposed algorithm has the most impressive search accuracy among the seven tested algorithms. Specifically, PSO-ATVTC

Table 5.5 The Emean, SD, and Wilcoxon test results of PSO-ATVTC and six compared PSO variants for the 50-D benchmark problems (F1 to F20). Each entry lists Emean (SD) h, where h is the Wilcoxon test outcome against PSO-ATVTC (+: PSO-ATVTC significantly better; =: comparable; −: significantly worse).

APSO: F1 2.50E-01 (1.81E-01) +; F2 1.46E+03 (4.82E+02) +; F3 4.62E+01 (1.53E+00) +; F4 5.80E-01 (6.29E-01) +; F5 3.60E-02 (3.22E-02) +; F6 1.70E-01 (8.21E-02) +; F7 6.60E-02 (2.57E-02) +; F8 5.44E-01 (1.88E-01) +; F9 1.26E+03 (3.22E+02) +; F10 5.15E+01 (1.39E+01) +; F11 1.83E+02 (5.61E+01) +; F12 2.59E+02 (6.15E+01) +; F13 2.10E+02 (1.01E+02) +; F14 6.32E+01 (4.24E+00) +; F15 2.27E-01 (9.70E-02) +; F16 1.08E+03 (5.18E+02) +; F17 1.97E+03 (3.83E+03) +; F18 5.92E-01 (7.76E-01) +; F19 7.20E-03 (1.06E-02) +; F20 0.00E+00 (0.00E+00) =

FLPSO-QIW: F1 2.90E-81 (5.97E-81) +; F2 2.62E+02 (8.90E+01) +; F3 4.22E+01 (2.39E-01) +; F4 2.60E+00 (1.52E+00) +; F5 5.58E+00 (2.36E+00) +; F6 5.75E-04 (2.21E-03) +; F7 3.43E-14 (1.07E-14) +; F8 1.88E-05 (8.29E-05) +; F9 2.62E+02 (7.62E+01) +; F10 4.55E+01 (3.16E+00) +; F11 1.26E+02 (1.76E+01) +; F12 1.28E+02 (2.13E+01) +; F13 1.52E+00 (5.39E-01) +; F14 4.86E+01 (3.40E+00) +; F15 1.44E-13 (4.15E-14) =; F16 6.97E+02 (1.66E+02) +; F17 1.05E+02 (4.86E+01) +; F18 5.88E+00 (2.51E+00) +; F19 1.20E+01 (3.16E+00) +; F20 2.05E-03 (3.49E-03) +

FlexiPSO: F1 1.78E-04 (5.23E-05) +; F2 1.42E+00 (6.67E-01) +; F3 4.48E+01 (1.04E+00) +; F4 2.12E-04 (6.24E-05) +; F5 2.07E-04 (7.51E-05) +; F6 8.34E-03 (9.48E-03) +; F7 3.55E-03 (5.36E-04) +; F8 1.12E-01 (1.16E-02) +; F9 4.92E+00 (3.67E+00) +; F10 4.59E+01 (3.60E+00) +; F11 1.49E+02 (3.42E+01) +; F12 2.16E+02 (8.26E+01) +; F13 2.67E+02 (9.17E+01) +; F14 6.60E+01 (4.59E+00) +; F15 3.65E-04 (6.12E-04) +; F16 3.48E+02 (6.90E+02) +; F17 1.85E+02 (4.26E+02) +; F18 2.06E-04 (6.95E-05) +; F19 2.06E-04 (6.44E-05) +; F20 2.76E+02 (3.86E+02) +

FPSO: F1 7.02E+01 (6.98E+01) +; F2 3.44E+03 (1.33E+03) +; F3 5.68E+01 (7.08E+00) +; F4 1.85E+01 (1.02E+01) +; F5 1.60E+01 (9.56E+00) +; F6 1.86E+00 (9.28E-01) +; F7 1.80E+00 (1.10E+00) +; F8 3.35E+00 (2.35E+00) +; F9 3.23E+03 (1.79E+03) +; F10 5.62E+01 (7.00E+00) +; F11 1.80E+02 (5.01E+01) +; F12 1.53E+02 (3.74E+01) +; F13 7.28E+00 (5.62E+00) +; F14 5.18E+01 (3.93E+00) +; F15 1.71E+04 (1.47E+04) +; F16 2.50E+04 (6.28E+03) +; F17 1.11E+09 (2.65E+09) +; F18 2.08E+02 (4.59E+01) +; F19 1.63E+02 (2.82E+01) +; F20 1.46E+03 (4.63E+02) +

OLPSO-L: F1 4.86E-33 (5.15E-33) +; F2 5.71E+02 (1.85E+02) +; F3 4.30E+01 (3.18E+00) +; F4 3.32E-01 (6.03E-01) +; F5 1.17E+00 (1.15E+00) +; F6 0.00E+00 (0.00E+00) =; F7 5.09E-15 (1.79E-15) +; F8 0.00E+00 (0.00E+00) =; F9 1.92E+03 (4.17E+02) +; F10 4.24E+01 (3.73E+00) +; F11 9.80E+01 (5.16E+01) +; F12 1.78E+02 (4.94E+01) +; F13 7.58E-01 (2.68E-01) +; F14 4.58E+01 (4.77E+00) +; F15 5.68E-14 (0.00E+00) −; F16 9.30E+02 (3.21E+02) +; F17 1.33E+01 (1.94E+01) =; F18 1.43E+00 (1.10E+00) +; F19 3.00E+00 (1.78E+00) +; F20 1.47E-01 (1.07E-01) +

SLPSO: F1 3.22E-28 (8.50E-28) +; F2 2.79E+01 (8.74E+01) +; F3 6.11E-26 (2.33E-25) −; F4 0.00E+00 (0.00E+00) =; F5 0.00E+00 (0.00E+00) =; F6 0.00E+00 (0.00E+00) =; F7 3.43E-15 (5.07E-15) +; F8 2.41E-08 (1.32E-07) =; F9 6.67E-05 (3.57E-04) +; F10 3.65E+01 (2.29E+00) +; F11 1.99E-01 (1.09E+00) =; F12 0.00E+00 (0.00E+00) =; F13 2.22E-17 (8.45E-17) =; F14 2.04E+00 (1.12E+01) +; F15 1.36E-04 (8.39E-05) +; F16 2.07E+03 (9.87E+02) +; F17 1.54E+02 (5.97E+01) +; F18 3.57E-04 (2.04E-04) +; F19 5.29E-04 (3.03E-04) +; F20 0.00E+00 (0.00E+00) =

PSO-ATVTC: F1 0.00E+00 (0.00E+00); F2 0.00E+00 (0.00E+00); F3 2.13E+01 (9.10E-01); F4 0.00E+00 (0.00E+00); F5 0.00E+00 (0.00E+00); F6 0.00E+00 (0.00E+00); F7 0.00E+00 (0.00E+00); F8 0.00E+00 (0.00E+00); F9 0.00E+00 (0.00E+00); F10 2.34E+01 (1.98E+00); F11 0.00E+00 (0.00E+00); F12 0.00E+00 (0.00E+00); F13 0.00E+00 (0.00E+00); F14 0.00E+00 (0.00E+00); F15 1.48E-13 (3.11E-14); F16 5.92E-02 (1.33E-02); F17 6.68E-01 (6.13E-01); F18 1.48E-13 (3.11E-14); F19 1.82E-13 (2.54E-14); F20 0.00E+00 (0.00E+00)

Table 5.5 (Continued) Results for F21 to F30 and summary statistics. Each entry lists Emean (SD) h.

APSO: F21 6.04E-02 (1.71E-02) +; F22 8.79E-01 (5.86E-01) +; F23 1.49E+00 (1.47E-01) +; F24 2.07E+01 (1.65E-01) −; F25 1.32E+07 (4.09E+06) +; F26 4.13E+00 (1.19E+00) +; F27 4.60E+02 (7.79E+01) +; F28 5.13E+02 (8.95E+01) +; F29 1.10E+03 (9.90E+01) +; F30 1.08E+03 (9.29E+01) +; #BME = 1; w/t/l = 27/1/1; +/=/− = 27/1/1

FLPSO-QIW: F21 2.19E-13 (5.74E-14) −; F22 3.87E-02 (7.38E-02) =; F23 6.02E-03 (1.04E-02) +; F24 2.11E+01 (4.16E-02) +; F25 1.89E+07 (4.92E+06) +; F26 4.01E+00 (1.46E+00) +; F27 1.78E+02 (1.39E+02) −; F28 1.79E+02 (8.40E+01) =; F29 9.22E+02 (3.40E+01) +; F30 9.34E+02 (1.05E+01) +; #BME = 0; w/t/l = 27/0/3; +/=/− = 25/3/2

FlexiPSO: F21 3.84E-03 (5.06E-04) +; F22 1.06E+01 (3.88E+00) +; F23 1.52E-01 (7.18E-02) +; F24 2.05E+01 (1.02E-01) −; F25 1.32E+07 (8.09E+06) +; F26 2.25E+00 (7.43E-01) +; F27 5.00E+02 (1.15E+02) +; F28 5.78E+02 (1.01E+02) +; F29 1.18E+03 (1.03E+02) +; F30 1.16E+03 (1.08E+02) +; #BME = 1; w/t/l = 29/0/1; +/=/− = 29/0/1

FPSO: F21 9.29E+00 (3.25E+00) +; F22 2.96E+01 (5.70E+00) +; F23 2.06E+02 (1.44E+02) +; F24 2.11E+01 (4.92E-02) +; F25 1.03E+08 (8.26E+07) +; F26 4.35E+01 (8.59E+00) +; F27 4.38E+02 (1.28E+02) +; F28 4.47E+02 (1.15E+02) +; F29 9.77E+02 (3.38E+01) +; F30 9.70E+02 (4.97E+01) +; #BME = 0; w/t/l = 30/0/0; +/=/− = 30/0/0

OLPSO-L: F21 8.05E-14 (1.51E-14) −; F22 4.85E-02 (1.48E-01) =; F23 3.76E-02 (4.00E-02) +; F24 2.12E+01 (5.06E-02) +; F25 1.82E+07 (5.14E+06) +; F26 2.98E+00 (8.26E-01) +; F27 1.38E+02 (7.31E+01) −; F28 2.19E+02 (9.83E+01) +; F29 9.48E+02 (8.46E+00) +; F30 9.54E+02 (8.84E+00) +; #BME = 5; w/t/l = 25/2/3; +/=/− = 23/4/3

SLPSO: F21 2.42E-03 (4.49E-04) +; F22 1.18E-01 (2.16E-02) +; F23 1.17E-01 (3.73E-02) +; F24 2.09E+01 (1.04E-01) −; F25 3.61E+06 (8.97E+05) +; F26 1.44E+00 (3.73E-01) =; F27 3.93E+02 (1.14E+02) +; F28 4.81E+02 (1.35E+02) +; F29 9.00E+02 (4.84E-01) +; F30 9.09E+02 (2.45E+01) +; #BME = 7; w/t/l = 22/6/2; +/=/− = 19/9/2

PSO-ATVTC: F21 2.79E-13 (4.22E-14); F22 1.42E-14 (2.01E-14); F23 8.32E-11 (8.94E-11); F24 2.10E+01 (1.28E-01); F25 7.38E+05 (1.73E+05); F26 9.92E-01 (5.17E-01); F27 2.51E+02 (1.19E+02); F28 1.40E+02 (1.90E+01); F29 9.00E+02 (0.00E+00); F30 9.00E+02 (0.00E+00); #BME = 25

has achieved 25 best Emean values out of the 30 employed benchmarks, which is 3.57 times better than what SLPSO has achieved in terms of #BME. PSO-ATVTC successfully identifies the global optima of all conventional (F1 to F8) and rotated (F9 to F14) problems, except for the functions F3 and F10. Specifically, PSO-ATVTC is the only algorithm that manages to solve the functions F1, F2, F7, and F8 with Emean = 0.00E+00. The search accuracies exhibited by the FLPSO-QIW, OLPSO-L, and SLPSO in solving the conventional problems are also promising because these PSO variants manage to locate at least three global optima or near-global optima of the tested problems. For example, SLPSO solves the functions F1 and F3 to F8 with competitive Emean values. Meanwhile, OLPSO-L successfully finds the global optima or near-global optima of the functions F1, F6, F7, and F8.


Despite exhibiting relatively promising search accuracies in the conventional problems, Table 5.5 reports that the majority of the tested PSO variants experience different levels of performance degradation in dealing with the rotated problems. For instance, the SLPSO fails to locate the global optima of the rotated Rastrigin function (F11) and the rotated Weierstrass function (F14), although it successfully solves the unmodified ones (F4 and F8) with promising Emean values. A similar finding can also be observed for the OLPSO-L in solving the conventional and rotated Griewank functions (i.e., F6 and F13). Compared with the six other PSO variants, the proposed PSO-ATVTC exhibits excellent robustness toward the rotation operation, considering that it successfully finds the global optima of all rotated problems, except for the function F10. Specifically, PSO-ATVTC is the only algorithm that successfully solves the rotated functions F9, F11, F13, and F14 with Emean = 0.00E+00. It is also notable that although the SLPSO solves the conventional Rosenbrock function (F3) with much better accuracy than the proposed PSO-ATVTC, the latter outperforms the former in solving the rotated Rosenbrock function (F10). This observation further verifies the excellent capability of the PSO-ATVTC in dealing with problems with rotated search spaces. Table 5.5 also reports that the search accuracies of all involved algorithms deteriorate when solving the shifted problems (F15 to F22). This is because most tested algorithms are unable to find the global optima of the majority of the shifted problems, except for the Shifted Griewank function (F20), which is successfully solved by APSO, SLPSO, and PSO-ATVTC with Emean = 0.00E+00. It is noteworthy that, unlike its compared peers, the PSO-ATVTC demonstrates excellent robustness toward problems with shifted fitness landscapes because the proposed algorithm achieves six best Emean values in the eight shifted problems.
Notably, PSO-ATVTC is the only optimizer that solves the shifted problems F16, F17, F18, F19, and F22 with the accuracy levels of 10^-2, 10^-1, 10^-13, 10^-13, and 10^-14, respectively. Although the Emean values produced by the FLPSO-QIW and OLPSO-L in the functions F15 and F21 are slightly better than those of PSO-ATVTC, the outperformance margins of the former two over the latter in these two functions are relatively marginal.
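The rotated and shifted variants discussed above are obtained from a base function f by composing it with an orthogonal matrix, g(x) = f(Mx), or with an offset vector, g(x) = f(x − o), respectively. A minimal 2-D sketch (the actual benchmarks use higher-dimensional rotation matrices and shift vectors; these helper names are illustrative):

```python
import math

def rastrigin(x):
    """Separable multimodal base function with global optimum f(0) = 0."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def rotated(f, theta):
    """Rotate a 2-D search space: g(x) = f(Mx), which couples the variables."""
    c, s = math.cos(theta), math.sin(theta)
    return lambda x: f([c * x[0] - s * x[1], s * x[0] + c * x[1]])

def shifted(f, offset):
    """Move the optimum away from the origin: g(x) = f(x - o)."""
    return lambda x: f([xi - oi for xi, oi in zip(x, offset)])
```

Rotation makes the decision variables non-separable, which is why strategies that exploit separability degrade on the rotated problems, whereas shifting merely relocates the optimum away from the origin.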

Finally, Table 5.5 shows further performance degradations of all tested algorithms when solving the complex (F23 to F26) and composition (F27 to F30) problems. Specifically, all tested algorithms fail to find either the global optima or the near-global optima of the mentioned problems, except for the function F23, which is successfully solved by the proposed PSO-ATVTC with an accuracy level of 10^-11. The inclusion of the rotating and shifting operations (F23 to F25), the expanded mechanism (F26), or the composition operation (F27 to F30) in the conventional problems tremendously increases their complexities and complicates the search for their global optima. Among the seven tested algorithms, the proposed PSO-ATVTC is the least susceptible to the mentioned modifications because it exhibits the most superior search accuracy in solving the complex and composition problems. Specifically, PSO-ATVTC produces six best Emean values in the eight tested problems of these categories. Based on the experimental results reported in Table 5.5, it is observed that the proposed PSO-ATVTC generally exhibits more competitive search accuracy than its peer algorithms. Moreover, the values of #BME and w/t/l obtained by the PSO-ATVTC against its peers in each problem category are promising. These observations suggest that the search mechanisms offered by the proposed algorithm are more resilient toward the modifications made on the problems' fitness landscapes. Therefore, it can be concluded that the proposed PSO-ATVTC has a better capability to tackle optimization problems with different characteristics as compared with its peer algorithms.

5.3.3(b) Comparison of the Non-Parametric Statistical Test Results

In this subsection, a set of non-parametric statistical tests is conducted on the tested algorithms. The descriptions of the employed non-parametric statistical tests and their respective procedures are explained in Sections 2.5.5 and 3.3.3(b), respectively. First, the non-parametric pairwise comparison results between the PSO-ATVTC and its peers obtained from the Wilcoxon test are presented in Tables 5.5 and 5.6. Specifically, Table 5.5 shows that the Wilcoxon test results, which are represented by the h values, are

Table 5.6 Wilcoxon test for the comparison of PSO-ATVTC and six other PSO variants

PSO-ATVTC vs. | R+    | R−   | p-value
APSO          | 427.0 | 8.0  | 9.31E-08
FLPSO-QIW     | 436.0 | 29.0 | 3.24E-06
FlexiPSO      | 453.0 | 12.0 | 1.30E-07
FPSO          | 465.0 | 0.0  | 1.86E-09
OLPSO-L       | 427.5 | 37.5 | 1.14E-05
SLPSO         | 414.5 | 50.5 | 5.94E-05

Table 5.7 Average rankings and the associated p-value obtained by the PSO-ATVTC and six other PSO variants via the Friedman test

Algorithm | Average ranking
PSO-ATVTC | 1.55
SLPSO     | 2.68
OLPSO-L   | 3.63
FLPSO-QIW | 3.72
FlexiPSO  | 4.65
APSO      | 5.38
FPSO      | 6.38
Chi-square statistic: 102.65; p-value: 0.00E+00

largely consistent with the reported Emean values, given the similar results of w/t/l and +/=/−. Table 5.6 further verifies the significant improvement of PSO-ATVTC over the six compared PSO variants in the pairwise comparative studies because all p-values obtained from the Wilcoxon test are less than α = 0.05. Multiple comparisons (García et al., 2009; Derrac et al., 2011) emerge as another important class of non-parametric statistical analyses to rigorously evaluate the significance of the outperformance margins of PSO-ATVTC over its peers. Table 5.7 reports the average rankings of all tested algorithms and the associated p-value computed from the Friedman test. Accordingly, PSO-ATVTC has the smallest average rank of 1.55 and emerges as the best optimizer among the seven tested algorithms. Notably, the p-value reported in Table 5.7 (i.e., 0.00E+00) is smaller than α = 0.05, and therefore it is confirmed that a significant global difference exists among the seven compared PSO variants. Based on this observation, three post-hoc statistical analyses (García et al., 2009; Derrac et al., 2011), namely the Bonferroni-Dunn, Holm, and Hochberg tests, are subsequently carried out to further identify the concrete differences for the control algorithm (i.e., PSO-ATVTC). The associated z values, unadjusted p-values, and adjusted p-values (APVs) obtained from the three mentioned post-hoc procedures are reported in Table 5.8. Accordingly, all employed post-hoc tests successfully identify the significant performance improvement of PSO-ATVTC over the FPSO, APSO, FlexiPSO, FLPSO-QIW, and OLPSO-L because the APVs

Table 5.8 Adjusted p-values obtained by comparing the PSO-ATVTC with six other PSO variants using the Bonferroni-Dunn, Holm, and Hochberg procedures

PSO-ATVTC vs. | z        | Unadjusted p | Bonferroni-Dunn p | Holm p   | Hochberg p
FPSO          | 8.67E+00 | 0.00E+00     | 0.00E+00          | 0.00E+00 | 0.00E+00
APSO          | 6.87E+00 | 0.00E+00     | 0.00E+00          | 0.00E+00 | 0.00E+00
FlexiPSO      | 5.56E+00 | 0.00E+00     | 0.00E+00          | 0.00E+00 | 0.00E+00
FLPSO-QIW     | 3.88E+00 | 1.03E-04     | 6.15E-04          | 3.08E-04 | 3.08E-04
OLPSO-L       | 3.74E+00 | 1.88E-04     | 1.13E-03          | 3.75E-04 | 3.75E-04
SLPSO         | 2.03E+00 | 4.22E-02     | 2.53E-01          | 4.22E-02 | 4.22E-02

produced are smaller than α = 0.05. Table 5.8 also shows that both the Holm and Hochberg tests are more powerful post-hoc procedures than the Bonferroni-Dunn test because only the former two tests confirm the significant outperformance of PSO-ATVTC over SLPSO.
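The R+ and R− columns of Table 5.6 are the signed-rank sums of the Wilcoxon test. A small sketch of how they arise from paired per-problem results (zero differences are dropped and tied absolute differences receive average ranks; the sign convention here, where R+ collects the problems on which the second algorithm wins, is one common choice and is illustrative):

```python
def signed_rank_sums(errors_a, errors_b):
    """Wilcoxon signed-rank sums for paired results of two algorithms.

    R+ sums the ranks of problems where algorithm A has the larger error
    (i.e., B wins); R- sums the rest.
    """
    diffs = [a - b for a, b in zip(errors_a, errors_b) if a != b]
    ordered = sorted(abs(d) for d in diffs)

    def avg_rank(value):
        # Average rank of `value` among the sorted absolute differences.
        positions = [k + 1 for k, v in enumerate(ordered) if v == value]
        return sum(positions) / len(positions)

    r_plus = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    r_minus = sum(avg_rank(abs(d)) for d in diffs if d < 0)
    return r_plus, r_minus
```

For n retained problems, R+ and R− always sum to n(n + 1)/2, so a heavily lopsided pair such as 465.0 versus 0.0 in Table 5.6 indicates a uniform win across all problems.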

5.3.3(c) Comparison of the Success Rate Results

Table 5.9 reports the experimental results of the success rate (SR) analysis to compare the search reliabilities of the seven tested algorithms. Accordingly, the proposed PSO-ATVTC has the most superior search reliability, having completely solved 19 out of the 30 employed benchmarks with SR = 100%. Notably, the search reliability exhibited by the PSO-ATVTC is 1.35 times and 2.71 times better than those of the second-ranked SLPSO and the third-ranked FLPSO-QIW because the latter two completely solve only 14 and 7 tested benchmarks, respectively. Meanwhile, FPSO is identified as the optimizer with the worst search reliability because it is unable to completely solve any of the employed benchmarks with SR = 100%. As shown in Table 5.9, the proposed PSO-ATVTC is able to completely solve all the conventional problems with SR = 100% at the predefined accuracy level ε, except for the function F3. It is also important to mention that SLPSO and PSO-ATVTC are the only two algorithms that are able to completely solve the functions F1 and F4 to F8. The search reliabilities exhibited by both the FLPSO-QIW and OLPSO-L in the conventional problems are also promising because these two algorithms successfully solve the functions F1 and F6 to F8 with SR = 100%.
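The SR and SP measures used in Table 5.9 can be sketched as follows. SR is the percentage of independent runs that reach the accuracy level ε; for SP the sketch assumes the common CEC 2005 style definition (mean fitness evaluations of the successful runs scaled by the inverse success ratio; the thesis's exact definition is given in Section 3.3), returning infinity when no run succeeds, which matches the "Inf" entries in Table 5.9:

```python
def success_rate(final_errors, epsilon):
    """SR: percentage of independent runs whose final error is below epsilon."""
    successes = sum(1 for e in final_errors if e < epsilon)
    return 100.0 * successes / len(final_errors)

def success_performance(fes_of_successful_runs, total_runs):
    """SP: mean FEs of the successful runs divided by the success ratio.

    A smaller SP means the problem is solved with less computation cost;
    infinity indicates that no run reached the required accuracy.
    """
    n_success = len(fes_of_successful_runs)
    if n_success == 0:
        return float("inf")
    mean_fes = sum(fes_of_successful_runs) / n_success
    return mean_fes * total_runs / n_success
```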


Table 5.9 The SR and SP values of PSO-ATVTC and six compared PSO variants for the 50-D benchmark problems. Each entry lists SR (%) followed by SP in parentheses.

APSO: F1 0.00 (Inf); F2 0.00 (Inf); F3 0.00 (Inf); F4 0.00 (Inf); F5 6.67 (2.60E+06); F6 0.00 (Inf); F7 0.00 (Inf); F8 0.00 (Inf); F9 0.00 (Inf); F11 0.00 (Inf); F12 0.00 (Inf); F13 0.00 (Inf); F14 0.00 (Inf); F15 0.00 (Inf); F17 0.00 (Inf); F18 0.00 (Inf); F19 86.67 (2.51E+05); F20 100.00 (1.20E+04); F21 0.00 (Inf); F22 0.00 (Inf); F23 0.00 (Inf); #S/#PS/#NS = 1/2/27; #BSP = 0

FLPSO-QIW: F1 100.00 (6.04E+04); F2 0.00 (Inf); F3 0.00 (Inf); F4 6.67 (3.46E+06); F5 0.00 (Inf); F6 100.00 (5.00E+04); F7 100.00 (4.79E+04); F8 100.00 (6.67E+04); F9 0.00 (Inf); F11 0.00 (Inf); F12 0.00 (Inf); F13 0.00 (Inf); F14 0.00 (Inf); F15 100.00 (5.85E+04); F17 0.00 (Inf); F18 0.00 (Inf); F19 0.00 (Inf); F20 100.00 (4.88E+04); F21 100.00 (4.72E+04); F22 70.00 (1.03E+05); F23 83.33 (2.04E+05); #S/#PS/#NS = 7/3/20; #BSP = 0

FlexiPSO: F1 0.00 (Inf); F2 0.00 (Inf); F3 0.00 (Inf); F4 100.00 (9.72E+04); F5 100.00 (9.88E+04); F6 60.00 (1.57E+05); F7 100.00 (1.59E+05); F8 0.00 (Inf); F9 0.00 (Inf); F11 0.00 (Inf); F12 0.00 (Inf); F13 0.00 (Inf); F14 0.00 (Inf); F15 0.00 (Inf); F17 0.00 (Inf); F18 100.00 (1.04E+05); F19 100.00 (1.15E+05); F20 56.67 (2.45E+03); F21 100.00 (1.70E+05); F22 0.00 (Inf); F23 0.00 (Inf); #S/#PS/#NS = 6/2/22; #BSP = 1

FPSO: F1 13.33 (9.68E+04); F2 0.00 (Inf); F3 0.00 (Inf); F4 0.00 (Inf); F5 0.00 (Inf); F6 6.67 (1.60E+05); F7 3.33 (3.36E+05); F8 0.00 (Inf); F9 0.00 (Inf); F11 0.00 (Inf); F12 0.00 (Inf); F13 6.67 (2.59E+05); F14 0.00 (Inf); F15 0.00 (Inf); F17 0.00 (Inf); F18 0.00 (Inf); F19 0.00 (Inf); F20 0.00 (Inf); F21 0.00 (Inf); F22 0.00 (Inf); F23 0.00 (Inf); #S/#PS/#NS = 0/4/26; #BSP = 0

OLPSO-L: F1 100.00 (1.52E+05); F2 0.00 (Inf); F3 0.00 (Inf); F4 73.33 (2.97E+05); F5 40.00 (6.75E+05); F6 100.00 (1.24E+05); F7 100.00 (1.25E+05); F8 100.00 (1.66E+05); F9 0.00 (Inf); F11 0.00 (Inf); F12 0.00 (Inf); F13 0.00 (Inf); F14 0.00 (Inf); F15 100.00 (1.37E+05); F17 0.00 (Inf); F18 13.33 (1.27E+06); F19 6.67 (3.45E+06); F20 10.00 (1.33E+06); F21 100.00 (1.15E+05); F22 90.00 (1.75E+05); F23 13.33 (2.20E+06); #S/#PS/#NS = 6/7/17; #BSP = 0

SLPSO: F1 100.00 (1.44E+05); F2 0.00 (Inf); F3 100.00 (8.91E+04); F4 100.00 (8.27E+04); F5 100.00 (7.67E+04); F6 100.00 (8.79E+04); F7 100.00 (7.40E+04); F8 100.00 (1.72E+05); F9 100.00 (1.83E+05); F11 96.67 (1.15E+05); F12 100.00 (1.19E+05); F13 100.00 (1.38E+05); F14 96.67 (2.01E+05); F15 0.00 (Inf); F17 0.00 (Inf); F18 100.00 (1.44E+05); F19 100.00 (1.58E+05); F20 100.00 (7.96E+03); F21 100.00 (1.92E+05); F22 0.00 (Inf); F23 0.00 (Inf); #S/#PS/#NS = 14/2/14; #BSP = 1

PSO-ATVTC: F1 100.00 (7.57E+03); F2 100.00 (2.23E+04); F3 0.00 (Inf); F4 100.00 (5.84E+03); F5 100.00 (5.60E+03); F6 100.00 (1.17E+04); F7 100.00 (6.97E+03); F8 100.00 (6.87E+03); F9 100.00 (2.15E+04); F11 100.00 (2.23E+04); F12 100.00 (2.16E+04); F13 100.00 (1.71E+04); F14 100.00 (1.46E+04); F15 100.00 (4.59E+04); F17 10.00 (2.80E+06); F18 100.00 (4.29E+04); F19 100.00 (1.22E+05); F20 100.00 (2.06E+03); F21 100.00 (1.93E+04); F22 100.00 (5.12E+04); F23 100.00 (6.54E+04); #S/#PS/#NS = 19/1/10; #BSP = 19

The search reliabilities of most tested algorithms are degraded when they are used to solve the rotated problems with the non-separable characteristic. Table 5.9 reports that the SR values produced by most tested algorithms (i.e., APSO, FLPSO-QIW, FlexiPSO, FPSO, and OLPSO-L) are equal to 0.00%, implying that these compared PSO variants never solve any of the rotated problems. Both the FLPSO-QIW and OLPSO-L exhibit the most drastic


performance degradations (in terms of search reliability) because these two PSO variants never solve the rotated Griewank function (F13) and the rotated Weierstrass function (F14), albeit they have successfully solved the conventional Griewank function (F6) and the conventional Weierstrass function (F8) with SR = 100%. Meanwhile, the search reliability of SLPSO in the rotated problems is marginally compromised because it suffers the demerit of being trapped in the local optima of certain rotated problems. Specifically, SLPSO obtains SR values of 96.67% in the functions F11 and F14, implying that it is stuck in the local optima of these two rotated functions in one out of 30 independent runs. Among the seven tested algorithms, the proposed PSO-ATVTC is the only algorithm that manages to maintain its excellent search reliability in the rotated problems because it completely solves five out of six rotated problems, i.e., the functions F9 and F11 to F14. Notably, PSO-ATVTC is the only algorithm that solves the functions F11 and F14 with SR = 100%. Table 5.9 reveals a similar performance deterioration, in terms of search reliability, experienced by the tested algorithms in dealing with the shifted problems. In general, the SR values produced by the tested algorithms in the shifted problems are better than those in the rotated problems. For example, both the FLPSO-QIW and OLPSO-L successfully solve some shifted problems completely or partially (e.g., the functions F15, F18, and F19 to F22), albeit these two PSO variants fail to solve any rotated problems within the predefined accuracy level ε. Similar observations can also be made for the APSO and FlexiPSO. These experimental findings suggest that most tested algorithms have a better capability to tackle problems with shifted fitness landscapes than those with rotated search spaces. As with the rotated problems, the search reliability of the proposed PSO-ATVTC remains steady in the shifted problems. More particularly, the PSO-ATVTC exhibits the most impressive search reliability among the seven tested algorithms by achieving seven best SR values out of the eight tested functions.
It is also worth mentioning that PSO-ATVTC is the only algorithm that completely solves the shifted function F22 with SR = 100% and partially solves the shifted function F17 with SR = 10%. The search reliabilities of FLPSO-QIW, FlexiPSO, OLPSO-L, and SLPSO in the shifted problems are also competitive,

considering that these PSO variants can completely or partially solve at least four of the tested problems. Finally, for the complex and composition problems, Table 5.9 reports that the search reliabilities of the majority of the tested algorithms are severely compromised by the challenging fitness landscapes of these two problem categories. Specifically, all of the involved algorithms fail to completely or partially solve any of the complex and composition problems, except for function F23, where PSO-ATVTC achieves the best SR value of 100%. Although PSO-ATVTC fails to solve the remaining complex and composition problems, the proposed algorithm is proven better than its peers by achieving six best Emean values out of the eight tested problems, as reported in Table 5.5.
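For illustration, the SR metric discussed above can be reproduced in a few lines. The sketch below assumes SR is simply the percentage of independent runs whose final error falls within the predefined accuracy level ε; the error values shown are hypothetical:

```python
def success_rate(final_errors, epsilon):
    """Percentage of independent runs whose final error is within epsilon."""
    solved = sum(1 for e in final_errors if e <= epsilon)
    return 100.0 * solved / len(final_errors)

# Hypothetical example: 29 of 30 runs reach the accuracy level, matching the
# SR = 96.67% reported for SLPSO on functions F11 and F14 in Table 5.9.
errors = [1e-10] * 29 + [3.5]      # one run stuck in a local optimum
sr = success_rate(errors, epsilon=1e-6)
print(round(sr, 2))                # -> 96.67
```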

5.3.3(d) Comparison of the Success Performance Results

Table 5.9 reports the SP values, i.e., the computation cost required by a tested algorithm to solve a given problem within the predefined accuracy level ε. Meanwhile, Figure 5.12 presents a total of ten representative convergence curves, i.e., two each from the conventional (F1 and F7), rotated (F9 and F12), shifted (F18 and F22), complex (F23 and F25), and composition (F28 and F30) problems, to visualize the search efficiencies of the tested algorithms. Table 5.9 shows that the proposed PSO-ATVTC exhibits the most competitive search efficiencies in the conventional and rotated problems. Specifically, PSO-ATVTC achieves seven and four best (i.e., smallest) SP values out of the eight conventional problems and the six rotated problems, respectively. These observations imply that PSO-ATVTC requires the least computation cost to solve the conventional and rotated problems within the predefined ε as compared with the six other tested algorithms. The excellent convergence characteristics of PSO-ATVTC in these two problem categories are qualitatively affirmed by the convergence curves shown in Figure 5.12. Specifically, the convergence curves of PSO-ATVTC in the conventional functions F1 and F7 [as depicted in Figures 5.12(a) and 5.12(b), respectively], as well as the rotated functions F9 and F12 [as depicted in Figures 5.12(c) and 5.12(d), respectively], drop off sharply at the

Figure 5.12: Convergence curves of 50-D problems: (a) F1, (b) F7, (c) F9, (d) F12, (e) F18, and (f) F22.

early stage of the optimization process. It is also notable that the convergence curves of functions F2, F4, F5, F6, F8, F9, and F11 to F14 are similar to those in Figures 5.12(a) to 5.12(d). Based on these observations, it can be deduced that the proposed PSO-ATVTC


Figure 5.12 (Continued): Convergence curves of 50-D problems: (g) F23, (h) F25, (i) F28, and (j) F30.

indeed has excellent capability to locate the global optima of most conventional and rotated problems with a small number of fitness evaluations (FEs). As reported in Table 5.9, PSO-ATVTC successfully maintains its excellent search efficiency in solving the shifted problems. Specifically, the proposed algorithm achieves six (out of eight) best SP values, in functions F15, F17, F18, and F20 to F22. The competitive convergence speed demonstrated by PSO-ATVTC in the shifted problems is also verified by the convergence curves in Figures 5.12(e) and 5.12(f). More particularly, the convergence curve of function F18 [as depicted in Figure 5.12(e)] reveals that all tested algorithms, including PSO-ATVTC, are trapped in local optima of the search space during the early stage of optimization. Nevertheless, the proposed PSO-ATVTC is the only algorithm that


successfully breaks out of these inferior regions of function F18 and solves this function with excellent accuracy. Meanwhile, the convergence curves of function F22 [as depicted in Figure 5.12(f)] illustrate the rapid convergence of PSO-ATVTC, especially in the early and middle stages of optimization. It is noteworthy that the convergence curves of function F19 are similar to those in Figure 5.12(e), whereas the convergence curves of functions F15, F16, F17, F20, and F21 are comparable with the one illustrated in Figure 5.12(f). As shown in Table 5.5, another observation worth noting is that the algorithm with the smallest SP value does not always guarantee the best search accuracy. For instance, although FlexiPSO has a smaller SP value than PSO-ATVTC in function F19, the latter yields significantly better Emean values than the former in this function. Finally, for the majority of the complex and composition problems, no SP values are available for search efficiency comparison, except for function F23, where the proposed PSO-ATVTC achieves the best SP value. The convergence curves of functions F23, F25, F28, and F30 [represented by Figures 5.12(g) to 5.12(j), respectively] qualitatively visualize the competitive convergence characteristics of PSO-ATVTC in these two problem categories as compared with its peers. Specifically, the convergence curve of function F23 [represented by Figure 5.12(g)] reveals the rapid convergence of PSO-ATVTC towards the near-global optimum regions of function F23 after the algorithm successfully escapes from the inferior regions of the search space. Meanwhile, the convergence curves of functions F25, F28, and F30 [represented by Figures 5.12(h) to 5.12(j), respectively] show that PSO-ATVTC converges faster than its peers during the early (i.e., F28 and F30) or middle (i.e., F25) stages of the search process.
It is notable that the convergence curves of functions F24 and F27 are similar to that in Figure 5.12(i), whereas the convergence curves of functions F26 and F29 are comparable with the one illustrated in Figure 5.12(h). These promising convergence characteristics enable PSO-ATVTC to locate and exploit the promising regions of the problem's search space earlier than its peers. This explains the capability of PSO-ATVTC to yield promising solutions in both complex and composition problems as compared with most of its peers.
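The SP values compared above can be sketched compactly. The snippet below assumes the commonly used CEC-style definition of success performance, i.e., the mean FEs of the successful runs scaled by the inverse of the empirical success probability; the run counts used here are hypothetical:

```python
def success_performance(fes_successful, total_runs):
    """CEC-style success performance: mean FEs of the successful runs,
    scaled by the inverse of the empirical success probability."""
    if not fes_successful:
        return float('inf')   # no successful run: SP is unavailable
    mean_fes = sum(fes_successful) / len(fes_successful)
    return mean_fes * total_runs / len(fes_successful)

# Hypothetical example: 15 of 30 runs succeed, each after 40,000 FEs.
sp = success_performance([40000] * 15, total_runs=30)
print(sp)   # -> 80000.0
```

Under this definition, an algorithm that succeeds rarely is penalized even if its successful runs are fast, which is why no SP values are reported for problems that no algorithm solves.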

5.3.3(e) Comparison of the Algorithm Complexity Results

In this subsection, the computational complexities of the seven tested algorithms are evaluated in Table 5.10 using the AC analysis described in Figure 2.11. Table 5.10 shows that PSO-ATVTC records the smallest AC value, implying that the proposed algorithm incurs the least computational complexity at D = 50. Moreover, it is important to emphasize that the AC value of PSO-ATVTC is three times smaller than that of SLPSO, despite the fact that these two PSO variants share some similarities in terms of search mechanisms. To recap, both PSO-ATVTC and SLPSO are equipped with their respective mechanisms to adaptively assign different exploration/exploitation strengths to different particles during the search process. The simulation results in Table 5.10 reveal that the mechanisms employed by PSO-ATVTC to adaptively tune the particles' exploration/exploitation strengths are more effective and less complicated than those of SLPSO. Meanwhile, the AC values recorded by APSO, FlexiPSO, FPSO, and OLPSO-L are in a similar range to that of PSO-ATVTC. This suggests that the modifications proposed in PSO-ATVTC are no more complex than those of the former four algorithms, albeit the search performance (i.e., Emean, SR, and SP) achieved by PSO-ATVTC significantly outperforms that of APSO, FlexiPSO, FPSO, and OLPSO-L. Finally, although FLPSO-QIW exhibits promising search performance on some selected benchmarks, Table 5.10 shows that the computational complexity of this PSO variant is much higher than that of the proposed PSO-ATVTC. Specifically, the AC value yielded by FLPSO-QIW is 1.56E+04, which is 8.62 times higher than that of PSO-ATVTC (i.e., 1.81E+03). Based on the experimental results in Table 5.10, it can be concluded that the proposed PSO-ATVTC emerges as a better optimizer than its compared peers.
The innovations introduced into the algorithmic framework of PSO-ATVTC to adaptively adjust the particle's exploration/exploitation strengths successfully enhance the algorithm's search performance without incurring excessive computational complexity.


Table 5.10 AC results of the PSO-ATVTC and six other PSO variants at D = 50

Algorithm    T0         T1         T̂2         AC
APSO         1.88E-01   4.19E+00   1.37E+03   7.27E+03
FLPSO-QIW    1.88E-01   4.19E+00   2.95E+03   1.56E+04
FlexiPSO     1.88E-01   4.19E+00   6.19E+02   3.27E+03
FPSO         1.88E-01   4.19E+00   5.83E+02   3.08E+03
OLPSO-L      1.88E-01   4.19E+00   4.60E+02   2.42E+03
SLPSO        1.88E-01   4.19E+00   1.04E+03   5.51E+03
PSO-ATVTC    1.88E-01   4.19E+00   3.44E+02   1.81E+03
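The AC entries in Table 5.10 are numerically consistent with the standard complexity measure AC = (T̂2 − T1) / T0, where T0 and T1 are problem-independent timing baselines and T̂2 is the mean execution time of the algorithm itself; the check below reproduces two table entries from the tabulated timings (the interpretation of the timing terms follows the common CEC convention and is an assumption here):

```python
def algorithm_complexity(t0, t1, t2_hat):
    """AC = (T_hat_2 - T1) / T0, assuming the CEC-style complexity measure."""
    return (t2_hat - t1) / t0

t0, t1 = 1.88e-01, 4.19e+00
ac_apso = algorithm_complexity(t0, t1, 1.37e+03)    # APSO row of Table 5.10
ac_atvtc = algorithm_complexity(t0, t1, 3.44e+02)   # PSO-ATVTC row
print(ac_apso)    # close to the tabulated 7.27E+03
print(ac_atvtc)   # close to the tabulated 1.81E+03
```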

5.3.4 Effect of Different Topology Connectivity Modification Strategies

Considering that the ATVTC module is one of the key factors determining the search performance of PSO-ATVTC, it is interesting to investigate the impact on PSO-ATVTC when the ATVTC module is absent. In addition, it is also worth investigating the effects of different TCM strategies on the performance of PSO-ATVTC when each of the mentioned strategies is individually employed to vary the particle's topology connectivity. To comprehensively investigate the effectiveness of the ATVTC module and each of its TCM strategies, this research work studies the search performance of (1) PSO-ATVTC without the ATVTC module (PSO-ATVTC1), (2) PSO-ATVTC with the Decrease strategy only (PSO-ATVTC2), (3) PSO-ATVTC with the Increase strategy only (PSO-ATVTC3), (4) PSO-ATVTC with the Shuffle strategy only (PSO-ATVTC4), and (5) the complete PSO-ATVTC. For PSO-ATVTC2 and PSO-ATVTC3, the topology connectivity of each particle is first initialized to TCmax and TCmin, respectively. The topology connectivity of all particles in these two PSO-ATVTC variants is then linearly decreased and increased, respectively, with the number of fitness evaluations (FEs). Meanwhile, the topology connectivity of each particle in PSO-ATVTC1 and PSO-ATVTC4 is randomly initialized in the range of [TCmin, TCmax]. Unlike PSO-ATVTC1, which maintains the topology connectivity throughout the optimization process, PSO-ATVTC4 performs the Shuffle strategy when the particle's topology connectivity cannot be further increased or decreased. All PSO-ATVTC variants employ the same proposed learning framework, to ensure that any performance deviations observed are due to the types of TCM strategies adopted.
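The connectivity schedules distinguishing these variants can be sketched as follows. The bounds TC_MIN and TC_MAX and the rounding scheme below are illustrative stand-ins, not the thesis' exact settings:

```python
import random

TC_MIN, TC_MAX = 2, 10   # illustrative bounds for topology connectivity

def linear_schedule(fes, max_fes, decrease=True):
    """PSO-ATVTC2/3 style: linearly decrease (or increase) TC with the FEs count."""
    frac = fes / max_fes
    if decrease:   # PSO-ATVTC2: start at TC_MAX, end at TC_MIN
        return round(TC_MAX - (TC_MAX - TC_MIN) * frac)
    return round(TC_MIN + (TC_MAX - TC_MIN) * frac)   # PSO-ATVTC3

def shuffle(tc):
    """PSO-ATVTC4 style: random reassignment once TC hits a bound and can no
    longer be increased or decreased."""
    if tc in (TC_MIN, TC_MAX):
        return random.randint(TC_MIN, TC_MAX)
    return tc

print(linear_schedule(0, 100000))        # -> 10 (TC_MAX at the start)
print(linear_schedule(100000, 100000))   # -> 2  (TC_MIN at the end)
```

PSO-ATVTC1 corresponds to keeping the randomly initialized TC fixed, i.e., applying none of these update rules.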


The simulation results (i.e., Emean and %Improve values) of all PSO-ATVTC variants on each tested benchmark are presented in Table 5.11. In addition, the comparative study of all PSO-ATVTC variants in each problem category is summarized as #BME and average %Improve in Table 5.12. As shown in Tables 5.11 and 5.12, all PSO-ATVTC variants achieve performance improvements over BPSO in solving the 30 employed benchmarks. These observations imply that the combination of the proposed learning framework with any of the TCM strategies in the ATVTC module contributes to the enhancement of the algorithm's search accuracy. Among the five compared variants, the complete PSO-ATVTC achieves the best overall average %Improve value of 87.606%, followed by PSO-ATVTC4 (84.783%), PSO-ATVTC1 (83.228%), PSO-ATVTC2 (82.886%), and PSO-ATVTC3 (80.027%). The excellent performance of the complete PSO-ATVTC is also verified by the #BME values reported in Table 5.12, considering that it achieves 27 best Emean values, i.e., 1.80 times more than the second-ranked PSO-ATVTC4. To be more specific, the experimental results in Tables 5.11 and 5.12 reveal that the search accuracies of all PSO-ATVTC variants are equally competitive in solving the conventional problems. Meanwhile, it can be observed from Tables 5.11 and 5.12 that all PSO-ATVTC variants perform excellently in the rotated problems, except for PSO-ATVTC3. Based on the relatively large Emean values produced by PSO-ATVTC3 in the rotated problems, it is conjectured that the combination of the proposed learning framework and the Increase strategy compromises the algorithm's capability of jumping out of the local optima basins of the rotated search space. Despite having competitive performance in the conventional problems, Tables 5.11 and 5.12 report that the search accuracies of PSO-ATVTC1, PSO-ATVTC2, and PSO-ATVTC4 are still unsatisfactory in most of the shifted, complex, and composition problems.
The combinations of the proposed learning framework with any individual TCM strategy are thus still insufficient for the algorithm to handle problems with shifted or more complicated fitness landscapes. It is also important to mention that although PSO-ATVTC3 has poorer average %Improve values than PSO-ATVTC1,

Table 5.11 Comparison of PSO-ATVTC variants with BPSO in 50-D problems [each cell reports Emean (%Improve)]

Function  BPSO          PSO-ATVTC1          PSO-ATVTC2          PSO-ATVTC3          PSO-ATVTC4          PSO-ATVTC
F1        4.67E+03 (-)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F2        2.08E+04 (-)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F3        2.10E+02 (-)  2.01E+01 (90.407)   4.48E+01 (78.621)   9.80E+00 (95.321)   1.61E+01 (92.339)   2.13E+01 (89.828)
F4        1.15E+02 (-)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F5        1.14E+02 (-)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F6        3.92E+01 (-)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F7        1.21E+01 (-)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F8        8.09E+00 (-)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F9        2.57E+04 (-)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F10       1.08E+02 (-)  2.83E+01 (73.875)   2.41E+01 (77.760)   4.18E+01 (61.434)   2.63E+01 (75.723)   2.34E+01 (78.420)
F11       1.70E+02 (-)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F12       2.00E+02 (-)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F13       2.04E+02 (-)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F14       5.80E+01 (-)  0.00E+00 (100.000)  6.40E+01 (-10.321)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F15       2.31E+04 (-)  1.18E-05 (100.000)  3.84E-08 (100.000)  1.39E-08 (100.000)  1.03E-07 (100.000)  1.48E-13 (100.000)
F16       7.72E+04 (-)  3.14E-02 (100.000)  3.88E+03 (94.975)   3.01E-03 (100.000)  1.30E-02 (100.000)  5.92E-02 (100.000)
F17       9.59E+09 (-)  1.51E+00 (100.000)  1.09E+00 (100.000)  1.25E+00 (100.000)  4.31E+00 (100.000)  6.68E-01 (100.000)
F18       2.93E+02 (-)  7.20E+01 (75.447)   4.30E+01 (85.345)   5.50E-07 (100.000)  1.77E+01 (93.956)   1.48E-13 (100.000)
F19       3.14E+02 (-)  6.36E+01 (79.753)   5.21E+01 (83.418)   7.25E-07 (100.000)  8.20E+00 (97.390)   1.82E-13 (100.000)
F20       7.21E+02 (-)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)  0.00E+00 (100.000)
F21       1.47E+01 (-)  7.62E-01 (94.798)   5.50E-04 (99.996)   5.40E-05 (100.000)  4.70E-04 (99.997)   2.79E-13 (100.000)
F22       4.99E+01 (-)  1.36E+01 (72.764)   8.51E+00 (82.953)   2.17E-01 (99.564)   1.10E+01 (78.016)   1.42E-14 (100.000)
F23       7.05E+02 (-)  1.93E-02 (99.997)   7.37E-03 (99.999)   2.20E-02 (99.997)   1.72E-02 (99.998)   8.32E-11 (100.000)
F24       2.10E+01 (-)  2.12E+01 (-0.704)   2.11E+01 (-0.445)   2.09E+01 (0.650)    2.12E+01 (-0.626)   2.10E+01 (-0.105)
F25       6.11E+08 (-)  1.55E+07 (97.463)   2.53E+07 (95.868)   5.96E+06 (99.025)   9.54E+05 (99.844)   7.38E+05 (99.879)
F26       2.64E+04 (-)  8.54E+00 (99.968)   7.74E+00 (99.971)   1.31E+00 (99.995)   3.87E+00 (99.985)   9.92E-01 (99.996)
F27       5.35E+02 (-)  2.60E+02 (51.472)   3.92E+02 (26.709)   3.47E+02 (35.158)   3.01E+02 (43.731)   2.51E+02 (53.106)
F28       5.59E+02 (-)  3.94E+02 (29.585)   5.16E+02 (7.671)    4.65E+02 (16.759)   3.85E+02 (31.106)   1.40E+02 (75.035)
F29       1.07E+03 (-)  9.00E+02 (15.833)   9.31E+02 (12.890)   9.00E+02 (15.833)   9.00E+02 (15.833)   9.00E+02 (15.833)
F30       1.07E+03 (-)  9.00E+02 (16.185)   9.44E+02 (12.067)   9.00E+02 (16.185)   9.00E+02 (16.185)   9.00E+02 (16.185)

Table 5.12 Summarized comparison results of PSO-ATVTC variants with BPSO in each problem category [each cell reports #BME (average %Improve)]

Problem category            BPSO   PSO-ATVTC1   PSO-ATVTC2   PSO-ATVTC3   PSO-ATVTC4   PSO-ATVTC
Conventional (F1 to F8)     0 (-)  7 (98.801)   8 (99.415)   7 (97.328)   7 (99.042)   7 (98.729)
Rotated (F9 to F14)         0 (-)  5 (95.646)   5 (96.293)   4 (75.185)   5 (95.954)   6 (96.403)
Shifted (F15 to F22)        0 (-)  1 (90.345)   2 (93.964)   1 (99.317)   1 (96.170)   7 (100.000)
Complex (F23 to F26)        0 (-)  0 (74.181)   0 (73.848)   1 (74.917)   0 (74.800)   3 (74.943)
Composition (F27 to F30)    0 (-)  2 (28.269)   2 (16.599)   0 (19.218)   2 (26.714)   4 (40.040)
Overall (F1 to F30)         0 (-)  15 (83.228)  17 (82.886)  13 (80.027)  15 (84.783)  27 (87.606)

PSO-ATVTC2, and PSO-ATVTC4 in the rotated problems, the former outperforms the latter three in solving the shifted and complex problems. These observations suggest that the search dynamics of PSO-ATVTC3 might be very different from those of PSO-ATVTC1, PSO-ATVTC2, and PSO-ATVTC4. Finally, it can be observed from Tables 5.11 and 5.12 that the complete PSO-ATVTC outperforms the other PSO-ATVTC variants in all problem categories. To be specific, the complete PSO-ATVTC achieves seven (out of eight), six (out of six), seven (out of eight), three (out of four), and four (out of four) best Emean values in the conventional, rotated, shifted, complex, and composition problems, respectively. Unlike PSO-ATVTC1 and PSO-ATVTC2, which only perform well in certain composition functions, the search accuracy exhibited by the complete PSO-ATVTC is consistently good across all four composition problems. Such an observation is reasonable because the ATVTC module of the complete PSO-ATVTC integrates all the TCM strategies employed by PSO-ATVTC2, PSO-ATVTC3, and PSO-ATVTC4. This merit enables the complete PSO-ATVTC to adaptively modify the particle's topology connectivity (and hence exploration/exploitation strengths) based on the particle's search status. The superior performance of the complete PSO-ATVTC in all tested problems reveals that the ATVTC module and the proposed learning framework are integrated effectively. None of the contributions of these two


modifications are compromised when PSO-ATVTC is used to solve different types of problems.
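The %Improve values discussed throughout this comparison behave like the usual relative error reduction against the BPSO baseline, %Improve = 100 × (Emean_BPSO − Emean_variant) / Emean_BPSO; the sketch below assumes that definition (small deviations from the tabulated values arise because the table entries are rounded to three significant figures):

```python
def percent_improve(e_baseline, e_variant):
    """Relative reduction of Emean against the BPSO baseline, in percent."""
    return 100.0 * (e_baseline - e_variant) / e_baseline

# F10: BPSO Emean = 1.08E+02, complete PSO-ATVTC Emean = 2.34E+01
print(round(percent_improve(1.08e2, 2.34e1), 3))   # close to the tabulated 78.420
# Negative values (e.g., F24) indicate a variant slightly worse than BPSO
print(percent_improve(2.10e1, 2.12e1) < 0)         # -> True
```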

5.3.5 Comparison with Other State-of-the-Art Metaheuristic Search Algorithms

In this subsection, the search performance of the proposed PSO-ATVTC is compared with five cutting-edge MS algorithms, namely RCCRO, RCCBBO, GSO, OXDE, and OCABC. The characteristics of RCCRO, RCCBBO, and GSO have been explained in Section 3.3.5(b), whereas Section 4.3.5 provides the descriptions of OXDE and OCABC. A total of ten 30-D conventional problems are employed in this subsection to compare the search performance of PSO-ATVTC and the five other MS algorithms. The parameter settings of the compared MS algorithms follow the recommendations of their respective authors. The Emean and SD values yielded by all tested algorithms are reported in Table 5.13, and these results are summarized as w/t/l and #BME. It is important to mention that the results of the compared MS peers are extracted from the literature (Wang et al., 2012, Gao et al., 2013, Gong et al., 2010, Lam et al., 2012, He et al., 2009). Considering that the simulation result of OCABC in solving the Schwefel 1.2 problem is not available, the Emean and SD values of this MS peer are denoted as "NA". From Table 5.13, it can be observed that the proposed PSO-ATVTC yields the best search accuracy in solving the majority of the tested functions. Specifically, PSO-ATVTC produces the lowest Emean values in eight out of the ten problems, i.e., 2.67 times more than the second-ranked OCABC, which manages to solve three problems with Emean = 0.00E+00. It is also important to mention that PSO-ATVTC is the only algorithm that successfully locates the global optima of the Sphere, Schwefel 2.22, Schwefel 1.2, and Schwefel 2.21 functions. Although the search accuracy of PSO-ATVTC in tackling the Rosenbrock and Quartic functions is not as good as that of the other compared MS peers, the former outperforms the latter in the remaining functions.


Table 5.13 Comparisons between PSO-ATVTC and other tested MS variants in optimizing 30-D functions [each cell reports Emean (SD)]

Function       RCCRO                 RCCBBO                GSO                   OXDE                  OCABC                 PSO-ATVTC
Sphere         6.43E-07 (2.09E-07)   1.39E-03 (5.50E-04)   1.95E-08 (1.16E-08)   1.58E-16 (1.41E-16)   4.32E-43 (8.16E-43)   0.00E+00 (0.00E+00)
Schwefel 2.22  2.19E-03 (4.34E-04)   7.99E-02 (1.44E-02)   3.70E-05 (8.62E-05)   4.38E-12 (1.93E-12)   1.17E-22 (7.13E-23)   0.00E+00 (0.00E+00)
Schwefel 1.2   2.97E-07 (1.15E-07)   2.27E+01 (1.03E+01)   5.78E+00 (3.68E+00)   6.41E-07 (4.98E-07)   NA                    0.00E+00 (0.00E+00)
Schwefel 2.21  9.32E-03 (3.66E-03)   3.09E-02 (7.27E-03)   1.08E-01 (3.99E-02)   1.49E+00 (9.62E-01)   5.67E-01 (2.73E-01)   0.00E+00 (0.00E+00)
Rosenbrock     2.71E+01 (3.43E+01)   5.54E+01 (3.52E+01)   4.98E+01 (3.02E+01)   1.59E-01 (7.97E-01)   7.89E-01 (6.27E-01)   1.34E+01 (1.01E+00)
Step           0.00E+00 (0.00E+00)   0.00E+00 (0.00E+00)   1.60E-02 (1.33E-01)   0.00E+00 (0.00E+00)   0.00E+00 (0.00E+00)   0.00E+00 (0.00E+00)
Quartic        5.41E-03 (2.99E-03)   1.75E-02 (6.43E-03)   7.38E-02 (9.26E-02)   2.95E-03 (1.32E-03)   4.39E-03 (2.03E-03)   8.55E+00 (3.13E-01)
Rastrigin      9.08E-04 (2.88E-04)   2.62E-02 (9.76E-03)   1.02E+00 (9.51E-01)   4.06E+00 (1.95E+00)   0.00E+00 (0.00E+00)   0.00E+00 (0.00E+00)
Ackley         1.94E-03 (4.19E-04)   2.51E-02 (5.51E-03)   2.66E-05 (3.08E-05)   2.99E-09 (1.54E-09)   5.32E-15 (1.82E-15)   0.00E+00 (0.00E+00)
Griewank       1.12E-02 (1.62E-02)   4.82E-01 (8.49E-02)   3.08E-02 (3.09E-02)   1.48E-03 (3.02E-03)   0.00E+00 (0.00E+00)   0.00E+00 (0.00E+00)
w/t/l          8/1/1                 7/1/2                 9/0/1                 7/1/2                 4/3/2                 -
#BME           2                     1                     0                     2                     3                     8

5.3.6 Comparison in Real-World Problems

This subsection performs a comparative study of the proposed PSO-ATVTC in solving (1) the gear train design problem (Sandgren, 1990), (2) the frequency-modulated (FM) sound synthesis problem (Das and Suganthan, 2010), and (3) the spread spectrum radar polyphase code design problem (Das and Suganthan, 2010). The general descriptions of these three engineering problems have been provided in Section 2.4.2. All six PSO variants employed in the previous experiments (Section 5.3.3) are compared with the proposed PSO-ATVTC in solving these three problems. The simulation settings of the three engineering design problems are summarized in Table 3.20. Meanwhile, the simulation results [i.e., Emean, SD, h, and mean computational time (tmean)] yielded by all tested algorithms over the 30 independent runs for these three real-world problems are reported in Tables 5.14 to 5.16.
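For reference, the gear train design problem (Sandgren, 1990) is commonly formulated as finding four integer tooth counts in [12, 60] that bring the gear ratio as close as possible to 1/6.931; the sketch below assumes this standard formulation:

```python
def gear_train_cost(ta, tb, tc, td):
    """Squared deviation of the gear ratio (ta*tb)/(tc*td) from 1/6.931."""
    return (1.0 / 6.931 - (ta * tb) / (tc * td)) ** 2

# A well-known near-optimal integer solution for this benchmark:
cost = gear_train_cost(19, 16, 43, 49)
print(cost < 1e-11)   # -> True (the cost is on the order of 2.7e-12)
```

The tiny optimal cost explains the very small Emean magnitudes (1E-07 to 1E-10) reported for this problem in Table 5.14.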


Table 5.14 Simulation results of PSO-ATVTC and six other PSO variants in the gear train design problem

Algorithm    Emean      SD         h   tmean
APSO         1.28E-08   1.70E-08   +   1.11E+02
FLPSO-QIW    3.34E-10   5.78E-10   +   1.17E+02
FlexiPSO     2.36E-09   5.78E-10   +   9.24E+01
FPSO         7.48E-07   2.43E-06   +   9.21E+01
OLPSO-L      6.79E-09   1.25E-08   +   3.83E+01
SLPSO        8.80E-10   6.78E-10   +   9.04E+01
PSO-ATVTC    1.33E-10   2.76E-10   -   1.53E+01

Table 5.15 Simulation results of PSO-ATVTC and six other PSO variants in the FM sound synthesis problem

Algorithm    Emean      SD         h   tmean
APSO         2.06E+01   5.46E+00   +   1.11E+02
FLPSO-QIW    5.23E+00   5.90E+00       1.14E+02
FlexiPSO     2.17E+01   5.75E+00   +   7.96E+01
FPSO         1.75E+01   4.64E+00   +   7.90E+01
OLPSO-L      1.64E+01   5.94E+00   +   3.55E+01
SLPSO        1.64E+01   5.20E+00   +   3.46E+01
PSO-ATVTC    5.77E+00   3.72E+00   -   2.37E+01

Table 5.16 Simulation results of PSO-ATVTC and six other PSO variants in the spread spectrum radar polyphase code design problem

Algorithm    Emean      SD         h   tmean
APSO         1.33E+00   1.92E-01   +   4.96E+02
FLPSO-QIW    1.02E+00   6.88E-02   =   9.64E+02
FlexiPSO     1.22E+00   2.48E-01   +   2.55E+02
FPSO         1.13E+00   1.30E-01   +   2.48E+02
OLPSO-L      1.27E+00   1.97E-01   +   1.80E+02
SLPSO        1.28E+00   1.50E-01   +   2.45E+02
PSO-ATVTC    1.03E+00   2.13E-01   -   1.09E+02

As shown in Table 5.14, the proposed PSO-ATVTC is the best optimizer for the gear train design problem because it achieves the best Emean value among the seven tested algorithms. The excellent performance of PSO-ATVTC in this engineering design problem is also verified by the Wilcoxon test: the h values in Table 5.14 report that the search accuracy of PSO-ATVTC is statistically better than that of the six other compared algorithms. Besides exhibiting the best search accuracy in solving the gear train design problem, the proposed PSO-ATVTC also outperforms the six other compared peers in terms of mean computational time by producing the smallest tmean value. In other words, PSO-ATVTC requires the least computational overhead among the seven tested algorithms to solve this engineering design problem. Meanwhile, the simulation results in Tables 5.15 and 5.16 show that all seven tested algorithms exhibit comparable search accuracies in dealing with the FM sound synthesis


problem and the spread spectrum radar polyphase code design problem because the Emean values produced by the tested algorithms are relatively similar. Tables 5.15 and 5.16 reveal that the proposed PSO-ATVTC is the second best optimizer for these two design problems. Specifically, the Emean values obtained by PSO-ATVTC significantly outperform those of APSO, FlexiPSO, OLPSO-L, and SLPSO, suggesting that the former exhibits superior search accuracy relative to the latter four in solving both the FM sound synthesis and spread spectrum radar polyphase code design problems. It is also important to emphasize that although FLPSO-QIW produces slightly better Emean values than PSO-ATVTC, its excellent search accuracy in these two engineering design problems comes at the cost of huge computational overhead. Specifically, the tmean values required by FLPSO-QIW to solve the FM sound synthesis and spread spectrum radar polyphase code design problems are 4.81 times and 8.84 times higher than those of PSO-ATVTC, respectively. Based on the simulation results presented in Tables 5.14 to 5.16, it is observed that the majority of the compared PSO variants are incapable of balancing the performance improvement in terms of search accuracy against the extra computational overhead incurred. For example, although FLPSO-QIW can generally solve the three employed engineering design problems with competitive Emean values, it might be less feasible for some real-world applications because of its tendency to incur excessive computational overhead. Conversely, FlexiPSO, FPSO, OLPSO-L, and SLPSO, which generally consume lower computational overheads (i.e., lower tmean values), have relatively poor optimization capabilities for the three employed engineering design problems, as verified by their inferior Emean values.
It is also important to mention that although OLPSO-L and SLPSO exhibit competitive performance in solving the benchmark problems, these two PSO variants fail to maintain their excellent search performance when tackling the real-world problems. Compared with its peer algorithms, the proposed PSO-ATVTC tackles the gear train design, FM sound synthesis, and spread spectrum radar polyphase code design problems with promising search accuracy, without incurring a huge amount of computational overhead. These

experimental findings suggest that the proposed PSO-ATVTC can emerge as a feasible optimization tool for tackling real-world optimization problems.
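The h entries in Tables 5.14 to 5.16 summarize a pairwise Wilcoxon test between each competitor and PSO-ATVTC. The self-contained sketch below uses the rank-sum statistic with a normal approximation as a simplified stand-in; the thesis' exact test configuration may differ, and ties are not handled here:

```python
import math

def ranksum_h(competitor_errors, atvtc_errors, alpha=0.05):
    """Return '+', '-', or '=' depending on whether PSO-ATVTC is significantly
    better, significantly worse, or statistically indistinguishable."""
    n1, n2 = len(competitor_errors), len(atvtc_errors)
    combined = sorted((v, i) for i, v in
                      enumerate(competitor_errors + atvtc_errors))
    ranks = [0.0] * (n1 + n2)
    for rank, (_, idx) in enumerate(combined, start=1):
        ranks[idx] = rank
    w = sum(ranks[:n1])                       # rank sum of the competitor
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2.0))    # two-sided normal approximation
    if p >= alpha:
        return '='
    return '+' if z > 0 else '-'              # higher competitor errors -> '+'

# Hypothetical final-error samples over 30 runs each
worse = [1.0 + 0.01 * i for i in range(30)]
better = [0.01 * i for i in range(30)]
print(ranksum_h(worse, better))   # -> '+'
```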

5.3.7 Discussion

From the previously reported experimental results, the proposed PSO-ATVTC is verified to have superior search accuracy, search reliability, and convergence speed compared with other state-of-the-art PSO variants and well-established MS algorithms. Focusing on the algorithmic design of the proposed PSO-ATVTC, it can be deduced that the outstanding search performance of PSO-ATVTC is attributed to the two major modifications proposed in this chapter, i.e., the ATVTC module and the proposed learning framework. Particularly, the ATVTC module aims to assign appropriate exploration/exploitation strengths to each PSO-ATVTC particle by adaptively varying its topology connectivity. The idea of this module is motivated by the following experimental findings from earlier studies, namely, (1) different regions of an optimization problem may have differently shaped fitness landscapes (Li et al., 2012) and (2) PSO with different topology connectivities has different exploration/exploitation strengths (Kennedy, 1999). Moreover, the success of the EPUS (Hsieh et al., 2009) and APTS (Zhu et al., 2013) strategies in tuning the population sizes of PSO and DE has inspired this research work to design a systematic mechanism that adaptively varies the topology connectivity of each PSO-ATVTC particle. Based on the search performance of each particle and its location in the given problem's fitness landscape, the proposed ATVTC module can increase, decrease, or randomly change the particle's topology connectivity, thereby adaptively varying the particle's exploration/exploitation strengths with time. The importance of integrating all TCM strategies (i.e., Increase, Decrease, and Shuffle) into the ATVTC module has been carefully studied in Section 5.3.4. As shown in Tables 5.11 and 5.12, PSO-ATVTC with the complete ATVTC module outperforms those with incomplete ATVTC modules, namely, PSO-ATVTC1, PSO-ATVTC2, PSO-ATVTC3, and PSO-ATVTC4. The inferior performance of PSO-ATVTC1 is due to the

absence of the ATVTC module. Without this module, the topology connectivity of each PSO-ATVTC1 particle remains unchanged throughout the entire search process. Therefore, the exploration/exploitation strength assigned to each PSO-ATVTC1 particle is the same, regardless of the particle's search performance and location in the search space. The inability of PSO-ATVTC1 to vary the particle's exploration/exploitation strengths inevitably restricts the algorithm's search performance. Although PSO-ATVTC2 and PSO-ATVTC3 are capable of dynamically changing the particle's topology connectivity with time, the TCM strategies adopted by these two variants are unidirectional. Specifically, PSO-ATVTC2 gradually reduces the particle's topology connectivity with time, whereas PSO-ATVTC3 varies it in the reverse manner. As a consequence of employing such unidirectional TCM strategies, the search behaviors of the particles in PSO-ATVTC2 and PSO-ATVTC3 become increasingly explorative and exploitative, respectively, as the search process progresses. As mentioned earlier, particles in different locations of a complex search space need to perform the search with different exploration/exploitation strengths. The unidirectional TCM strategies adopted by PSO-ATVTC2 and PSO-ATVTC3 could therefore assign inappropriate exploration/exploitation strengths to certain particles, leading to performance deterioration. Finally, although PSO-ATVTC4 can vary the particle's topology connectivity in a bidirectional manner (i.e., increasing and decreasing), no systematic mechanism has been designed in this variant to decide which TCM strategy is more suitable for a particular particle. Unlike PSO-ATVTC, the PSO-ATVTC4 randomly increases or decreases the particle's topology connectivity, without considering its search performance.
Such a stochastic mechanism has a higher risk of mistakenly assigning an inappropriate topology connectivity to a PSO-ATVTC4 particle and subsequently compromises the algorithm's ability to tune the particle's exploration/exploitation strengths. The outperformance of PSO-ATVTC over PSO-ATVTC4, as reported in Tables 5.11 and 5.12, reveals that the systematic mechanism employed by the former algorithm in adjusting the particle's topology connectivity is more reliable than the stochastic mechanism employed by the latter.
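The bidirectional, performance-driven idea behind the ATVTC module can be illustrated with a minimal sketch. The function below is not the actual PSO-ATVTC implementation; the trigger conditions and the name `adapt_connectivity` are illustrative assumptions. It only conveys how Increase, Decrease, and Shuffle can be selected systematically from a particle's recent search performance rather than purely at random:

```python
import random

def adapt_connectivity(improved, k, k_min, k_max, rng=random):
    """Illustrative sketch of an ATVTC-style connectivity update.

    If the particle improved its personal best, its current search mode is
    working, so exploitation is intensified by increasing connectivity
    (Increase).  If it failed, connectivity is reduced to favour exploration
    (Decrease).  When connectivity is already at a bound, the value is
    randomly re-drawn (Shuffle) to offer a fresh search behaviour.
    The exact trigger conditions in PSO-ATVTC differ; this only conveys
    the bidirectional, performance-driven idea.
    """
    if improved and k < k_max:
        return k + 1, "Increase"      # success: move toward exploitation
    if not improved and k > k_min:
        return k - 1, "Decrease"      # failure: move toward exploration
    return rng.randint(k_min, k_max), "Shuffle"  # at a bound: re-draw
```

In contrast with the purely random mechanism of PSO-ATVTC4, the random Shuffle branch here is reached only when the deterministic Increase/Decrease rules cannot act.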

Meanwhile, the proposed learning framework employed by PSO-ATVTC consists of a new velocity update mechanism and a new NS operator. Two exemplars, cexp,i and sexp,i, are generated to update particle i's velocity and position. These exemplars are derived from particle i's neighborhood through Equation (5.2) and roulette wheel selection, respectively, to ensure that good-quality exemplars are used to guide particle i toward a more prominent region of the search space. Meanwhile, the NS operator is executed if particle i fails to update its personal best fitness through the new velocity update mechanism. In the NS operator, another exemplar, namely oexp,i, is used to further evolve particle i. The oexp,i exemplar is derived from cexp,i and sexp,i through a simple crossover process. The crossover process ensures that a different oexp,i exemplar is generated each time the NS operator is triggered, thereby guiding particle i to explore more unvisited regions of the search space. To further refine the solution quality of the global best solution, an EKE module is employed by PSO-ATVTC as a unique learning strategy for the global best particle. Specifically, the EKE module is triggered to extract useful information from the PSO-ATVTC particles to evolve the global best particle whenever the former successfully improve their personal best fitness via the new velocity update mechanism or the NS operator. Despite their competitive search performance, it is important to mention that the proposed algorithms (i.e., TPLPSO, ATLPSO-ELS, and PSO-ATVTC) and most existing PSO variants tend to restrict the whole population or an individual particle to one type of search strategy (i.e., exploration or exploitation). In other words, most existing PSO variants perform task allocation (i.e., the task of selecting a search strategy) at the population level and the individual level.
While these population-level and individual-level task allocation approaches are promising in improving the algorithm's performance, limited effort has been made to study the feasibility of PSO in performing task allocation at the dimension level, i.e., assigning each particle different search strategies in different dimensional components of the search space. Considering that each particle might have different characteristics in different dimensional components, an innovative research

work will be proposed in the following chapter to investigate whether the idea of dimension-level task allocation is useful for improving the search performance of PSO.

5.4 Summary

In this chapter, a new PSO variant called PSO-ATVTC is developed. An innovative yet efficient mechanism known as the ATVTC module is introduced into PSO-ATVTC to adaptively vary the particle's topology connectivity with time via the three proposed TCM strategies (i.e., Increase, Decrease, and Shuffle). This enables PSO-ATVTC to assign appropriate exploration/exploitation strengths to different particles based on their respective search statuses and locations in the search space. Considering that different PSO-ATVTC particles have different topology connectivity, a new learning framework is developed in PSO-ATVTC to effectively guide the particles' search directions based on the information acquired from their respective neighborhood members. The proposed learning framework consists of a new velocity update mechanism and a new NS operator, both of which involve the derivation of more promising exemplars to guide the PSO-ATVTC particles. The simulation results of an extensive performance comparison indicate that the proposed PSO-ATVTC dominates its PSO and MS peers in terms of search accuracy. The simulation results further prove that the Increase, Decrease, and Shuffle strategies in the ATVTC module are all useful for solving different problem categories. The integration of these three TCM strategies in the ATVTC module does not severely compromise their respective contributions during the optimization process. Finally, it is also important to emphasize that, despite exhibiting excellent search performance in solving problems with various types of fitness landscapes, the modifications proposed in PSO-ATVTC do not incur significantly high computational complexity or overhead, based on the reported results. These experimental findings suggest that the proposed PSO-ATVTC has promising potential to emerge as a feasible optimization tool for effectively tackling real-world optimization problems.

CHAPTER 6
PARTICLE SWARM OPTIMIZATION WITH DUAL-LEVEL TASK ALLOCATION

6.1 Introduction

The previous chapter mentioned that the research works proposed so far in this thesis (i.e., TPLPSO, ATLPSO-ELS, and PSO-ATVTC) tend to restrict (1) the entire (or a predefined portion of the) population or (2) an individual particle to performing one type of search task, in the effort of balancing the driving forces of the algorithm's exploration/exploitation searches. Similar observations can also be made for most existing PSO variants. For example, all particles of the Frankenstein PSO (FPSO) (Montes de Oca et al., 2009) exhibit exploitative behavior in the early stage of optimization, considering that the FPSO population is initialized with the fully-connected topology. As the search process progresses, the topology connectivity of each FPSO particle decreases with time and the search behavior of this PSO variant tends to become more explorative. On the other hand, the Self-Learning PSO (SLPSO) proposed by Li et al. (2012) employs an adaptive learning framework, which enables a particle to choose one of four search tasks (i.e., exploration, exploitation, convergence, and jumping out) based on the particle's location in a subregion of the search space. It is important to mention that once the search task of each SLPSO particle has been determined, the corresponding particle needs to perform the selected task in all of its dimensional components. Based on these observations, it can be concluded that most existing PSO variants perform task allocation (i.e., the mechanism of assigning a search task) at the population level and the individual level. While these population-level and individual-level task allocation approaches are promising in improving the algorithm's search performance, limited effort has been invested in studying the feasibility of PSO in performing task allocation at the dimension level. A recent study by Jin et al.
(2013) revealed that it could be more appropriate for every particle in the swarm to select a different search task for each dimensional component, based on its unique characteristics in the search space. In

the light of this finding, a new variant of PSO, namely the PSO with dual-level task allocation (PSO-DLTA), is proposed in this chapter. The proposed PSO-DLTA is an innovative framework that consists of two types of task allocation modules, i.e., the dimension-level task allocation (DTA) module and the individual-level task allocation (ITA) module. Unlike the TPLPSO, ATLPSO-ELS, and PSO-ATVTC proposed in the previous chapters, PSO-DLTA offers the distinct feature of assigning different search tasks to different dimensional components of a particle based on its characteristics in each dimensional component of the search space. The remainder of this chapter is organized as follows. First, the detailed framework description of the proposed PSO-DLTA is provided. Extensive experimental studies are subsequently performed to demonstrate the effectiveness of PSO-DLTA. Apart from this, the optimization capabilities of the four PSO variants proposed in this thesis (i.e., TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA) are also investigated and compared. The final section concludes the research work proposed in this chapter.

6.2 PSO with Dual-Level Task Allocation

The working mechanism of the proposed PSO-DLTA is systematically presented in the following manner. First, the research ideas that motivate the development of PSO-DLTA are provided. Next, a general description of PSO-DLTA is presented in sufficient detail, followed by the mechanisms of each algorithmic module employed by the proposed algorithm (i.e., the DTA and ITA modules). Finally, the complete framework of PSO-DLTA is revealed and some important remarks on this algorithm are highlighted.

6.2.1 Research Ideas of PSO-DLTA

As mentioned earlier, the majority of PSO variants, such as FPSO and SLPSO, perform task allocation at either the population or the individual level. Nevertheless, according to previous studies performed by van den Bergh and Engelbrecht (2004), different particles can have good values in different dimensional components of their position vectors. In

other words, each particle exhibits different characteristics in different dimensions of the search space during the optimization process, and these characteristics can be utilized to further improve the algorithm's search performance. Motivated by the findings of van den Bergh and Engelbrecht (2004), this chapter is devoted to investigating the feasibility of PSO in performing dimension-level task allocation, i.e., assigning each particle different search tasks in different dimensional components of the search space. It is important to mention that some well-established PSO variants, e.g., CLPSO (Liang et al., 2006), FLPSO-QIW (Tang et al., 2011), and DNLPSO (Nasir et al., 2012), allow the particle to learn from different exemplars in different dimensions of the search space. Nevertheless, no systematic mechanism has so far been designed in these variants to clearly specify which particular dimension of a particle should perform the exploration search or the exploitation search. Recently, Jin et al. (2013) performed a study to analyze the importance and effect of randomness on the search behavior of PSO. As shown in their work, the particles' movements in PSO with random coefficients are more complex than those in PSO without random coefficients. This is because the random coefficients in PSO allow the swarm to maintain a higher degree of diversity during the search process. The study of Jin et al. (2013) suggests that the presence and the absence of randomness in PSO guarantee a good balance between straying off-course (i.e., exploration) and staying close (i.e., exploitation) to the global optimal solution, respectively. Based on the outcomes of their study, Jin et al. (2013) proposed three dimension selection techniques in PSO to tackle the "two steps forward, one step back" phenomenon (van den Bergh and Engelbrecht, 2004).
The core idea of the dimension selection techniques is motivated by the fact that if a particle's dimension is far away from the global best particle Pg, then learning from Pg in that dimension is more urgent than in the dimensions that are similar (i.e., closer) to Pg. In other words, the dimension selection techniques proposed by Jin et al. (2013) allow every particle in the swarm to determine (1) which of its dimensions need to learn from Pg (i.e., exploitation) and (2) which of them are kept

unchanged in order to maintain the swarm diversity and prevent premature convergence (i.e., exploration). Although the initial motivation of the study performed by Jin et al. (2013) was to investigate the effects of randomness and why it is indispensable for PSO, their latter discovery is equally significant, if not more so, and it has inspired the core idea in developing the DTA module of PSO-DLTA. Specifically, the development of the three mentioned dimension selection techniques by Jin et al. (2013) and their success in enhancing the search performance of PSO imply that dimension-level task allocation emerges as another viable alternative for addressing the intense conflict between the exploration and exploitation searches of PSO. Based on the abovementioned discussions, this research work opines that there is scope for further enhancement of PSO by using the dimension-level task allocation approach, as compared to the population-level and individual-level task allocation approaches that have been more frequently explored. However, instead of using random coefficients to balance the exploration/exploitation strengths, this research work develops new dimension-level task allocation mechanisms in the DTA module to enable the particles to locate the more promising regions of the search space. The following subsections present the general description of the proposed PSO-DLTA, as well as the implementation details of both the DTA and ITA modules.

6.2.2 General Description of PSO-DLTA

This subsection provides a general description of the proposed PSO-DLTA. As mentioned in the earlier subsection, PSO-DLTA is developed by integrating the BPSO with two algorithmic modules known as the DTA module and the ITA module. Each of these modules offers a different mechanism for balancing the exploration/exploitation strengths of the PSO-DLTA particles during the search process. The descriptions of the DTA module and the ITA module are provided as follows.


As explained earlier, one of the appealing characteristics demonstrated by PSO-DLTA, as compared with most existing PSO variants, is its capability to perform dimension-level task allocation during the search process. This unique feature of PSO-DLTA is realized by the DTA module. Specifically, the DTA module first computes the distances between a target particle and the global best particle in every dimension of the search space. Based on these computed distances, the proposed PSO-DLTA can adaptively assign different search tasks to different dimensional components of a particle by employing a set of predefined heuristic rules in the DTA module (see Figure 6.1). Considering that there is no guarantee that the DTA module will always produce better solutions, the ITA module is employed as the alternative learning phase of PSO-DLTA. Specifically, the ITA module enables the PSO-DLTA particles that fail to achieve fitness improvement in their previous learning phase (i.e., the DTA module) to seek the optimal solution of a given problem by offering these particles new search directions. Unlike the DTA module, the ITA module promotes individual-level task allocation. In other words, all dimensional components of a particular particle perform the same search task in this alternative learning phase.
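The two-phase structure described above can be sketched as a minimal control flow. The names `dta_phase` and `ita_phase` are placeholders for the DTA and ITA modules detailed in the following subsections; each is assumed, for illustration only, to report whether the particle's personal best improved:

```python
def dual_level_step(particle, dta_phase, ita_phase):
    """Illustrative control flow of the two PSO-DLTA learning phases.

    `dta_phase` and `ita_phase` stand in for the DTA and ITA modules;
    each returns True when the particle's personal best fitness improved.
    The ITA module is attempted only when the DTA module fails, mirroring
    its role as the alternative learning phase.
    """
    if dta_phase(particle):
        return "DTA"          # dimension-level phase succeeded
    ita_phase(particle)       # individual-level fallback
    return "ITA"
```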

6.2.3 Dimension-Level Task Allocation Module

This subsection begins with descriptions of the metrics and rules employed by the proposed DTA module to assign a particle different search tasks in different dimensional components of the search space. Next, the working mechanism of each involved search task is discussed in depth. Finally, the complete implementation of the DTA module is provided.

6.2.3(a) Metric and Rules of DTA Module in Performing Task Allocations

The proposed DTA module works as follows. Initially, the absolute distances between particle i (i.e., Xi) and the global best particle (i.e., Pg) in each d-th dimension of the search space are computed as:

Rule 1: IF m_distancei ≤ 1.00E-Z THEN particle i performs the Relocation search.
Rule 2: IF m_distancei > 1.00E-Z AND distancei,d > m_distancei THEN particle i performs the Exploitation search in the d-th dimension.
Rule 3: IF m_distancei > 1.00E-Z AND distancei,d ≤ m_distancei THEN particle i performs the Exploration search in the d-th dimension.

Figure 6.1: IF-THEN rules employed by the DTA module in performing the task allocation in each dimension.

distancei,d  X i,d  Pg ,d

(6.1)

where distancei,d denotes the absolute distance between the current position vector of particle i and the position vector of Pg particle in a specific d-th dimension of the search space, represented by Xi,d and Pg,d, respectively. Based on the distancei,d values obtained, the mean distance of the current position vector of particle i from the position vector of Pg particle (i.e., m_distancei) are subsequently computed by using the following equation: m_distancei 

1 D distancei,d D d 1



(6.2)

where D represents the dimension of the search space. The symbols d and D must not be confused: the former refers to a specific dimension of the search space, whereas the latter denotes the total number of dimensions in the search space of a given optimization problem. It is important to mention that the derivation of both the distancei,d and m_distancei metrics is in fact inspired by the research work of Jin et al. (2013). Unlike their approach, which uses the m_distancei metric to decide which dimensions of particle i are selected to update the velocity without a random coefficient, the proposed DTA module employs distancei,d and m_distancei to determine the search task of particle i in each dimension. Specifically, the DTA module performs the dimension-level task allocation based on the three proposed IF-THEN rules summarized in Figure 6.1. The parameter Z is an integer that ranges from 1 to 10, and it is used to determine the permissible similarity


between the two compared particles before the relocation search is invoked. In the following subsections, the working mechanism of each search task employed by the DTA module (i.e., relocation, exploration, and exploitation) is described in sufficient detail.
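The metric computations of Equations (6.1) and (6.2) and the IF-THEN rules of Figure 6.1 can be sketched as follows. The function name is illustrative, and Z = 8 is an arbitrary choice from the permitted range 1 to 10:

```python
def assign_tasks(x, pg, z=8):
    """Sketch of Eqs. (6.1)-(6.2) and the Figure 6.1 rules.

    Returns ("Relocation", distances, mean) when Rule 1 fires; otherwise
    returns (per-dimension task list, distances, mean).  `x` and `pg` are
    the position vectors of particle i and the global best particle.
    """
    distances = [abs(xd - gd) for xd, gd in zip(x, pg)]   # Eq. (6.1)
    m_distance = sum(distances) / len(distances)          # Eq. (6.2)
    if m_distance <= 10.0 ** (-z):                        # Rule 1
        return "Relocation", distances, m_distance
    tasks = ["Exploitation" if dist > m_distance          # Rule 2
             else "Exploration"                           # Rule 3
             for dist in distances]
    return tasks, distances, m_distance
```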

6.2.3(b) Relocation Search

As stated by Rule 1 in Figure 6.1, the proposed DTA module assigns particle i to perform the relocation search when the mean distance between the current position vector of particle i and the position vector of the global best particle Pg (m_distancei) is significantly small, i.e., m_distancei ≤ 1.00E-Z. The relocation search can be considered a jumping-out mechanism that aims to assist particle i in escaping from potentially inferior regions of the search space. It involves a perturbation on a selected dimension of particle i. The necessity for particle i to perform the relocation search is justified by the fact that when the condition m_distancei ≤ 1.00E-Z is fulfilled, it can be deduced that particle i has clustered on the best solution found so far (i.e., the Pg particle) and is therefore extremely similar to the Pg particle. In most cases, the Pg solution found does not necessarily equal the global optimum of the optimization problem. It could be a local optimum located potentially far from the actual global optimum, and it thereby tends to deceive particle i. Once particle i converges on this misleading local optimum, it has little opportunity to explore other solution possibilities due to the loss of diversity. To address this issue, the relocation search is proposed to provide particle i with extra momentum to escape from the Pg particle when the similarity between these two particles meets the predefined threshold, i.e., m_distancei ≤ 1.00E-Z. Considering that particle i is likely to have some good structures of the global optimum when Rule 1 in Figure 6.1 is met, these good structures need to be preserved. Based on the recommendation of Zhan et al. (2009), only one dimension of the current position vector of particle i is randomly selected for perturbation, as shown:


Xi,d_r^new = Xi,d_r^old + r1 × (Xmax,d_r − Xmin,d_r) × N(0, 1)    (6.3)

where Xi,d_r^new and Xi,d_r^old denote the updated and previous position vectors of particle i in the randomly selected d_r-th dimension, respectively; r1 is a random number in the range of [-1, 1]; Xmax,d_r and Xmin,d_r represent the upper and lower bounds of the problem space in the d_r-th dimension, respectively; and N(0, 1) is a random number generated from the normal distribution with mean 0 and variance 1. According to Equation (6.3), the degree of perturbation on the selected d_r-th dimension of particle i depends on the random values generated by r1 and N(0, 1): smaller values of r1 and N(0, 1) introduce a smaller perturbation on particle i, and vice versa. Considering that every dimension of particle i has the same probability of being chosen, the relocation search can be regarded as being performed on every dimension in a statistical sense (Zhan et al., 2009). It is important to mention that the main purpose of the relocation search is to help particle i push itself out to a potentially better region when it is identified as clustering around a potential local optimum. The new position of particle i, i.e., Xi,d_r^new, replaces the previous position, Xi,d_r^old, regardless of whether the former has better or worse fitness than the latter. In other words, no greedy selection is employed by the DTA module to update particle i's position during the relocation search. If another better region is found by the perturbed particle i, it becomes the new global best particle. The rest of the swarm then follows this newly updated Pg particle to jump out of the local optimum and converge to the new promising region.
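A minimal sketch of the relocation search of Equation (6.3), assuming positions are plain Python lists; the function name is illustrative:

```python
import random

def relocate(x, x_min, x_max, rng):
    """Sketch of the relocation search, Eq. (6.3).

    Perturbs exactly one randomly chosen dimension d_r so that the good
    structure in the remaining dimensions is preserved; the new position
    is accepted as-is, without any greedy comparison.
    """
    x = list(x)                        # copy; keep the caller's list intact
    d_r = rng.randrange(len(x))        # dimension chosen for perturbation
    r1 = rng.uniform(-1.0, 1.0)
    x[d_r] += r1 * (x_max[d_r] - x_min[d_r]) * rng.gauss(0.0, 1.0)
    return x, d_r
```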

6.2.3(c) Exploitation Search

According to Rule 2 in Figure 6.1, the d-th component of particle i (Xi,d) is assigned the exploitation task when the absolute distance between the current position vector of particle i and the position vector of the Pg particle in the d-th dimension (Pg,d) of the search space, distancei,d, is larger than the mean distance, m_distancei (i.e., distancei,d > m_distancei). The rationale is that


if Xi,d is far away from Pg,d, the learning of the d-th component of particle i from Pg,d is more urgent compared with those components nearer to Pg,d. In other words, the exploitation search is prioritized when particle i is less similar to Pg in the d-th dimensional component. In the proposed DTA module, two trial position vectors are generated when the d-th component of particle i is assigned the exploitation search. To be specific, the first trial position vector of particle i, Xi,d^trial1, is computed by the following equation:

Xi,d^trial1 = Xi,d + ωVi,d + c1r3(Pi,d − Xi,d) + c2r4(Pg,d − Xi,d)    (6.4)

where Vi,d and Pi,d refer to the velocity and personal best position vectors of particle i in the d-th component; r3 and r4 are random numbers in the range of [0, 1]; c1 and c2 are the acceleration coefficients that control the influences of the self-cognitive (i.e., Pi) and social (i.e., Pg) components, respectively; and ω is the inertia weight of particle i. It is worth pointing out that Equation (6.4) is derived by substituting Equation (2.1) into Equation (2.2), whereby the group best experience of particle i, Pn, is assigned as the global best particle Pg. In other words, the first trial position vector Xi,d^trial1 is derived from the velocity and position update

equations of BPSO with the fully-connected topology, as previous studies revealed that the search behavior of fully-connected BPSO is more exploitative (Kennedy, 1999, Kennedy and Mendes, 2002). Meanwhile, the second trial position vector of particle i, Xi,d^trial2, is computed as follows:

Xi,d^trial2 = { Pa1,d^elite + r6(Pa2,d^non-elite − Pa3,d^non-elite),  if ObjV(Pa2^non-elite) ≤ ObjV(Pa3^non-elite)
             { Pa1,d^elite + r7(Pa3,d^non-elite − Pa2,d^non-elite),  otherwise    (6.5)

where P^elite is the elite group of PSO-DLTA, which stores the particles whose personal best fitness is ranked within the first quartile; P^non-elite is the non-elite group, which stores the remaining 3/4 of the PSO-DLTA population with inferior personal best fitness; a1 is the index of a particle randomly selected from the elite group; a2 and a3 are the indexes of particles randomly selected from the non-elite group; ObjV(Pa2^non-elite) and ObjV(Pa3^non-elite) refer to the objective function values (ObjV) of particles a2 and a3, respectively; and r6 and r7 are two random numbers in the range of [0, 1]. It is important to mention that the search behavior of Equation (6.5) shares some similarity with the mutation operator of differential evolution (DE) (Das and Suganthan, 2011). However, unlike classical DE, which randomly selects individuals from the population during mutation, the proposed approach, inspired by the social learning strategy (Montes de Oca et al., 2011), encourages the d-th component of particle i to perform exploitation around randomly selected elite particles with better fitness. As shown in Equation (6.5), particle i generates its second trial position vector in the d-th dimension, i.e., Xi,d^trial2, by performing a neighborhood search around fitter particles, i.e., those selected from the elite group. As fitter particles tend to have similar structures to Pg, the social learning strategy encourages particle i to exploit the already found optimal solution. This strategy enhances the likelihood of particle i generating a more promising value of Xi,d^trial2, without incurring the computational cost of acquiring that

knowledge individually from scratch.
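The two exploitation trial values can be sketched as below. This is an illustrative reading of Equations (6.4) and (6.5), not the thesis code: the parameter values (w, c1, c2), the layout of the elite and non-elite groups as lists of (position, ObjV) pairs, and the interpretation of the condition in Equation (6.5) as "perturb with the fitter-minus-worse non-elite difference" are assumptions made for this sketch:

```python
import random

def exploitation_trials(d, x, v, p_i, p_g, elite, non_elite,
                        w=0.7, c1=2.0, c2=2.0, rng=random):
    """Sketch of the two exploitation trial values for dimension d.

    trial1 follows Eq. (6.4): a fully-connected-topology BPSO step guided
    by the global best P_g.  trial2 follows Eq. (6.5) as reconstructed
    here: a neighbourhood search around a random elite particle, perturbed
    by the difference of two random non-elite particles (better minus
    worse).  `elite`/`non_elite` are lists of (position, ObjV) pairs.
    """
    r3, r4 = rng.random(), rng.random()
    trial1 = (x[d] + w * v[d] + c1 * r3 * (p_i[d] - x[d])
              + c2 * r4 * (p_g[d] - x[d]))                 # Eq. (6.4)

    a1 = rng.choice(elite)                                 # elite base point
    a2, a3 = rng.sample(non_elite, 2)                      # non-elite pair
    if a2[1] <= a3[1]:                                     # a2 is fitter
        trial2 = a1[0][d] + rng.random() * (a2[0][d] - a3[0][d])
    else:                                                  # a3 is fitter
        trial2 = a1[0][d] + rng.random() * (a3[0][d] - a2[0][d])
    return trial1, trial2
```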

6.2.3(d) Exploration Search

Finally, Rule 3 in Figure 6.1 states that when distancei,d ≤ m_distancei, particle i is assigned the exploration search in the d-th dimension of the search space. This is attributed to the fact that when distancei,d ≤ m_distancei, particle i is relatively similar to Pg in the d-th dimension. Thus, an exploration search is required to maintain the diversity of the swarm and thus prevent premature convergence. Similar to the exploitation search, two trial position vectors are derived when the d-th component of particle i is assigned the exploration search. The first trial position vector, Xi,d^trial1, is calculated as follows:

Xi,d^trial1 = Xi,d + ωVi,d + c1r8(Pi,d − Xi,d) + c2r9(Pl,d − Xi,d)    (6.6)

where r8 and r9 are random numbers in the range of [0, 1]. Similar to Equation (6.4), the derivation of Equation (6.6) also originates from Equations (2.1) and (2.2). Nevertheless, unlike Equation (6.4), the neighborhood best position Pn is assigned as the local best position Pl, implying that Xi,d^trial1 is computed from the velocity and position update equations of BPSO with the local ring topology. To recap, the local version of BPSO tends to exhibit a stronger explorative search behavior (Kennedy, 1999, Kennedy and Mendes, 2002). On the other hand, the second trial position vector of particle i, Xi,d^trial2, is computed as follows:

Xi,d^trial2 = { Pa4,d^non-elite + r10(Pa5,d^elite − Pa6,d^elite),  if ObjV(Pa5^elite) ≤ ObjV(Pa6^elite)
             { Pa4,d^non-elite + r11(Pa6,d^elite − Pa5,d^elite),  otherwise    (6.7)

where r10 and r11 are two random numbers in the range of [0, 1]; a4 is the index of a particle randomly selected from the non-elite group; a5 and a6 are the indexes of particles randomly selected from the elite group; and ObjV(Pa5^elite) and ObjV(Pa6^elite) refer to the ObjV values of particles a5 and a6, which are selected from the elite group. Similar to Equation (6.5), the derivation of Equation (6.7) is also motivated by the concept of social learning. However, in contrast to Equation (6.5), Equation (6.7) encourages particle i to perform the neighborhood search around particles selected from the non-elite group, considering that particles with inferior fitness tend to be located far away from Pg. The search strategy proposed in Equation (6.7) offers particle i the diversity to explore uncovered regions in the search space and thus prevents swarm stagnation.
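The exploration counterpart of Equation (6.7) can be sketched in the same style. Only the second trial vector is shown, since the first (Equation (6.6)) differs from the exploitation case only in using the local best Pl in place of Pg; the (position, ObjV) data layout and the interpretation of the condition as "fitter-minus-worse elite difference" are again assumptions of this sketch:

```python
import random

def exploration_trial2(d, elite, non_elite, rng=random):
    """Sketch of Eq. (6.7) as reconstructed here.

    The base point is a random non-elite particle (far from P_g,
    preserving diversity) and the perturbation is the fitter-minus-worse
    difference of two random elite particles in dimension d.
    `elite`/`non_elite` are lists of (position, ObjV) pairs.
    """
    a4 = rng.choice(non_elite)             # non-elite base point
    a5, a6 = rng.sample(elite, 2)          # elite pair
    if a5[1] <= a6[1]:                     # a5 is fitter
        return a4[0][d] + rng.random() * (a5[0][d] - a6[0][d])
    return a4[0][d] + rng.random() * (a6[0][d] - a5[0][d])
```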

6.2.3(e) Crossover Operation

As shown in the previous subsections, two trial position vectors, Xi^trial1 and Xi^trial2, are produced by Equations (6.4) to (6.7) when the condition m_distancei > 1.00E-Z is met, according to Rules 2 and 3 in Figure 6.1. The ObjV values of these two trial position vectors,


Crossover_operation (Xi^trial1, Xi^trial2, ObjV(Xi^trial1), ObjV(Xi^trial2))
Input: Trial position vectors (i.e., Xi^trial1 and Xi^trial2) and the corresponding objective function values [i.e., ObjV(Xi^trial1) and ObjV(Xi^trial2)]
1: Calculate the weightage values, Wi,k, of trial position vectors Xi^trial1 and Xi^trial2 with Equation (6.8);
2: Randomly select a dimension, dr;
3: for each dimension d do
4:   if d ≠ dr then
5:     Perform the roulette wheel selection based on the Wi,k of each trial position vector;
6:     Xi^trial3 = d-th component of the selected trial position vector;
7:   else if d = dr then
8:     Xi^trial3 = dr-th component of Xi^trial1 or Xi^trial2, whichever has the inferior fitness;
9:   end if
10: end for
Output: Trial position vector Xi^trial3

Figure 6.2: Crossover operation in the DTA module.

i.e., ObjV(Xi^trial1) and ObjV(Xi^trial2), are then evaluated and compared with the current ObjV value of particle i, ObjV(Xi). Since both Xi^trial1 and Xi^trial2 are generated in a stochastic manner, two possible outcomes can be anticipated when comparing the values of ObjV(Xi), ObjV(Xi^trial1), and ObjV(Xi^trial2): (1) the best trial position vector among Xi^trial1 and Xi^trial2 is superior to particle i, i.e., min[ObjV(Xi^trial1), ObjV(Xi^trial2)] < ObjV(Xi), or (2) the best trial position vector among Xi^trial1 and Xi^trial2 is inferior to particle i, i.e., min[ObjV(Xi^trial1), ObjV(Xi^trial2)] ≥ ObjV(Xi). In scenario 1, the best trial position vector found among Xi^trial1 and Xi^trial2 is used to update the current position of particle Xi. Meanwhile, in scenario 2, a third trial position vector, Xi^trial3, is generated by combining the useful information contained in Xi^trial1 and Xi^trial2 through the crossover operation, as illustrated in Figure 6.2. As the trial position vector with better fitness is more likely to contain useful information than the inferior one, the former is assigned a higher weightage value, Wi,k, as shown in Equation (6.8), where k = 1, 2 refers to the index of the trial position vectors.

Wi,k = { 1/[1 + ObjV(Xi^trialk)],  if ObjV(Xi^trialk) ≥ 0
       { 1 + |ObjV(Xi^trialk)|,   otherwise    (6.8)

To prevent Xi^trial3 from being derived solely from the fitter trial position vector among Xi^trial1 and Xi^trial2, one dimensional component dr of Xi^trial3, i.e., Xi^trial3(dr), is randomly selected. The value of Xi^trial3(dr) is then replaced with the dr-th component of Xi^trial1 or Xi^trial2, whichever has the inferior fitness. In what follows, the objective function value of Xi^trial3, i.e., ObjV(Xi^trial3), is evaluated and compared with the values of ObjV(Xi^trial1) and ObjV(Xi^trial2). The fittest trial position vector found among Xi^trial1, Xi^trial2, and Xi^trial3 is then selected to update the current position vector of particle Xi.
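The crossover of Figure 6.2 together with the weighting of Equation (6.8) can be sketched as follows, assuming minimization; since only two parents are involved, the per-dimension roulette wheel reduces to a biased coin flip on the normalized weights. The function name is illustrative:

```python
import random

def crossover(trial1, trial2, obj1, obj2, rng=random):
    """Sketch of the DTA crossover (Figure 6.2), minimization assumed.

    Each dimension of trial3 is drawn by roulette-wheel selection on the
    Eq. (6.8) weights; one randomly chosen dimension d_r is then forced to
    come from the *inferior* parent so that trial3 is never a pure copy of
    the fitter one.
    """
    def weight(obj):                       # Eq. (6.8)
        return 1.0 / (1.0 + obj) if obj >= 0 else 1.0 + abs(obj)

    w1, w2 = weight(obj1), weight(obj2)
    worse = trial1 if obj1 >= obj2 else trial2
    d_r = rng.randrange(len(trial1))
    trial3 = []
    for d in range(len(trial1)):
        if d == d_r:
            trial3.append(worse[d])        # forced inferior-parent component
        else:                              # roulette wheel over w1, w2
            pick = trial1 if rng.random() < w1 / (w1 + w2) else trial2
            trial3.append(pick[d])
    return trial3
```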

6.2.3(f) Complete Implementation of DTA Module

Based on the procedures described in the previous subsections, the complete implementation of the proposed DTA module is presented in Figure 6.3. It is important to mention that a particle i that successfully improves its personal best fitness via the DTA module might have useful information contained in certain components of its newly improved personal best position Pi. Intuitively, this information should be extracted to further improve the global best particle Pg. Thus, when particle i successfully finds a better Pi, the elitist-based knowledge (EKE) module, as presented in Figure 5.8, is triggered. Specifically, the EKE module iteratively checks each dimension of Pg, replacing the dimension with the corresponding dimensional value of Pi if Pg is improved by doing so. As explained earlier, the EKE module enables Pg to learn useful information from those dimensions of Pi that have been improved, and this module is expected to improve the algorithm's search accuracy and convergence speed.
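The dimension-wise extraction performed by the EKE module, as described above, can be sketched as a greedy per-dimension test; minimization and one fitness evaluation per trial swap are assumed, and the signature is illustrative:

```python
def eke(p_i, p_g, objective):
    """Sketch of the EKE module as described in the text.

    Iterates over each dimension of the global best P_g and keeps the
    corresponding value of the improved personal best P_i only when the
    swap lowers the objective (minimization assumed).  Each trial swap
    costs one fitness evaluation, counted in `fes`.
    """
    p_g = list(p_g)
    best = objective(p_g)
    fes = 0
    for d in range(len(p_g)):
        old = p_g[d]
        p_g[d] = p_i[d]            # try inheriting dimension d from P_i
        fes += 1
        trial = objective(p_g)
        if trial < best:
            best = trial           # keep the improvement
        else:
            p_g[d] = old           # revert the swap
    return p_g, best, fes
```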

6.2.4 Individual-Level Task Allocation Module

As mentioned earlier, the search mechanism in the DTA module is stochastic. Thus, there is no guarantee that the personal best fitness value of particle i can be improved through the DTA learning phase. When this scenario happens, an alternative learning phase is offered to particle i to further evolve its personal best position Pi through the proposed ITA module.


DTA_module(Vi, Xi, ObjV(Xi), Pi, ObjV(Pi), Pg, ObjV(Pg), fes, Pelite, Pnon-elite)
Input: Particle i's velocity (Vi), current position (Xi) and the corresponding objective function value [ObjV(Xi)], personal best position (Pi) and the corresponding objective function value [ObjV(Pi)], global best position (Pg) and the corresponding objective function value [ObjV(Pg)], number of fitness evaluations consumed (fes), elite group (Pelite), non-elite group (Pnon-elite)
1:  Calculate the distancei,d and m_distancei of particle i with Equations (6.1) and (6.2), respectively;
2:  if m_distancei ≤ 1.00E-Z then /*Rule 1, Relocation*/
3:      Calculate the new position vector of particle i, i.e., Xi, using Equation (6.3);
4:  else if m_distancei > 1.00E-Z then
5:      for each dimension d do
6:          if distancei,d > m_distancei then /*Rule 2, Exploitation*/
7:              Calculate the trial position vectors Xi,d^trial1 and Xi,d^trial2 with Equations (6.4) and (6.5), respectively;
8:          else if distancei,d ≤ m_distancei then /*Rule 3, Exploration*/
9:              Calculate the trial position vectors Xi,d^trial1 and Xi,d^trial2 with Equations (6.6) and (6.7), respectively;
10:         end if
11:     end for
12:     Perform the fitness evaluation on the trial position vectors Xi^trial1 and Xi^trial2;
13:     fes = fes + 2;
14:     if min[ObjV(Xi^trial1), ObjV(Xi^trial2)] < ObjV(Xi) then
15:         Update the current position and fitness of particle i with those of Xi^trial1 or Xi^trial2, whichever has better fitness;
16:     else if min[ObjV(Xi^trial1), ObjV(Xi^trial2)] ≥ ObjV(Xi) then /*perform crossover*/
17:         Perform Crossover_operation(Xi^trial1, Xi^trial2, ObjV(Xi^trial1), ObjV(Xi^trial2)) to produce Xi^trial3;
18:         Perform the fitness evaluation on the trial particle Xi^trial3;
19:         fes = fes + 1;
20:         Update the current position and fitness of particle i with those of Xi^trial1, Xi^trial2, or Xi^trial3, whichever has the best fitness;
21:     end if
22: end if
23: Update Pi, Pg, ObjV(Pi), and ObjV(Pg);
24: if ObjV(Pi) is improved then /*Extract useful information from the improved Pi*/
25:     Perform EKE(Pi, Pg, ObjV(Pg), fes);
26: end if
Output: Updated Vi, Xi, ObjV(Xi), Pi, ObjV(Pi), Pg, ObjV(Pg), fes

Figure 6.3: DTA module of the proposed PSO-DLTA.

Specifically, the ITA module first generates two trial position vectors, Pi^trial1 and Pi^trial2, by performing a neighborhood search around the personal best position Pi of particle i as follows:

P_i^{trial1} =
\begin{cases}
r_{12} P_i + r_{13} P_{b_1}^{elite} + r_{14}\left(P_{b_2}^{non\text{-}elite} - P_{b_3}^{non\text{-}elite}\right), & ObjV\left(P_{b_2}^{non\text{-}elite}\right) \leq ObjV\left(P_{b_3}^{non\text{-}elite}\right) \\
r_{15} P_i + r_{16} P_{b_1}^{elite} + r_{17}\left(P_{b_3}^{non\text{-}elite} - P_{b_2}^{non\text{-}elite}\right), & \text{otherwise}
\end{cases}
\qquad (6.9)


nonelite elite elite elite   r20 ( Pbelite r18 Pi  r19 Pb 4 5  Pb 6 ), ObjV ( Pb5 )  ObjV ( Pb 6 ) Pitrial 2   nonelite elite   r23 ( Pbelite 6  Pb5 ), otherwise r21 Pi  r22 Pb 4

(6.10)

where r12 to r23 are random numbers in the range of [0, 1], with r12 + r13 + r14 = 1, r15 + r16 + r17 = 1, r18 + r19 + r20 = 1, and r21 + r22 + r23 = 1; b1, b5, and b6 are the indices of particles selected from the elite group, whereas b2, b3, and b4 are the indices of particles selected from the non-elite group. Equation (6.9) is more exploitative because it guides Pi towards the fitter particles (i.e., Pelite). In contrast, Equation (6.10) leads Pi towards the particles with inferior fitness (i.e., Pnon-elite) and is therefore more explorative. Unlike the DTA module, the ITA module assigns the same search task to particle i in all dimensional components, as indicated by Equations (6.9) and (6.10). The objective function values of both trial position vectors [i.e., ObjV(Pi^trial1) and ObjV(Pi^trial2)] are then evaluated and compared with those of particle i [i.e., ObjV(Pi)] and the global best particle Pg [i.e., ObjV(Pg)]. Similar to the DTA module, the search mechanisms proposed in the ITA module [i.e., Equations (6.9) and (6.10)] are stochastic and therefore do not always guarantee improvement of ObjV(Pi) and ObjV(Pg). If ObjV(Pi^trial1) or ObjV(Pi^trial2) is smaller (i.e., better) than ObjV(Pi) and ObjV(Pg), the best position vector found among Pi^trial1 and Pi^trial2 replaces both Pi and Pg. As in the DTA module, if the fitness of Pi is improved by the ITA module, the EKE module in Figure 5.8 is triggered to extract the useful information from the updated Pi to refine Pg. On the other hand, if both trial position vectors Pi^trial1 and Pi^trial2 fail to improve the fitness values of Pi and Pg, the relocation search [as described in Equation (6.3)] is performed on the Pg particle. This relocation search aims to push the Pg particle to a potentially better region via the perturbation mechanism.
If another better region is found by the perturbed Pg particle, the rest of the swarm will follow it to jump out and converge to the new promising region. Unlike the DTA module, a greedy selection is applied in the ITA module, i.e., the perturbed Pg particle, Pg^per, replaces the original Pg particle only if the former has better fitness than the latter. The implementation of the ITA module is presented in Figure 6.4.
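Equations (6.9) and (6.10) can be sketched directly. The index sampling, the way the three coefficients are made to sum to one, and the helper names are assumptions for illustration; `objv` stands for the objective function, while `elites` and `non_elites` hold the personal best positions of the two groups.

```python
import random

def rand_simplex3():
    # three random coefficients in [0, 1] that sum to 1
    a, b = sorted((random.random(), random.random()))
    return a, b - a, 1.0 - b

def ita_trials(p_i, elites, non_elites, objv):
    """Generate the two ITA trial vectors of Equations (6.9) and (6.10)."""
    b1, b5, b6 = random.sample(range(len(elites)), 3)
    b2, b3, b4 = random.sample(range(len(non_elites)), 3)
    D = len(p_i)

    # Equation (6.9): guided by an elite exemplar (more exploitative)
    r1, r2, r3 = rand_simplex3()
    e1, n2, n3 = elites[b1], non_elites[b2], non_elites[b3]
    if objv(n2) <= objv(n3):
        trial1 = [r1 * p_i[d] + r2 * e1[d] + r3 * (n2[d] - n3[d]) for d in range(D)]
    else:
        trial1 = [r1 * p_i[d] + r2 * e1[d] + r3 * (n3[d] - n2[d]) for d in range(D)]

    # Equation (6.10): guided by a non-elite exemplar (more explorative)
    r4, r5, r6 = rand_simplex3()
    n4, e5, e6 = non_elites[b4], elites[b5], elites[b6]
    if objv(e5) <= objv(e6):
        trial2 = [r4 * p_i[d] + r5 * n4[d] + r6 * (e5[d] - e6[d]) for d in range(D)]
    else:
        trial2 = [r4 * p_i[d] + r5 * n4[d] + r6 * (e6[d] - e5[d]) for d in range(D)]
    return trial1, trial2
```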

Algorithm 4: ITA_module(Pi, ObjV(Pi), Pg, ObjV(Pg), fes, Pelite, Pnon-elite)
Input: Particle i's personal best position (Pi) and the corresponding objective function value [ObjV(Pi)], global best position (Pg) and the corresponding objective function value [ObjV(Pg)], number of fitness evaluations consumed (fes), elite group (Pelite), non-elite group (Pnon-elite)
1:  Calculate the trial position vectors Pi^trial1 and Pi^trial2 with Equations (6.9) and (6.10), respectively;
2:  Perform the fitness evaluation on Pi^trial1 and Pi^trial2;
3:  fes = fes + 2;
4:  Update Pi, Pg, ObjV(Pi), and ObjV(Pg);
5:  if ObjV(Pi) is improved then /*Extract useful information from the improved Pi*/
6:      Perform EKE(Pi, Pg, ObjV(Pg), fes);
7:  else /*Perform relocation search on Pg*/
8:      Randomly select a dimension dr;
9:      Perform the relocation search on Pg at the dr-th dimension using Equation (6.3);
10:     Perform the fitness evaluation on the perturbed Pg, i.e., Pg^per;
11:     fes = fes + 1;
12:     if ObjV(Pg^per) < ObjV(Pg) then
13:         Pg = Pg^per; ObjV(Pg) = ObjV(Pg^per);
14:     end if
15: end if
Output: Updated Pi, ObjV(Pi), Pg, ObjV(Pg), fes

Figure 6.4: ITA module of the proposed PSO-DLTA.

6.2.5 Complete Framework of PSO-DLTA

By integrating the algorithmic modules described in the previous subsections, the complete implementation of the proposed PSO-DLTA is illustrated in Figure 6.5. Notably, the complete framework of PSO-DLTA is similar to the learning methodologies of TPLPSO (Figure 3.5) and PSO-ATVTC (Figure 5.11) because the alternative learning phase of these algorithms is invoked only when a particle fails to update its personal best fitness during the previous learning phase. This is because the proposed algorithm considers that when a PSO-DLTA particle successfully updates its fitness via the DTA module, the particle is on the right track in locating the promising solution regions of a given problem. Considering that the PSO-DLTA particle is likely to maintain its promising trajectory if the DTA module successfully improves the particle's solution in consecutive iterations, the ITA module can be omitted to conserve the algorithm's computation cost. Another noteworthy observation is that, by inspecting Equations (6.4) to (6.7), it can be seen that the DTA module introduced in Section 6.2.3 is employed to update the


PSO-DLTA
Input: Population size (S), dimensionality of the problem space (D), objective function (F), initialization domain (RG), problem's accuracy level (ε), maximum number of fitness evaluations (FEmax)
1:  Generate the initial swarm and set up the parameters of each particle;
2:  while fes < FEmax do
3:      for each particle i do
4:          Identify the elite group Pelite and non-elite group Pnon-elite from the population;
5:          Execute the DTA_module(Vi, Xi, ObjV(Xi), Pi, ObjV(Pi), Pg, ObjV(Pg), fes, Pelite, Pnon-elite);
6:          if ObjV(Pi) is not improved then
7:              Execute the ITA_module(Pi, ObjV(Pi), Pg, ObjV(Pg), fes, Pelite, Pnon-elite);
8:          end if
9:      end for
10: end while
Output: The best found solution, i.e., the global best particle's position (Pg)

Figure 6.5: Complete framework of the PSO-DLTA.

current velocity and position values of particle i. Meanwhile, the ITA module, as proposed in Section 6.2.4, focuses on evolving the self-cognitive experience of particle i according to Equations (6.9) and (6.10). Based on these observations, it can be deduced that the complete framework of PSO-DLTA is inherited from the two-layer evolution framework of ATLPSO-ELS (see Section 4.22). Specifically, the DTA module and the ITA module of PSO-DLTA can be considered as the current swarm evolution and the memory swarm evolution of the algorithm, respectively.
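The two-layer control flow of Figure 6.5 reduces to a short loop once the modules are treated as black boxes. The dictionary-based particle representation and the callable parameters below are illustrative; the real modules carry far more state (velocities, elite grouping, EKE calls).

```python
def pso_dlta_sketch(swarm, objv, fe_max, dta_module, ita_module, split_groups):
    """Control-flow sketch of PSO-DLTA (Figure 6.5): the ITA module runs for a
    particle only when the DTA module fails to improve its personal best."""
    fes = 0
    while fes < fe_max:
        for particle in swarm:
            elites, non_elites = split_groups(swarm)
            improved, fes = dta_module(particle, elites, non_elites, objv, fes)
            if not improved:
                fes = ita_module(particle, elites, non_elites, objv, fes)
    return min(swarm, key=lambda p: p["objv_pbest"])["pbest"]
```

With stub modules that merely advance the evaluation counter, the loop terminates once `fes` reaches `fe_max` and returns the best personal best position found.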

6.3 Simulation Results and Discussions

In this section, a set of comprehensive experimental studies is performed to investigate the search performance of the proposed PSO-DLTA. This section is organized as follows. First, the simulation settings of all algorithms involved in the performance evaluation are specified. A parameter sensitivity analysis is subsequently performed to determine the optimal parameter setting of PSO-DLTA. To investigate the effectiveness of PSO-DLTA, extensive comparative studies between the proposed algorithm and its peer variants are then performed by solving the benchmark problems and the real-world problems described in the earlier chapter (see Section 2.41).


6.3.1 Experimental Setup

In this research work, the experimental studies are performed by comparing the proposed PSO-DLTA with six PSO variants on 30 benchmark problems in 50 dimensions (50-D). PSODDS, proposed by Jin et al. (2013), is chosen for comparison because this PSO variant inspired the development of PSO-DLTA. Unlike PSO-DLTA, it assigns different search tasks via the proposed dimension selection techniques. Among the three selection techniques proposed by Jin et al. (2013), the PSO with Distance-based Selection is reported to yield the best search performance, and it is therefore selected as the comparison peer. Meanwhile, FLPSO-QIW, OLPSO-L, and MoPSO are chosen for comparison because the learning strategies of these PSO variants share specific similarities with PSO-DLTA, i.e., they employ non-global best solutions to guide the search. Finally, to investigate the proposed modifications, PSO-DLTA is also compared with APSO and FlexiPSO, which are representative PSO variants developed from the parameter adaptation and modified topology approaches, respectively.

Table 6.1: Parameter settings of the involved PSO variants

Algorithm                      Population topology              Parameter settings
APSO (Zhan et al., 2009)       Fully-connected                  ω: 0.9–0.4; c1 + c2: [3.0, 4.0]; σ: 1.0–0.1; δ ∈ [0.05, 0.1]
FLPSO-QIW (Tang et al., 2011)  Comprehensive learning           ω1 = 0.9, ω2 = 0.2, č1 = č2 = 1.5, ĉ1 = 2.0, ĉ2 = 1.0, m = 1, Pi ∈ [0.1, 1], K1 = 0.1, K2 = 0.001, σ1 = 1, σ2 = 0
FlexiPSO (Kathrada, 2009)      Fully-connected and local ring   ω: 0.5–0.0; c1, c2, c3: [0.0, 2.0]; ε = 0.1; α = 0.01%
OLPSO-L (Zhan et al., 2011)    Orthogonal learning              ω: 0.9–0.4, c = 2.0, G = 5
MoPSO (Beheshti et al., 2013)  Fully-connected                  No parameters are involved
PSODDS (Jin et al., 2013)      Fully-connected                  χ = 0.7298, c1 = c2 = 2.05
PSO-DLTA                       Fully-connected and local ring   ω: 0.9–0.4, c1 = c2 = 2.0, Z = 8

The parameter settings of all involved algorithms, as summarized in Table 6.1, are inherited from their original publications. The simulation results reported in Tables A1, A2, and A4 to A6 verify that these settings are optimized. For PSO-DLTA, a parameter sensitivity analysis is performed in the following subsection to study the impact of the parameter Z on the algorithm's optimization capability. All involved PSO variants are run independently 30 times to reduce random discrepancy. As explained in Section 3.3.1, the maximum number of fitness evaluations (FEmax) and the population size (S) of all tested algorithms are set to 3.00E+05 and 30, respectively.

6.3.2 Parameter Sensitivity Analysis

As described in the DTA module (Section 6.2.3), particle i performs the relocation search if it is significantly similar to the global best particle Pg (i.e., when the condition m_distancei ≤ 1.00E-Z is met). By studying the algorithmic framework of PSO-DLTA, it can be deduced that the parameter Z determines the permissible similarity between the two compared particles before the relocation search is invoked. In other words, the parameter Z governs the rate at which extra diversity is introduced into the PSO-DLTA population, which could potentially affect the algorithm's search behavior. In this section, a parameter sensitivity analysis is performed to answer the following two questions: (1) how does the parameter Z influence the search behavior of PSO-DLTA, and (2) how is the parameter Z best set? The experimental setup of the parameter sensitivity analysis is as follows. The impact of Z on the proposed PSO-DLTA is investigated using ten benchmark problems with different characteristics, i.e., the functions F4, F7, F13, F14, F15, F19, F23, F26, F27, and F30. These problems are solved by PSO-DLTA using integer values of Z from 1 to 10. Each value of Z is run 30 times, and the simulations are performed in three different dimensions (i.e., 10-D, 30-D, and 50-D) to investigate whether the optimal setting of Z varies with the dimensionality of the search space. The search accuracy of PSO-DLTA (represented by the mean error value Emean) with different values of
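The relocation trigger governed by Z can be written as a one-line threshold test. Equations (6.1) and (6.2) are defined earlier in the thesis and are not reproduced here, so the per-dimension absolute distance and its mean used below are assumed forms for illustration only.

```python
def needs_relocation(x_i, p_g, Z=8):
    """Rule 1 trigger of the DTA module: relocate particle i when it is
    nearly identical to the global best Pg (m_distance <= 1.00E-Z)."""
    distances = [abs(g - x) for g, x in zip(p_g, x_i)]   # assumed form of Eq. (6.1)
    m_distance = sum(distances) / len(distances)         # assumed form of Eq. (6.2)
    return m_distance <= 10.0 ** (-Z)
```

A larger Z shrinks the threshold 10^-Z, so the particle must be far more similar to Pg before the relocation search fires.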

Table 6.2: Effects of the parameter Z on PSO-DLTA in 10-D

Value of Z    F15         F19         F23         F26         F27         F30
1             3.79E-15    8.76E-10    6.78E-01    4.16E-01    1.89E+02    7.82E+02
2             7.58E-15    9.44E-05    5.73E-01    4.32E-01    1.23E+02    7.93E+02
3             1.89E-15    1.71E-14    5.60E-01    3.92E-01    1.58E+02    7.98E+02
4             0.00E+00    4.23E-09    5.97E-01    4.80E-01    1.28E+02    8.34E+02
5             1.89E-15    3.38E-05    6.84E-01    4.27E-01    1.34E+02    8.44E+02
6             1.89E-15    3.13E-07    7.27E-01    4.59E-01    1.38E+02    7.70E+02
7             0.00E+00    1.22E-04    7.12E-01    4.20E-01    1.15E+02    7.99E+02
8             1.89E-15    5.68E-15    5.63E-01    3.35E-01    1.10E+02    7.48E+02
9             0.00E+00    7.58E-15    6.02E-01    4.06E-01    1.57E+02    8.06E+02
10            0.00E+00    3.53E-05    6.62E-01    4.97E-01    1.76E+02    8.41E+02

Table 6.3: Effects of the parameter Z on PSO-DLTA in 30-D

Value of Z    F15         F19         F23         F26         F27         F30
1             1.58E-07    2.66E-04    2.83E-02    1.02E+00    3.25E+02    9.08E+02
2             1.44E-08    9.03E-05    4.06E-02    1.14E+00    2.48E+02    9.12E+02
3             9.45E-10    7.95E-04    2.23E-02    1.21E+00    2.48E+02    9.00E+02
4             1.92E-10    3.32E-04    2.51E-02    1.29E+00    4.25E+02    9.02E+02
5             3.27E-10    1.48E-04    2.93E-02    1.24E+00    3.73E+02    9.09E+02
6             1.87E-10    4.18E-05    4.25E-02    1.04E+00    2.53E+02    9.07E+02
7             3.80E-10    2.05E-07    2.16E-02    1.34E+00    2.47E+02    9.02E+02
8             9.80E-11    3.97E-08    2.11E-02    9.30E-01    2.42E+02    9.00E+02
9             3.95E-10    4.60E-08    2.20E-02    1.03E+00    2.63E+02    9.04E+02
10            5.36E-10    9.03E-08    3.56E-02    1.05E+00    3.04E+02    9.09E+02

Table 6.4: Effects of the parameter Z on PSO-DLTA in 50-D

Value of Z    F15         F19         F23         F26         F27         F30
1             2.46E-07    1.54E-06    1.63E-02    1.74E+00    3.27E+02    9.00E+02
2             3.14E-08    6.92E-07    3.40E-02    2.07E+00    2.60E+02    9.00E+02
3             9.02E-09    8.57E-08    1.95E-02    1.95E+00    2.38E+02    9.00E+02
4             1.44E-08    1.74E-07    1.34E-02    1.71E+00    3.35E+02    9.00E+02
5             1.28E-08    1.09E-07    2.04E-02    1.61E+00    3.11E+02    9.00E+02
6             1.25E-08    1.85E-07    1.63E-02    1.59E+00    2.86E+02    9.00E+02
7             8.66E-09    5.99E-08    1.17E-02    1.56E+00    2.53E+02    9.00E+02
8             9.15E-09    5.62E-08    1.19E-02    1.50E+00    2.11E+02    9.00E+02
9             1.08E-08    6.57E-08    1.35E-02    1.55E+00    2.29E+02    9.00E+02
10            1.88E-08    9.10E-08    2.12E-02    1.63E+00    2.94E+02    9.00E+02

Z in 10-D, 30-D, and 50-D is reported in Tables 6.2, 6.3, and 6.4, respectively. The best result for each tested problem is indicated in boldface text. As shown in Tables 6.2 to 6.4, the simulation results of functions F4, F7, F13, and F14 are omitted because the proposed PSO-DLTA successfully locates the global optima or near-global optima of these functions regardless of the chosen value of Z. It can thus be deduced that the search accuracy of PSO-DLTA in solving the conventional and rotated problems is relatively insensitive to the parameter Z. On the other hand, it is evident that the parameter Z influences the Emean values of PSO-DLTA in solving the shifted problems (i.e., F15 and F19),

the complex problems (i.e., F23 and F26), and the composition problems (i.e., F27 and F30). Specifically, it can be observed from Tables 6.2 to 6.4 that the search accuracies of PSO-DLTA in these problems are slightly compromised when the value of Z is set too high (i.e., Z = 9, 10) or too low (i.e., Z = 1 to 5). These search behaviors of PSO-DLTA are rationalized as follows. When the value of Z is set too high, the PSO-DLTA particle tends to be stuck in the inferior regions of the search space for a long time before it meets the condition to perform the relocation search. In other words, the DTA module fails to provide sufficient diversity to assist the PSO-DLTA particle in escaping from the inferior regions of the search space if an excessively high value of Z is selected. On the other hand, it can be conjectured that the relocation search in the DTA module tends to be overemphasized when the value of Z is set too low. As a result, the PSO-DLTA particle is forced to escape from the best solution found so far (i.e., the global best particle Pg) before it manages to converge towards sufficiently promising regions of the search space. This extreme scenario inevitably degrades the search accuracy of PSO-DLTA, considering that a low value of Z inhibits the convergence of PSO-DLTA towards the optimal region of the search space. Finally, it can be observed from Tables 6.2 to 6.4 that the search accuracies of PSO-DLTA in solving the functions F15, F19, F23, F26, F27, and F30 are compelling when the parameter Z is set to 8 in 10-D, 30-D, and 50-D. Based on these experimental findings, it can be concluded that the search performance of PSO-DLTA with the mentioned optimal parameter setting (i.e., Z = 8) is robust towards changes in (1) the search space's dimensionality and (2) the fitness landscape's characteristics.
The outcomes of this parameter sensitivity analysis suggest that the parameter Z of PSO-DLTA can be set to 8 in the following performance evaluations.

6.3.3 Comparison of PSO-DLTA with Other Well-Established PSO Variants

The simulation results of all tested algorithms are reported in this section. Specifically, the results of mean error (Emean), standard deviation (SD), and the Wilcoxon test (h) are

summarized in Table 6.5, whereas the complete analyses of success rate (SR) and success performance (SP) are presented in Table 6.9. Boldface text in the tables indicates the best results among the algorithms. The SR and SP values of functions F3, F10, F16, and F24 to F30 are discarded in Table 6.9, given that none of the involved PSO variants are able to solve these functions within the predefined accuracy level  in at least one run. Please refer to Section 3.3.3 for the definitions of w/t/l, #BME, +/=/-, #S/#PS/#NS, and #BSP as shown in Tables 6.5 and 6.9.

6.3.3(a) Comparison of the Mean Error Results

From Table 6.5, it is evident that PSO-DLTA has the most superior search accuracy because the Emean values produced by the proposed algorithm outperform those of its peers in the majority of the employed benchmarks. Specifically, PSO-DLTA achieves 21 best Emean values out of the 30 tested functions, i.e., 1.50 times that of the second-ranked MoPSO. On the other hand, PSODDS is identified as the worst optimizer, considering that the Emean values produced by this algorithm are the worst in almost all tested problems. Table 6.5 identifies both the proposed PSO-DLTA and MoPSO as the best optimizers for the conventional (F1 to F8) and rotated (F9 to F14) problems. Specifically, PSO-DLTA and MoPSO successfully find the global optima or near-global optima of most conventional and rotated problems, except for functions F3 and F10. For the conventional problems, the proposed PSO-DLTA and MoPSO are the only two algorithms able to solve functions F4, F5, and F7 with Emean = 0.00E+00. The search accuracies of FLPSO-QIW and OLPSO-L in solving the conventional problems are also competitive because they manage to locate at least three global optima or near-global optima of the tested problems. For instance, FLPSO-QIW solves functions F1, F7, and F8 with promising Emean values, whereas OLPSO-L successfully finds the global optima or near-global optima of functions F1, F6, F7, and F8. Although the majority of the PSO variants exhibit promising search accuracy in solving the conventional problems, Table 6.5 reports that most of these algorithms fail to maintain

Table 6.5: The Emean, SD, and Wilcoxon test (h) results of PSO-DLTA and six compared PSO variants for the 50-D benchmark problems

           APSO       FLPSO-QIW  FlexiPSO   OLPSO-L    MoPSO      PSODDS     PSO-DLTA
F1  Emean  2.50E-01   2.90E-81   1.78E-04   4.86E-33   0.00E+00   7.72E+04   6.44E-72
    SD     1.81E-01   5.97E-81   5.23E-05   5.15E-33   0.00E+00   9.26E+03   1.17E-71
    h      +          -          +          +          -          +
F2  Emean  1.46E+03   2.62E+02   1.42E+00   5.71E+02   0.00E+00   6.21E+04   9.13E-96
    SD     4.82E+02   8.90E+01   6.67E-01   1.85E+02   0.00E+00   2.36E+04   2.65E-95
    h      +          +          +          +          -          +
F3  Emean  4.62E+01   4.22E+01   4.48E+01   4.30E+01   4.90E+01   6.99E+03   4.09E+01
    SD     1.53E+00   2.39E-01   1.04E+00   3.18E+00   2.55E-02   1.06E+03   8.45E-01
    h      +          +          +          +          +          +
F4  Emean  5.80E-01   2.60E+00   2.12E-04   3.32E-01   0.00E+00   5.36E+02   0.00E+00
    SD     6.29E-01   1.52E+00   6.24E-05   6.03E-01   0.00E+00   8.97E+01   0.00E+00
    h      +          +          +          +          =          +
F5  Emean  3.60E-02   5.58E+00   2.07E-04   1.17E+00   0.00E+00   5.39E+02   0.00E+00
    SD     3.22E-02   2.36E+00   7.51E-05   1.15E+00   0.00E+00   5.15E+01   0.00E+00
    h      +          +          +          +          =          +
F6  Emean  1.70E-01   5.75E-04   8.34E-03   0.00E+00   0.00E+00   6.95E+02   0.00E+00
    SD     8.21E-02   2.21E-03   9.48E-03   0.00E+00   0.00E+00   1.04E+02   0.00E+00
    h      +          +          +          =          =          +
F7  Emean  6.60E-02   3.43E-14   3.55E-03   5.09E-15   0.00E+00   2.00E+01   0.00E+00
    SD     2.57E-02   1.07E-14   5.36E-04   1.79E-15   0.00E+00   2.69E-01   0.00E+00
    h      +          +          +          +          =          +
F8  Emean  5.44E-01   1.88E-05   1.12E-01   0.00E+00   0.00E+00   7.68E+01   0.00E+00
    SD     1.88E-01   8.29E-05   1.16E-02   0.00E+00   0.00E+00   3.02E+00   0.00E+00
    h      +          +          +          =          =          +
F9  Emean  1.26E+03   2.62E+02   4.92E+00   1.92E+03   0.00E+00   7.15E+04   7.06E-95
    SD     3.22E+02   7.62E+01   3.67E+00   4.17E+02   0.00E+00   2.50E+04   1.61E-94
    h      +          +          +          +          -          +
F10 Emean  5.15E+01   4.55E+01   4.59E+01   4.24E+01   4.90E+01   7.43E+03   4.15E+01
    SD     1.39E+01   3.16E+00   3.60E+00   3.73E+00   3.25E-02   1.33E+03   3.67E+00
    h      +          +          +          =          +          +
F11 Emean  1.83E+02   1.26E+02   1.49E+02   9.80E+01   0.00E+00   8.23E+02   0.00E+00
    SD     5.61E+01   1.76E+01   3.42E+01   5.16E+01   0.00E+00   6.14E+01   0.00E+00
    h      +          +          +          +          =          +
F12 Emean  2.59E+02   1.28E+02   2.16E+02   1.78E+02   0.00E+00   8.45E+02   0.00E+00
    SD     6.15E+01   2.13E+01   8.26E+01   4.94E+01   0.00E+00   5.45E+01   0.00E+00
    h      +          +          +          +          =          +
F13 Emean  2.10E+02   1.52E+00   2.67E+02   7.58E-01   0.00E+00   2.37E+03   0.00E+00
    SD     1.01E+02   5.39E-01   9.17E+01   2.68E-01   0.00E+00   5.94E+02   0.00E+00
    h      +          +          +          +          =          +
F14 Emean  6.32E+01   4.86E+01   6.60E+01   4.58E+01   0.00E+00   6.32E+01   0.00E+00
    SD     4.24E+00   3.40E+00   4.59E+00   4.77E+00   0.00E+00   5.15E+00   0.00E+00
    h      +          +          +          +          =          +
F15 Emean  2.27E-01   1.44E-13   3.65E-04   5.68E-14   1.33E+05   6.93E+04   8.66E-09
    SD     9.70E-02   4.15E-14   6.12E-04   0.00E+00   6.32E+03   1.87E+04   7.28E-09
    h      +          -          +          -          +          +
F16 Emean  1.08E+03   6.97E+02   3.48E+02   9.30E+02   3.30E+05   9.90E+04   1.43E-03
    SD     5.18E+02   1.66E+02   6.90E+02   3.21E+02   9.37E+04   3.96E+04   1.11E-03
    h      +          +          +          +          +          +
F17 Emean  1.97E+03   1.05E+02   1.85E+02   1.33E+01   5.99E+10   4.68E+10   1.33E-01
    SD     3.83E+03   4.86E+01   4.26E+02   1.94E+01   2.87E+09   2.01E+10   7.28E-01
    h      +          +          +          +          +          +
F18 Emean  5.92E-01   5.88E+00   2.06E-04   1.43E+00   7.54E+02   5.20E+02   1.12E-08
    SD     7.76E-01   2.51E+00   6.95E-05   1.10E+00   2.29E+01   6.56E+01   9.64E-09
    h      +          +          +          +          +          +
F19 Emean  7.20E-03   1.20E+01   2.06E-04   3.00E+00   6.69E+02   5.75E+02   5.62E-08
    SD     1.06E-02   3.16E+00   6.44E-05   1.78E+00   5.01E+01   1.14E+02   8.07E-08
    h      +          +          +          +          +          +
F20 Emean  0.00E+00   2.05E-03   2.76E+02   1.47E-01   3.78E+03   1.10E+03   0.00E+00
    SD     0.00E+00   3.49E-03   3.86E+02   1.07E-01   2.91E+02   4.43E+02   0.00E+00
    h      =          +          +          +          +          +

Table 6.5 (Continued)

           APSO       FLPSO-QIW  FlexiPSO   OLPSO-L    MoPSO      PSODDS     PSO-DLTA
F21 Emean  6.04E-02   2.19E-13   3.84E-03   8.05E-14   2.05E+01   2.03E+01   1.09E-09
    SD     1.71E-02   5.74E-14   5.06E-04   1.51E-14   1.93E-01   6.53E-01   3.32E-09
    h      +          -          +          -          +          +
F22 Emean  8.79E-01   3.87E-02   1.06E+01   4.85E-02   7.98E+01   6.07E+01   1.43E-05
    SD     5.86E-01   7.38E-02   3.88E+00   1.48E-01   2.72E+00   4.63E+00   3.20E-05
    h      +          +          +          +          +          +
F23 Emean  1.49E+00   6.02E-03   1.52E-01   3.76E-02   5.87E+03   2.96E+03   1.19E-02
    SD     1.47E-01   1.04E-02   7.18E-02   4.00E-02   3.05E+02   8.13E+02   1.49E-02
    h      +          -          +          +          +          +
F24 Emean  2.07E+01   2.11E+01   2.05E+01   2.12E+01   2.12E+01   2.11E+01   2.10E+01
    SD     1.65E-01   4.16E-02   1.02E-01   5.06E-02   2.31E-02   5.65E-02   7.68E-02
    h      -          +          -          +          +          +
F25 Emean  1.32E+07   1.89E+07   1.32E+07   1.82E+07   6.04E+09   8.42E+08   5.01E+06
    SD     4.09E+06   4.92E+06   8.09E+06   5.14E+06   1.66E+09   4.80E+08   1.78E+06
    h      +          +          +          +          +          +
F26 Emean  4.13E+00   4.01E+00   2.25E+00   2.98E+00   9.28E+02   8.79E+05   1.50E+00
    SD     1.19E+00   1.46E+00   7.43E-01   8.26E-01   1.14E+02   5.59E+05   3.04E-01
    h      +          +          +          +          +          +
F27 Emean  4.60E+02   1.78E+02   5.00E+02   1.38E+02   1.19E+03   5.89E+02   2.11E+02
    SD     7.79E+01   1.39E+02   1.15E+02   7.31E+01   1.50E+02   1.19E+02   7.13E+01
    h      +          -          +          -          +          +
F28 Emean  5.13E+02   1.79E+02   5.78E+02   2.19E+02   1.16E+03   7.01E+02   2.34E+02
    SD     8.95E+01   8.40E+01   1.01E+02   9.83E+01   1.32E+02   1.33E+02   1.83E+02
    h      +          =          +          =          +          +
F29 Emean  1.10E+03   9.22E+02   1.18E+03   9.48E+02   9.00E+02   1.13E+03   9.00E+02
    SD     9.90E+01   3.40E+01   1.03E+02   8.46E+00   0.00E+00   5.86E+01   0.00E+00
    h      +          +          +          +          =          +
F30 Emean  1.08E+03   9.34E+02   1.16E+03   9.54E+02   9.00E+02   1.12E+03   9.00E+02
    SD     9.29E+01   1.05E+01   1.08E+02   8.84E+00   0.00E+00   5.52E+01   0.00E+00
    h      +          +          +          +          =          +
#BME       1          2          1          5          14         0          21
w/t/l      28/1/1     24/0/6     29/0/1     24/2/4     16/11/3    30/0/0
+/=/-      28/1/1     24/1/5     29/0/1     23/4/3     16/11/3    30/0/0

the same level of performance in the rotated problems with non-separable characteristics. For example, OLPSO-L is unable to locate the global optima of the rotated Griewank function (F13) and the rotated Weierstrass function (F14), although it successfully solves the conventional ones (F6 and F7) with Emean = 0.00E+00. On the other hand, both PSO-DLTA and MoPSO sustain their excellent search accuracies in the rotated problems by successfully locating the global optima or near-global optima of functions F9 and F11 to F14. These observations suggest that the search mechanisms employed by these two PSO variants are resilient towards the challenging fitness landscapes of rotated search spaces. Similar performance deteriorations can also be observed in the shifted problems (F15 to F22) because the majority of the tested algorithms fail to locate the global optima or near-global optima of this problem category, according to Table 6.5. Among the eight shifted problems,


the functions F15, F20, and F21 are considered relatively easier to solve because some tested algorithms, such as APSO, FLPSO-QIW, OLPSO-L, and PSO-DLTA, manage to produce competitive Emean values on these problems. It is important to emphasize that the proposed PSO-DLTA exhibits the best robustness towards the shifting operation, considering that it produces six best Emean and two third-best Emean values in the eight shifted problems. Specifically, PSO-DLTA is the only algorithm that successfully solves the shifted functions F16, F17, F18, F19, and F22 at the accuracy levels of 10^-3, 10^-1, 10^-8, 10^-8, and 10^-5, respectively. Finally, Table 6.5 reports further performance plunges suffered by all involved algorithms in solving the complex problems (F23 to F26) and the composition problems (F27 to F30). As explained in the earlier chapters, the inclusion of the rotating and shifting operations (F23 to F25), the expanded mechanism (F26), or the composition operation (F27 to F30) into the conventional problems tremendously increases the problems' difficulties and thus imposes greater challenges on the tested algorithms in searching for the global optima of these problem categories. Among all involved PSO variants, the proposed PSO-DLTA is least susceptible to the aforementioned modifications, and it exhibits the most competitive search accuracy in solving the complex and composition problems. Specifically, it produces four best Emean, one second-best Emean, and three third-best Emean values in the eight tested problems. One interesting observation from Table 6.5 is that, despite having the best ranks in the conventional and rotated problems, the search accuracy of MoPSO is severely compromised when it is applied to the shifted, complex, and composition problems. As reported in Table 6.5, none of the Emean values produced by MoPSO in these problem categories outperform those of PSO-DLTA.
Moreover, despite achieving the best ranks in the conventional and rotated problems, MoPSO produces the worst ranks in all shifted, complex, and composition problems, except for functions F29 and F30. This observation implies that the working mechanisms of MoPSO do not benefit the algorithm in tackling problems with shifted fitness landscapes.

Based on the experimental results reported in Table 6.5, it is concluded that the proposed PSO-DLTA in general outperforms its peer algorithms in terms of search accuracy. Furthermore, the promising values of #BME and w/t/l achieved by PSO-DLTA against its peers in each problem category suggest that the underlying search mechanisms employed by the proposed algorithm are sufficiently robust to handle the modifications imposed on the problems' fitness landscapes. Compared with the second-ranked MoPSO, which performs well only in the conventional and rotated problems, the proposed PSO-DLTA emerges as the superior optimizer because of its capability to solve the benchmark functions in different problem categories with satisfactory accuracy.

6.3.3(b) Comparison of the Non-Parametric Statistical Test Results

Tables 6.5 and 6.6 show the non-parametric Wilcoxon pairwise comparison results between PSO-DLTA and its peers. Specifically, Table 6.5 reveals that the Wilcoxon test results, as represented by the h values, are consistent with the reported Emean values because no significant deviations are observed in the summarized results of w/t/l and +/=/-. The results presented in Table 6.6 further verify the significant performance improvement of PSO-DLTA over its six compared peers in the pairwise comparisons because all p-values obtained from the Wilcoxon test are smaller than α = 0.05. For multiple comparisons (García et al., 2009, Derrac et al., 2011), Table 6.7 first reports the average rankings of all tested algorithms and the associated p-values computed via the Friedman test. Accordingly, PSO-DLTA is the best-performing algorithm because it achieves the smallest average rank of 1.65. Although MoPSO is identified as the PSO variant with the second-best #BME in Table 6.5, it obtains the third-worst rank in Table 6.7. This is because of its significantly poor performance in tackling the shifted, complex, and composition problems. Finally, the Friedman test also strongly suggests that a significant global difference exists among the seven tested algorithms, considering that the p-value reported in Table 6.7 (i.e., 0.00E+00) is smaller than α = 0.05.
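The R+ and R- statistics reported in Table 6.6 come from the Wilcoxon signed-rank procedure applied to paired per-problem results. A small self-contained version (pure Python, zero differences discarded, average ranks for ties) is sketched below; it is a didactic reimplementation, not the exact routine used in the thesis.

```python
def wilcoxon_ranks(errors_a, errors_b):
    """Return the Wilcoxon signed-rank sums (R+, R-) for paired error lists.
    R+ sums the ranks of problems where a - b > 0, R- where a - b < 0."""
    diffs = [a - b for a, b in zip(errors_a, errors_b) if a != b]
    ranked = sorted(diffs, key=abs)
    ranks = {}
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and abs(ranked[j]) == abs(ranked[i]):
            j += 1
        avg = (i + 1 + j) / 2.0          # average of the 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks.setdefault(abs(ranked[k]), avg)
        i = j
    r_plus = sum(ranks[abs(d)] for d in diffs if d > 0)
    r_minus = sum(ranks[abs(d)] for d in diffs if d < 0)
    return r_plus, r_minus
```

A lopsided split such as R+ = 465.0 versus R- = 0.0 (the PSODDS column of Table 6.6) indicates that the control algorithm wins on essentially every problem.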


Table 6.6: Wilcoxon test for the comparison of PSO-DLTA and six other PSO variants

PSO-DLTA vs.   R+       R-      p-value
APSO           427.0    8.0     9.31E-08
FLPSO-QIW      406.0    59.0    1.53E-04
FlexiPSO       453.0    12.0    1.30E-07
OLPSO-L        408.5    56.5    1.17E-04
MoPSO          371.5    63.5    4.75E-04
PSODDS         465.0    0.0     1.86E-09

Table 6.7 Average rankings obtained by the PSO-DLTA and six other PSO variants via the Friedman test (Chi-square statistic = 81.75, p-value = 0.00E+00)

| Algorithm | Average ranking |
| PSO-DLTA  | 1.65 |
| OLPSO-L   | 3.33 |
| FLPSO-QIW | 3.40 |
| FlexiPSO  | 4.17 |
| MoPSO     | 4.37 |
| APSO      | 4.68 |
| PSODDS    | 6.40 |

Table 6.8 Adjusted p-values obtained by comparing the PSO-DLTA with six other PSO variants using the Bonferroni-Dunn, Holm, and Hochberg procedures

| PSO-DLTA vs.      | PSODDS   | APSO     | MoPSO    | FlexiPSO | FLPSO-QIW | OLPSO-L  |
| z                 | 8.52E+00 | 5.44E+00 | 4.87E+00 | 4.51E+00 | 3.14E+00  | 3.02E+00 |
| Unadjusted p      | 0.00E+00 | 0.00E+00 | 1.00E-06 | 6.00E-06 | 1.70E-03  | 2.55E-03 |
| Bonferroni-Dunn p | 0.00E+00 | 0.00E+00 | 7.00E-06 | 3.90E-05 | 1.02E-02  | 1.53E-02 |
| Holm p            | 0.00E+00 | 0.00E+00 | 4.00E-06 | 1.90E-05 | 3.41E-03  | 3.41E-03 |
| Hochberg p        | 0.00E+00 | 0.00E+00 | 4.00E-06 | 1.90E-05 | 2.55E-03  | 2.55E-03 |

Based on the Friedman test results, the Bonferroni-Dunn, Holm, and Hochberg procedures are subsequently employed as post-hoc statistical analyses (García et al., 2009, Derrac et al., 2011) to further identify the concrete differences between the control algorithm (i.e., PSO-DLTA) and its peers. According to Table 6.8, all employed post-hoc procedures confirm the significant improvement of PSO-DLTA over the six compared PSO variants in terms of search accuracy, because all adjusted p-values (APVs) produced are smaller than α = 0.05.
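As an aside on reproducibility, pairwise Wilcoxon signed-rank tests with a Holm step-down correction, together with a Friedman test, can be computed with standard tooling. The sketch below uses SciPy on synthetic error arrays; the data and the peer names are placeholders, not the results reported above.

```python
# Hedged sketch: pairwise Wilcoxon signed-rank tests of a control algorithm
# against its peers, a Holm step-down adjustment of the p-values, and a
# Friedman test for a global difference. All error values are synthetic.
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(0)
control = rng.random(30)                          # mean errors on 30 benchmarks
peers = {name: control + 0.5 * rng.random(30)     # peers are uniformly worse here
         for name in ("peer_A", "peer_B", "peer_C")}

# Unadjusted p-values from the pairwise Wilcoxon signed-rank tests.
pvals = {name: wilcoxon(control, errors).pvalue for name, errors in peers.items()}

# Holm step-down adjustment: sort p-values ascending, scale the i-th smallest
# by (k - i), and enforce monotonicity with a running maximum.
ordered = sorted(pvals.items(), key=lambda kv: kv[1])
k, running_max, holm = len(ordered), 0.0, {}
for i, (name, p) in enumerate(ordered):
    running_max = max(running_max, min(1.0, (k - i) * p))
    holm[name] = running_max

# Friedman test across all algorithms at once.
_, p_friedman = friedmanchisquare(control, *peers.values())
```

The same pattern extends directly to the Hochberg step-up variant by traversing the sorted p-values in the opposite direction.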

6.3.3(c) Comparison of the Success Rate Results

The success rate (SR) analysis reported in Table 6.9 reveals that the proposed PSO-DLTA has the most promising search reliability among the compared algorithms. Specifically, PSO-DLTA completely solves 18 (out of 30) employed benchmarks with SR = 100%. On the other hand, PSODDS is identified as the optimizer with the poorest search reliability,


Table 6.9 The SR and SP values of PSO-DLTA and six compared PSO variants for the 50-D benchmark problems (each cell reports SR (%) / SP)

| Function   | APSO             | FLPSO-QIW         | FlexiPSO          | OLPSO-L           | MoPSO             | PSODDS     | PSO-DLTA          |
| F1         | 0.00 / Inf       | 100.00 / 6.04E+04 | 0.00 / Inf        | 100.00 / 1.52E+05 | 100.00 / 7.94E+03 | 0.00 / Inf | 100.00 / 4.20E+04 |
| F2         | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf        | 100.00 / 8.34E+03 | 0.00 / Inf | 100.00 / 3.81E+04 |
| F4         | 0.00 / Inf       | 6.67 / 3.46E+06   | 100.00 / 9.72E+04 | 73.33 / 2.97E+05  | 100.00 / 5.58E+03 | 0.00 / Inf | 100.00 / 2.10E+04 |
| F5         | 6.67 / 2.60E+06  | 0.00 / Inf        | 100.00 / 9.88E+04 | 40.00 / 6.75E+05  | 100.00 / 5.70E+03 | 0.00 / Inf | 100.00 / 2.08E+04 |
| F6         | 0.00 / Inf       | 100.00 / 5.00E+04 | 60.00 / 1.57E+05  | 100.00 / 1.24E+05 | 100.00 / 5.64E+03 | 0.00 / Inf | 100.00 / 3.12E+04 |
| F7         | 0.00 / Inf       | 100.00 / 4.79E+04 | 100.00 / 1.59E+05 | 100.00 / 1.25E+05 | 100.00 / 6.65E+03 | 0.00 / Inf | 100.00 / 2.46E+04 |
| F8         | 0.00 / Inf       | 100.00 / 6.67E+04 | 0.00 / Inf        | 100.00 / 1.66E+05 | 100.00 / 8.64E+03 | 0.00 / Inf | 100.00 / 5.31E+04 |
| F9         | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf        | 100.00 / 6.04E+03 | 0.00 / Inf | 100.00 / 2.50E+04 |
| F11        | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf        | 100.00 / 5.68E+03 | 0.00 / Inf | 100.00 / 2.37E+04 |
| F12        | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf        | 100.00 / 5.81E+03 | 0.00 / Inf | 100.00 / 2.50E+04 |
| F13        | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf        | 100.00 / 5.93E+03 | 0.00 / Inf | 100.00 / 1.58E+04 |
| F14        | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf        | 100.00 / 1.03E+04 | 0.00 / Inf | 100.00 / 6.62E+04 |
| F15        | 0.00 / Inf       | 100.00 / 5.85E+04 | 0.00 / Inf        | 100.00 / 1.37E+05 | 0.00 / Inf        | 0.00 / Inf | 100.00 / 1.90E+05 |
| F17        | 0.00 / Inf       | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf        | 0.00 / Inf | 96.67 / 1.47E+05  |
| F18        | 0.00 / Inf       | 0.00 / Inf        | 100.00 / 1.04E+05 | 13.33 / 1.27E+06  | 0.00 / Inf        | 0.00 / Inf | 100.00 / 6.43E+04 |
| F19        | 86.67 / 2.51E+05 | 0.00 / Inf        | 100.00 / 1.15E+05 | 6.67 / 3.45E+06   | 0.00 / Inf        | 0.00 / Inf | 100.00 / 8.28E+04 |
| F20        | 100.00 / 1.20E+04 | 100.00 / 4.88E+04 | 56.67 / 2.45E+03 | 10.00 / 1.33E+06  | 0.00 / Inf        | 0.00 / Inf | 100.00 / 4.94E+03 |
| F21        | 0.00 / Inf       | 100.00 / 4.72E+04 | 100.00 / 1.70E+05 | 100.00 / 1.15E+05 | 0.00 / Inf        | 0.00 / Inf | 100.00 / 1.02E+05 |
| F22        | 0.00 / Inf       | 70.00 / 1.03E+05  | 0.00 / Inf        | 90.00 / 1.75E+05  | 0.00 / Inf        | 0.00 / Inf | 100.00 / 1.00E+05 |
| F23        | 0.00 / Inf       | 83.33 / 2.04E+05  | 0.00 / Inf        | 13.33 / 2.20E+06  | 0.00 / Inf        | 0.00 / Inf | 53.33 / 3.63E+05  |
| #S/#PS/#NS | 1/2/27           | 7/3/20            | 6/2/22            | 6/7/17            | 12/0/18           | 0/0/30     | 18/2/10           |
| BSP        | 0                | 2                 | 0                 | 0                 | 12                | 0          | 4                 |

considering that it never solves any of the tested problems. In other words, the SR values produced by PSODDS in all benchmark problems are equal to 0.00%. As reported in Table 6.9, the proposed PSO-DLTA and the MoPSO successfully solve all the conventional problems completely with SR = 100% at the predefined accuracy level ε, except for function F3. The search reliabilities exhibited by FLPSO-QIW, FlexiPSO, and OLPSO-L on the conventional problems are also competitive because these


algorithms can completely solve at least three out of the eight tested problems. Specifically, FLPSO-QIW and OLPSO-L successfully solve functions F1, and F6 to F8, with SR = 100%, and similar observations can be made for FlexiPSO on functions F4, F5, and F7. Meanwhile, the search reliabilities of most tested algorithms degrade when they are employed to solve the rotated problems. Specifically, Table 6.9 reports that the SR values produced by the majority of the PSO variants on the rotated problems are equal to 0.00%, implying that most of these algorithms are unable to completely or partially solve the tested problems. For example, although both FLPSO-QIW and OLPSO-L successfully solve the conventional Griewank function (F6) and the conventional Weierstrass function (F8) with SR = 100%, the search reliabilities of these two algorithms in tackling the rotated Griewank function (F13) and the rotated Weierstrass function (F14) are drastically jeopardized. Unlike most of the tested algorithms, both the proposed PSO-DLTA and the MoPSO successfully maintain their excellent search reliabilities in the rotated search spaces. Specifically, PSO-DLTA and MoPSO are the only two algorithms that manage to completely solve five out of six rotated problems (i.e., functions F9, and F11 to F14) with SR = 100%. A similar performance deterioration of the tested algorithms, in terms of search reliability, can also be observed on the shifted problems, although the SR values obtained by most algorithms on the shifted problems are generally higher (i.e., better) than those on the rotated problems, according to the simulation results in Table 6.9. For instance, APSO, FLPSO-QIW, FlexiPSO, and OLPSO-L are capable of completely or partially solving some shifted functions (e.g., functions F18 to F20, and F22), albeit these PSO variants fail to solve any of the rotated problems within the predefined accuracy level ε.
Meanwhile, the opposite observation can be made for MoPSO, whose search reliability plunges significantly on the shifted problems. Despite its impeccable performance on the conventional and rotated problems, MoPSO never solves (i.e., SR = 0.00%) any of the eight shifted problems within the predefined ε. Unlike its compared peers, the proposed PSO-DLTA retains excellent search reliability on the shifted problems because it successfully solves six out of eight shifted problems (i.e., functions F15, and F18 to F22) with SR = 100%.

Moreover, PSO-DLTA is also the only algorithm that partially solves the shifted function F17, with SR = 96.67%. Finally, Table 6.9 reveals further performance degradation (in terms of search reliability) of all involved algorithms in solving the complex and composition problems. Specifically, none of the tested algorithms is able to completely or partially solve these two problem categories, except for function F23, where PSO-DLTA achieves the second-best SR value of 53.33%. Although PSO-DLTA fails to solve the remaining complex and composition problems, it proves better than its peers by achieving more competitive Emean values in the eight tested problems, as reported in Table 6.5.
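For clarity, the two reliability metrics can be summarized in code. The sketch below follows the common CEC-style definitions (SR as the fraction of runs reaching the accuracy level ε, and SP as the mean fitness-evaluation count of the successful runs scaled by the inverse success rate); the exact formulas used in this thesis may differ in detail, and the run data below are illustrative.

```python
# Hedged sketch of the SR and SP metrics, assuming CEC-style definitions.
# A run is "successful" if it reaches the accuracy level eps within the budget.
import math

def success_metrics(fes_used, solved, total_runs):
    """fes_used[i] = fitness evaluations consumed by run i;
    solved[i] = True if run i reached the target accuracy."""
    n_success = sum(solved)
    sr = 100.0 * n_success / total_runs
    if n_success == 0:
        return sr, math.inf            # SP reported as "Inf" when SR = 0%
    mean_fes = sum(f for f, ok in zip(fes_used, solved) if ok) / n_success
    sp = mean_fes * total_runs / n_success
    return sr, sp

# Example: 30 runs, 15 successful at ~5.0E+04 FEs each -> SR = 50%, SP = 1.0E+05
fes = [5.0e4] * 15 + [3.0e5] * 15
ok = [True] * 15 + [False] * 15
sr, sp = success_metrics(fes, ok, 30)
```

Under this definition, a low SP jointly rewards high reliability and fast convergence, which is why SP degenerates to infinity whenever SR = 0%.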

6.3.3(d) Comparison of the Success Performance Results

The SP values reported in Table 6.9 quantify the computational cost required by an algorithm to solve a particular benchmark within the predefined accuracy level ε. Additionally, Figure 6.6 presents a total of ten representative convergence curves, i.e., two each from the conventional (F6 and F7), rotated (F10 and F12), shifted (F19 and F22), complex (F25 and F26), and composition (F27 and F30) problems, to qualitatively evaluate the convergence speeds of the tested algorithms. For the conventional and rotated problems, it can be observed from Table 6.9 that both the MoPSO and the proposed PSO-DLTA exhibit the most competitive search efficiencies because they achieve the 12 best and 12 second-best SP values, respectively. This implies that, compared with the remaining tested algorithms, both MoPSO and PSO-DLTA require the least computational cost to solve the conventional and rotated problems within the predefined ε. The excellent convergence characteristics of MoPSO and PSO-DLTA are qualitatively affirmed by the convergence curves shown in Figure 6.6. More specifically, the convergence curves of MoPSO and PSO-DLTA for the conventional functions F6 and F7 [illustrated in Figures 6.6(a) and 6.6(b), respectively], as well as the rotated function F12 [illustrated in Figure 6.6(d)], drop off sharply at one point, usually at the early or middle stage of optimization. These observations


Figure 6.6: Convergence curves of 50-D problems: (a) F6, (b) F7, (c) F10, (d) F12, (e) F19, and (f) F22.

suggest the outstanding capability of these algorithms to locate the global optima of the tested problems with a remarkably small number of fitness evaluations (FEs). Although the SP value of function F10 is not available for comparison, the convergence curve of PSO-DLTA



Figure 6.6 (Continued): Convergence curves of 50-D problems: (g) F25, (h) F26, (i) F27, and (j) F30.

in this function [illustrated in Figure 6.6(c)] shows that the convergence speed of the proposed algorithm is competitive against its peers. It is also worth mentioning that the convergence curves of PSO-DLTA for functions F1, F2, F4, F5, F8, F9, F11, F13, and F14 are similar to those shown in Figures 6.6(a), 6.6(b), and 6.6(d), whereas the convergence curve of function F3 is similar to the one reported in Figure 6.6(c). On the other hand, the proposed PSO-DLTA is identified as the most efficient optimizer in solving the shifted problems because it achieves five best and one second-best SP values out of the eight tested problems. The excellent convergence characteristic of PSO-DLTA on the shifted problems is also verified by the convergence curves demonstrated in Figures 6.6(e) and 6.6(f). Specifically, the convergence curves of function


F19 [depicted in Figure 6.6(e)] reveal that PSO-DLTA converges faster than its peers throughout the entire optimization process. It is also worth mentioning that the convergence curves of functions F15, F16, F17, F18, and F21 are similar to that in Figure 6.6(e). Meanwhile, the convergence curves of function F22 [depicted in Figure 6.6(f)] reveal that all tested algorithms, including PSO-DLTA, are trapped in the local optima of the search space during the early and middle stages of optimization. Nevertheless, the proposed PSO-DLTA is the only algorithm that manages to escape from the inferior regions of function F22 in the later stage, and it successfully solves this function with excellent accuracy. Finally, for most complex and composition problems, no SP values are available for comparison, except for function F23, where the proposed PSO-DLTA achieves the second-best SP value. The convergence curves of functions F25 and F26 [represented by Figures 6.6(g) and 6.6(h), respectively] show that the convergence speeds of PSO-DLTA in these two problems are more competitive than those of its peer algorithms, especially during the early stage of optimization. This enables PSO-DLTA to solve these problems with better accuracy because the proposed algorithm locates and exploits the optimal regions of the search space earlier than the other compared algorithms. Meanwhile, the convergence speeds of PSO-DLTA in functions F27 and F30 [represented by Figures 6.6(i) and 6.6(j), respectively] are comparable with those of most of its peers. Notably, the convergence curves of functions F23 and F24 are similar to that in Figure 6.6(g), whereas the convergence curves of functions F28 and F29 are comparable with those illustrated in Figures 6.6(h) to 6.6(j).

6.3.3(e) Comparison of the Algorithm Complexity Results

This subsection investigates the computational complexities of the seven tested algorithms. The AC values produced by all involved algorithms are reported in Table 6.10, which reveals that the proposed PSO-DLTA incurs relatively low computational complexity at D = 50, considering that this algorithm produces the second-best (i.e., second-smallest) AC value among the seven tested algorithms. It is important to emphasize that the AC value recorded by PSO-DLTA is comparable with those of the first-ranked OLPSO-L and other

Table 6.10 AC results of the PSO-DLTA and six other PSO variants at D = 50

| Algorithm | T0       | T1       | T̂2       | AC       |
| APSO      | 1.88E−01 | 4.19E+00 | 1.37E+03 | 7.27E+03 |
| FLPSO-QIW | 1.88E−01 | 4.19E+00 | 2.95E+03 | 1.56E+04 |
| FlexiPSO  | 1.88E−01 | 4.19E+00 | 6.19E+02 | 3.27E+03 |
| OLPSO-L   | 1.88E−01 | 4.19E+00 | 4.60E+02 | 2.42E+03 |
| MoPSO     | 1.88E−01 | 4.19E+00 | 8.80E+02 | 4.66E+03 |
| PSODDS    | 1.88E−01 | 4.19E+00 | 8.43E+02 | 4.46E+03 |
| PSO-DLTA  | 1.88E−01 | 4.19E+00 | 5.29E+02 | 2.79E+03 |

low-complexity PSO variants such as FlexiPSO, MoPSO, and PSODDS. This suggests that the modifications proposed in PSO-DLTA are not more complex than those of the latter four PSO variants, albeit the search performance (i.e., Emean, SR, and SP) achieved by PSO-DLTA significantly outperforms those of OLPSO-L, FlexiPSO, MoPSO, and PSODDS. Although the search performance of FLPSO-QIW outperforms that of PSO-DLTA on some selected benchmarks, the search mechanisms incorporated into the former PSO variant incur much higher computational complexity than the latter, as revealed by their respective AC values. Specifically, the AC value yielded by FLPSO-QIW is 1.56E+04, which is 5.59 times higher than that of PSO-DLTA (i.e., 2.79E+03). Based on the experimental results in Table 6.10, it can be concluded that the proposed PSO-DLTA emerges as a better optimizer than its compared peers. Both the DTA and ITA modules introduced into PSO-DLTA successfully improve the algorithm's search performance without incurring excessively high computational complexity.
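For reference, the AC column of Table 6.10 can be reproduced from the reported timings under the common CEC-style definition AC = (T̂2 − T1)/T0, where T0 times a fixed arithmetic loop, T1 times raw function evaluations, and T̂2 is the mean runtime of the full algorithm over several runs for the same evaluation budget. This definition is an assumption here, since the thesis formally defines AC in an earlier chapter.

```python
# Sketch of the assumed CEC-style algorithm complexity (AC) measure.
def algorithm_complexity(t0, t1, t2_runs):
    """AC = (mean(T2) - T1) / T0, with t2_runs the per-run T2 timings."""
    t2_hat = sum(t2_runs) / len(t2_runs)
    return (t2_hat - t1) / t0

# Reproducing the PSO-DLTA row of Table 6.10 from its reported timings:
ac = algorithm_complexity(1.88e-1, 4.19e0, [5.29e2] * 5)   # ~2.79E+03
```

The computed value matches the tabulated AC of PSO-DLTA to rounding, which supports this reading of the T0/T1/T̂2 columns.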

6.3.4 Effect of Different Proposed Strategies

This subsection investigates the effectiveness of each strategy proposed in PSO-DLTA, namely the DTA module, the ITA module, and the EKE module. Specifically, this experimental study compares the search performance of (1) PSO-DLTA without the ITA module (PSO-DLTA1), (2) PSO-DLTA without the DTA module (PSO-DLTA2), (3) PSO-DLTA without the EKE module (PSO-DLTA3), and (4) the complete PSO-DLTA. For PSO-DLTA1 and PSO-DLTA2, the EKE module is retained in their DTA and ITA modules, respectively. Meanwhile, PSO-DLTA3 is similar to the complete


PSO-DLTA, except that no EKE module is executed when the PSO-DLTA3 particles successfully achieve fitness improvement in the DTA and ITA modules. The comparison results of all the PSO-DLTA variants on each benchmark are reported as Emean and %Improve in Table 6.11. In addition, the simulation results of BPSO and the PSO-DLTA variants in each problem category and overall are summarized as #BME and average %Improve, respectively, in Table 6.12. According to Tables 6.11 and 6.12, the Emean and #BME values produced by all PSO-DLTA variants outperform those of BPSO, which indicates that each of the proposed DTA, ITA, and EKE modules helps to improve the search accuracy of BPSO. Among these PSO-DLTA variants, the complete PSO-DLTA produces the largest average %Improve value, followed by PSO-DLTA3, PSO-DLTA1, and PSO-DLTA2. As shown in Tables 6.11 and 6.12, PSO-DLTA1 performs particularly well on the shifted problems, whereas PSO-DLTA2 exhibits superior search accuracy in solving the rotated problems. Based on these observations, it can be deduced that the combination of the DTA and EKE modules endows PSO-DLTA1 with adequate diversity and thus enables the algorithm to locate the shifted optima in problems with a shifted fitness landscape. Conversely, the combination of the ITA and EKE modules produces the desired rotationally invariant property in PSO-DLTA2, and this hybridization enables PSO-DLTA2 to focus effectively on fitness landscapes with nonseparable characteristics. Meanwhile, the overall search performances of PSO-DLTA3 and the complete PSO-DLTA outperform those of PSO-DLTA1 and PSO-DLTA2, as shown in the simulation results in Tables 6.11 and 6.12. These experimental results suggest that the presence of both the DTA and ITA modules is indispensable for PSO-DLTA3 and PSO-DLTA to solve problems with various types of fitness landscapes.
Finally, it is also notable that the search accuracy of the complete PSO-DLTA in solving all problem categories is superior to that of PSO-DLTA3. The outperformance of PSO-DLTA over PSO-DLTA3 is attributed to the capability of the EKE module to extract useful information from an improved PSO-DLTA particle and then utilize this information to further evolve the current Pg particle. The competitive

Table 6.11 Comparison of the PSO-DLTA variants with BPSO on the 50-D problems (each cell reports Emean (%Improve))

| Function | BPSO         | PSO-DLTA1          | PSO-DLTA2          | PSO-DLTA3          | PSO-DLTA           |
| F1       | 4.67E+03 (-) | 3.85E-65 (100.000) | 1.03E-60 (100.000) | 1.97E-50 (100.000) | 6.44E-72 (100.000) |
| F2       | 2.08E+04 (-) | 1.35E-87 (100.000) | 5.45E-82 (100.000) | 1.03E-64 (100.000) | 9.13E-96 (100.000) |
| F3       | 2.10E+02 (-) | 6.29E+01 (69.984)  | 4.69E+01 (77.619)  | 4.31E+01 (79.432)  | 4.09E+01 (80.482)  |
| F4       | 1.15E+02 (-) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F5       | 1.14E+02 (-) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F6       | 3.92E+01 (-) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F7       | 1.21E+01 (-) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F8       | 8.09E+00 (-) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F9       | 2.57E+04 (-) | 2.94E-01 (99.999)  | 5.80E-77 (100.000) | 1.65E-58 (100.000) | 7.06E-95 (100.000) |
| F10      | 1.08E+02 (-) | 4.79E+01 (55.808)  | 4.60E+01 (57.561)  | 4.41E+01 (59.314)  | 4.15E+01 (61.712)  |
| F11      | 1.70E+02 (-) | 3.01E-02 (99.982)  | 2.73E-89 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F12      | 2.00E+02 (-) | 4.58E-02 (99.977)  | 8.52E-89 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F13      | 2.04E+02 (-) | 3.71E-03 (99.998)  | 1.11E-17 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F14      | 5.80E+01 (-) | 6.28E-01 (98.918)  | 1.53E-76 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F15      | 2.31E+04 (-) | 1.59E-07 (100.000) | 1.98E-02 (100.000) | 7.29E-05 (100.000) | 8.66E-09 (100.000) |
| F16      | 7.72E+04 (-) | 3.84E-02 (100.000) | 3.33E+03 (95.685)  | 1.67E-02 (100.000) | 1.43E-03 (100.000) |
| F17      | 9.59E+09 (-) | 1.58E+00 (100.000) | 7.55E+03 (100.000) | 1.63E+00 (100.000) | 1.33E-01 (100.000) |
| F18      | 2.93E+02 (-) | 1.45E-05 (100.000) | 1.23E-01 (99.958)  | 5.83E-03 (99.998)  | 1.12E-08 (100.000) |
| F19      | 3.14E+02 (-) | 3.06E-05 (100.000) | 9.45E-01 (99.699)  | 8.66E-03 (99.997)  | 5.62E-08 (100.000) |
| F20      | 7.21E+02 (-) | 6.91E-01 (99.904)  | 0.00E+00 (100.000) | 0.00E+00 (100.000) | 0.00E+00 (100.000) |
| F21      | 1.47E+01 (-) | 1.10E-08 (100.000) | 1.70E-01 (98.839)  | 6.80E-05 (100.000) | 1.09E-09 (100.000) |
| F22      | 4.99E+01 (-) | 3.96E-04 (99.999)  | 2.17E-01 (99.566)  | 5.98E-03 (99.988)  | 1.43E-05 (100.000) |
| F23      | 7.05E+02 (-) | 5.30E-01 (99.925)  | 1.02E+00 (99.855)  | 4.12E-01 (99.942)  | 1.19E-02 (99.998)  |
| F24      | 2.10E+01 (-) | 2.10E+01 (0.093)   | 2.10E+01 (0.093)   | 2.10E+01 (0.093)   | 2.10E+01 (0.093)   |
| F25      | 6.11E+08 (-) | 1.08E+07 (98.234)  | 3.06E+07 (94.996)  | 9.59E+06 (98.432)  | 5.01E+06 (99.181)  |
| F26      | 2.64E+04 (-) | 1.80E+01 (99.932)  | 3.58E+01 (99.864)  | 7.56E+00 (99.971)  | 1.50E+00 (99.994)  |
| F27      | 5.35E+02 (-) | 2.85E+02 (46.765)  | 5.00E+02 (6.608)   | 2.43E+02 (54.605)  | 2.11E+02 (60.640)  |
| F28      | 5.59E+02 (-) | 2.89E+02 (48.316)  | 5.38E+02 (3.823)   | 2.76E+02 (50.542)  | 2.34E+02 (58.193)  |
| F29      | 1.07E+03 (-) | 9.78E+02 (8.538)   | 9.90E+02 (7.416)   | 9.36E+02 (12.466)  | 9.00E+02 (15.833)  |
| F30      | 1.07E+03 (-) | 9.89E+02 (7.897)   | 1.01E+03 (5.941)   | 9.58E+02 (10.784)  | 9.00E+02 (16.185)  |

Table 6.12 Summarized comparison results of the PSO-DLTA variants with BPSO in each problem category (each cell reports #BME (average %Improve))

| Problem category          | BPSO  | PSO-DLTA1  | PSO-DLTA2  | PSO-DLTA3   | PSO-DLTA    |
| Conventional (F1 to F8)   | 0 (-) | 5 (96.248) | 5 (97.202) | 5 (97.429)  | 8 (97.563)  |
| Rotated (F9 to F14)       | 0 (-) | 0 (92.447) | 0 (92.927) | 4 (93.219)  | 6 (93.623)  |
| Shifted (F15 to F22)      | 0 (-) | 1 (99.206) | 0 (100.000)| 1 (99.998)  | 8 (100.000) |
| Complex (F23 to F26)      | 0 (-) | 1 (74.546) | 1 (73.702) | 1 (74.609)  | 4 (74.865)  |
| Composition (F27 to F30)  | 0 (-) | 0 (27.879) | 0 (5.947)  | 0 (32.099)  | 4 (37.713)  |
| Overall (F1 to F30)       | 0 (-) | 7 (84.479) | 6 (81.581) | 11 (85.519) | 30 (86.418) |

performance of PSO-DLTA in solving all tested problem categories implies that the three proposed strategies, namely the DTA, ITA, and EKE modules, are integrated effectively into PSO-DLTA. None of the contributions of these strategies is compromised when PSO-DLTA is used to solve different types of problems.
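The %Improve metric used throughout this ablation study is simply the relative reduction of a variant's mean error against the BPSO baseline; a minimal sketch follows (the sample values are taken from Table 6.11).

```python
# Minimal sketch of the %Improve metric of Tables 6.11 and 6.12:
# relative reduction of a variant's Emean against the BPSO baseline.
def percent_improve(e_baseline, e_variant):
    return 100.0 * (e_baseline - e_variant) / e_baseline

# e.g. BPSO's Emean of 5.35E+02 on F27 versus PSO-DLTA's 2.11E+02:
gain = percent_improve(5.35e2, 2.11e2)   # ~60.6%, consistent with Table 6.11
```

Note that the tabulated percentages were presumably computed from unrounded Emean values, so recomputing them from the rounded entries can differ in the last decimal place.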

6.3.5 Comparison with Other State-of-the-Art Metaheuristic Search Algorithms

In this subsection, the proposed PSO-DLTA is compared with the five cutting-edge MS algorithms mentioned in Section 5.3.5 to further verify the search performance of the proposed work. The simulation results of PSO-DLTA and the five compared MS algorithms in solving the ten 30-D problems are reported in Table 6.13, from which it can be observed that the proposed PSO-DLTA yields the best search accuracy among the tested algorithms. Specifically, PSO-DLTA achieves eight best Emean values out of the ten tested problems, i.e., 2.67 times more than the second-ranked OCABC, which obtains three best Emean values. It is also notable that the proposed PSO-DLTA and OCABC are the only two tested algorithms that successfully locate the global optima of the 30-D Rastrigin and Griewank problems. Although the search accuracy of PSO-DLTA in tackling the Rosenbrock and Quartic functions is not as


Table 6.13 Comparisons between PSO-DLTA and the other tested MS variants in optimizing the 30-D functions (each cell reports Emean (SD))

| Function      | RCCRO               | RCCBBO              | GSO                 | OXDE                | OCABC               | PSO-DLTA            |
| Sphere        | 6.43E−07 (2.09E−07) | 1.39E−03 (5.50E−04) | 1.95E−08 (1.16E−08) | 1.58E−16 (1.41E−16) | 4.32E−43 (8.16E−43) | 3.22E−57 (6.33E−57) |
| Schwefel 2.22 | 2.19E−03 (4.34E−04) | 7.99E−02 (1.44E−02) | 3.70E−05 (8.62E−05) | 4.38E−12 (1.93E−12) | 1.17E−22 (7.13E−23) | 2.32E−29 (2.24E−29) |
| Schwefel 1.2  | 2.97E−07 (1.15E−07) | 2.27E+01 (1.03E+01) | 5.78E+00 (3.68E+00) | 6.41E−07 (4.98E−07) | NA                  | 1.14E−74 (2.46E−74) |
| Schwefel 2.21 | 9.32E−03 (3.66E−03) | 3.09E−02 (7.27E−03) | 1.08E−01 (3.99E−02) | 1.49E+00 (9.62E−01) | 5.67E−01 (2.73E−01) | 5.80E−29 (8.38E−29) |
| Rosenbrock    | 2.71E+01 (3.43E+01) | 5.54E+01 (3.52E+01) | 4.98E+01 (3.02E+01) | 1.59E−01 (7.97E−01) | 7.89E−01 (6.27E−01) | 2.12E+01 (2.79E+00) |
| Step          | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) | 1.60E−02 (1.33E−01) | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) |
| Quartic       | 5.41E−03 (2.99E−03) | 1.75E−02 (6.43E−03) | 7.38E−02 (9.26E−02) | 2.95E−03 (1.32E−03) | 4.39E−03 (2.03E−03) | 1.00E+01 (1.09E−01) |
| Rastrigin     | 9.08E−04 (2.88E−04) | 2.62E−02 (9.76E−03) | 1.02E+00 (9.51E−01) | 4.06E+00 (1.95E+00) | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) |
| Ackley        | 1.94E−03 (4.19E−04) | 2.51E−02 (5.51E−03) | 2.66E−05 (3.08E−05) | 2.99E−09 (1.54E−09) | 5.32E−15 (1.82E−15) | 1.18E−16 (6.49E−16) |
| Griewank      | 1.12E−02 (1.62E−02) | 4.82E−01 (8.49E−02) | 3.08E−02 (3.09E−02) | 1.48E−03 (3.02E−03) | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) |
| w/t/l         | 8/1/1               | 7/1/2               | 9/0/1               | 7/1/2               | 4/3/2               | –                   |
| #BME          | 1                   | 1                   | 0                   | 2                   | 3                   | 8                   |

consistently good as that of the other compared MS peers, the former algorithm outperforms the latter in the remaining functions.

6.3.6 Comparison in Real-World Problems

This subsection evaluates the search performance of PSO-DLTA on three engineering design problems, namely (1) the gear train design problem (Sandgren, 1990), (2) the frequency-modulated (FM) sound synthesis problem (Das and Suganthan, 2010), and (3) the spread spectrum radar polyphase code design problem (Das and Suganthan, 2010). The general descriptions and the mathematical models of these three engineering problems have been provided in Section 2.4.2. All six PSO variants employed in the previous experiments (Section 6.3.3) are compared with the proposed PSO-DLTA in solving the gear train design, FM sound synthesis, and spread spectrum radar polyphase code design problems. The simulation settings of these three engineering design problems are summarized in Table 3.20. Meanwhile, the


Table 6.14 Simulation results of PSO-DLTA and six other PSO variants on the gear train design problem

| Metric | APSO     | FLPSO-QIW | FlexiPSO | OLPSO-L  | MoPSO    | PSODDS   | PSO-DLTA |
| Emean  | 1.28E-08 | 3.34E-10  | 2.36E-09 | 6.79E-09 | 9.30E-04 | 4.91E-08 | 1.05E-09 |
| SD     | 1.70E-08 | 5.78E-10  | 5.78E-10 | 1.25E-08 | 1.97E-03 | 1.62E-07 | 7.62E-10 |
| h      | +        |           | +        | +        | +        | +        |          |
| tmean  | 1.11E+02 | 1.17E+02  | 9.24E+01 | 3.83E+01 | 6.07E+01 | 5.79E+01 | 8.41E+00 |

Table 6.15 Simulation results of PSO-DLTA and six other PSO variants on the FM sound synthesis problem

| Metric | APSO     | FLPSO-QIW | FlexiPSO | OLPSO-L  | MoPSO    | PSODDS   | PSO-DLTA |
| Emean  | 2.06E+01 | 5.23E+00  | 2.17E+01 | 1.64E+01 | 2.84E+01 | 2.20E+01 | 7.87E+00 |
| SD     | 5.46E+00 | 5.90E+00  | 5.75E+00 | 5.94E+00 | 1.09E+00 | 3.35E+00 | 6.95E+00 |
| h      | +        |           | +        | +        | +        | +        |          |
| tmean  | 1.11E+02 | 1.14E+02  | 7.96E+01 | 3.55E+01 | 8.13E+01 | 7.91E+01 | 5.42E+01 |

Table 6.16 Simulation results of PSO-DLTA and six other PSO variants on the spread spectrum radar polyphase code design problem

| Metric | APSO     | FLPSO-QIW | FlexiPSO | OLPSO-L  | MoPSO    | PSODDS   | PSO-DLTA |
| Emean  | 1.33E+00 | 1.02E+00  | 1.22E+00 | 1.27E+00 | 2.56E+00 | 1.41E+00 | 1.01E+00 |
| SD     | 1.92E-01 | 6.88E-02  | 2.48E-01 | 1.97E-01 | 2.21E-01 | 2.63E-01 | 9.92E-02 |
| h      | +        | =         | +        | +        | +        | +        |          |
| tmean  | 4.96E+02 | 9.64E+02  | 2.55E+02 | 1.80E+02 | 3.77E+02 | 3.02E+02 | 1.04E+02 |

simulation results yielded by all tested algorithms over 30 independent runs on these three real-world problems are reported in Tables 6.14 to 6.16. These simulation results include the values of Emean, SD, h, and the mean computational time (tmean). The simulation results in Tables 6.14 and 6.15 reveal that the proposed PSO-DLTA is the second-best optimizer in solving the gear train design and FM sound synthesis problems because it achieves the second-best Emean values in these two real-world problems. The Wilcoxon test results (i.e., the h values) in Tables 6.14 and 6.15 verify that the Emean values obtained by PSO-DLTA are significantly better than those of APSO, FlexiPSO, OLPSO-L, MoPSO, and PSODDS. These observations suggest that the former algorithm indeed has superior search accuracy to the latter five in solving the gear train design and FM sound synthesis problems. Although FLPSO-QIW produces slightly better Emean values than PSO-DLTA, the former's desirable search

accuracies in solving the two mentioned real-world problems are drastically offset by its huge computational overhead, as represented by the tmean values. Specifically, the tmean values required by FLPSO-QIW to solve the gear train design and FM sound synthesis problems are 13.91 times and 2.10 times higher than those of PSO-DLTA, respectively. Meanwhile, Table 6.16 reports that the proposed PSO-DLTA has the best search accuracy in solving the radar polyphase code design problem because it achieves the best Emean value in this problem. The competitive search performance of PSO-DLTA in the radar polyphase code design problem is further confirmed by the Wilcoxon test, because the h values in Table 6.16 reveal that the search accuracy of PSO-DLTA is comparable with that of the second-ranked FLPSO-QIW and statistically better than those of the remaining PSO variants (i.e., APSO, FlexiPSO, OLPSO-L, MoPSO, and PSODDS). Apart from having the best search accuracy in solving the radar polyphase code design problem, the proposed PSO-DLTA also outperforms the six other compared peers in terms of mean computational time by yielding the smallest tmean value. This implies that PSO-DLTA incurs the least computational overhead among the seven tested PSO variants when solving the radar polyphase code design problem. Based on the simulation results reported in Tables 6.14 to 6.16, it is notable that the majority of the tested algorithms do not achieve a proper tradeoff between performance improvement and the extra computational overhead incurred. For instance, although FLPSO-QIW can generally solve the three employed engineering design problems with competitive Emean values, it might be less feasible for some real-world applications because the search mechanisms incorporated into this PSO variant tend to incur significantly high computational overhead.
On the other hand, the FlexiPSO, OLPSO-L, MoPSO, and PSODDS, which generally consume lower computational overhead (i.e., lower tmean values), tend to show inferior search accuracies in tackling the three employed engineering design problems, as demonstrated by their respective Emean values in Tables 6.14 to 6.16. It is worth pointing out that although OLPSO-L and MoPSO exhibit excellent search accuracies in


solving the selected benchmark problems, these two PSO variants fail to maintain their competitive performance in dealing with the real-world problems. Finally, the experimental studies in Tables 6.14 to 6.16 also reveal that the proposed PSO-DLTA has a better capability than its compared peers in balancing the tradeoff between performance improvement and the extra computational overhead incurred. Specifically, PSO-DLTA tackles the gear train design, FM sound synthesis, and spread spectrum radar polyphase code design problems with promising search accuracy without incurring a huge amount of computational overhead. These experimental findings suggest the applicability and feasibility of PSO-DLTA as a powerful optimization tool for tackling real-world optimization problems.
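As a concrete illustration of one of these benchmarks, the gear train design problem (Sandgren, 1990) is commonly formulated as choosing four integer tooth counts in [12, 60] so that the resulting gear ratio approximates 1/6.931; a minimal sketch of that objective follows. The variable ordering here follows one common formulation and may differ from the exact model given in Section 2.4.2.

```python
# Hedged sketch of the gear train design objective: minimize the squared
# deviation of the gear ratio x1*x2/(x3*x4) from the target ratio 1/6.931,
# with all four tooth counts restricted to integers in [12, 60].
def gear_train_error(x1, x2, x3, x4):
    target = 1.0 / 6.931
    return (target - (x1 * x2) / (x3 * x4)) ** 2

# A well-known near-optimal design uses the tooth counts {16, 19, 43, 49}:
err = gear_train_error(19, 16, 43, 49)    # on the order of 1e-12
```

The tiny optimal objective value explains why the Emean entries of Table 6.14 sit in the 1E-10 to 1E-04 range: the problem is easy to approximate but hard to solve exactly because of its integer search space.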

6.3.7 Discussion

The simulation results on the benchmark and real-world engineering problems have validated that the proposed PSO-DLTA has superior search accuracy, search reliability, and search efficiency, and consumes less computational overhead than the other compared PSO variants and MS algorithms. By observing the algorithmic framework, it can be concluded that the excellent performance of PSO-DLTA is attributed to the two major contributions proposed in this research work, i.e., the dimension-level task allocation (DTA) module and the individual-level task allocation (ITA) module. In more detail, the DTA module allows a particle to perform different search tasks in different dimensions of the search space. Compared with the existing population-based and individual-based task allocation approaches, the proposed DTA module offers more benefits in balancing the algorithm's exploration/exploitation searches, considering that it performs a more thorough investigation before determining the search task of a particle in each dimensional component. According to the "two steps forward, one step back" phenomenon, such a thorough investigation is necessary because even the same particle could have different characteristics in different dimensions of the search space. For example, the absolute distance between particle i and the global best particle Pg could be different in each

dimension, as shown in Equation (6.1). Thus, it is more appropriate for every particle in the swarm to select its own search strategy in each dimensional component based on its absolute distance from the Pg particle in that particular dimension. Rules 1, 2, and 3, as summarized in Figure 6.1, provide insight into how the DTA module assigns different search tasks to a particle in different dimensions of the search space. According to Rule 1, particle i escapes from the Pg particle through the relocation search when these two particles become extremely similar. The relocation search aims to provide extra diversity to the PSO-DLTA population and prevent it from converging towards a local optimum, thus alleviating the premature convergence issue. Meanwhile, Rule 2 allows particle i to perform the exploitation search in the selected dimensions when it is relatively far away from Pg in those dimensions. Rule 2 is derived from the fact that it is more urgent for particle i to learn from Pg in the d-th dimension of the search space when these two particles are less similar in that particular dimension (Jin et al., 2013). This strategy effectively guides the particle towards a more promising region of the search space and thus enhances the algorithm's search accuracy and convergence speed. Finally, Rule 3 maintains the swarm diversity by encouraging particle i to perform the exploration search in the d-th dimension of the search space when its similarity with Pg is sufficiently high in that particular dimension. On the other hand, the ITA module serves as an alternative learning phase of PSO-DLTA, as there is no guarantee that the DTA module can always improve the particle's personal best position, Pi. From Figures 6.3 and 6.4, it can be observed that the implementation details of the DTA and ITA modules are different.
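Rules 1 to 3 described above can be sketched as a per-dimension decision procedure. The thresholds `delta_low` and `delta_high` below are hypothetical; the thesis derives its own assignment criteria from Equation (6.1) and Figure 6.1, so this is only an illustration of the dimension-level idea.

```python
# Illustrative sketch of the dimension-level task allocation idea: assign one
# search task per dimension from the distance |x_i[d] - Pg[d]|. The thresholds
# are hypothetical placeholders, not the thesis's actual criteria.
def dta_assign_tasks(x_i, p_g, delta_low=1e-8, delta_high=1.0):
    """Return one search task per dimension of particle x_i against Pg."""
    tasks = []
    for xd, gd in zip(x_i, p_g):
        dist = abs(xd - gd)
        if dist < delta_low:
            tasks.append("relocation")    # Rule 1: nearly identical -> jump away
        elif dist > delta_high:
            tasks.append("exploitation")  # Rule 2: far from Pg -> learn from Pg
        else:
            tasks.append("exploration")   # Rule 3: moderately close -> diversify
    return tasks

tasks = dta_assign_tasks([0.0, 5.0, 0.3], [0.0, 1.0, 0.5])
# -> ['relocation', 'exploitation', 'exploration']
```

The key point is that one particle can simultaneously exploit in some dimensions while exploring or relocating in others, which is precisely what population-level and individual-level allocation schemes cannot express.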
The differences between these two modules are as follows. First, unlike the DTA module, the ITA module assigns the same search mechanism to the particle in all dimensional components, as indicated by Equations (6.9) and (6.10). Second, the greedy selection technique is employed by the ITA module during the relocation search, whereas the DTA module discards greedy selection when it updates the particle's current position with the relocation search. Finally, the equations


employed by the DTA and ITA modules in performing the exploration and exploitation searches are also different; see Equations (6.5), (6.7), (6.9), and (6.10). Based on these differences, it can be anticipated that the search dynamics of the DTA and ITA modules differ significantly. The necessity for these two modules to have different search dynamics is justified by the fact that the DTA module is employed to evolve the current swarm of PSO-DLTA (i.e., the particles' current positions), whilst the ITA module performs neighborhood searches around the memory swarm (i.e., the particles' personal best positions). According to the Hopkins test (Hopkins and Skellam, 1954) as performed by Epitropakis et al. (2012), the current swarm and the memory swarm demonstrate different degrees of clustering tendency during the evolution. More specifically, the H-measure of the memory swarm is higher than that of the current swarm, implying that the former exhibits a more exploitative behavior, whilst the latter exhibits a more explorative behavior. Considering that different clustering tendencies are demonstrated by the current and memory swarms, utilizing learning methodologies with similar search dynamics on these two distinct swarms tends to introduce a conflicting effect on the evolution process (Epitropakis et al., 2012). The poor performance of utilizing methodologies with similar search dynamics was demonstrated in the previous studies of Parsopoulos and Vrahatis (2002) and Epitropakis et al. (2012). These findings suggest that developing the ITA module with different search dynamics is a plausible way to enhance PSO-DLTA's performance.
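The greedy versus non-greedy acceptance of the relocated position, noted earlier as the second difference between the two modules, can be illustrated with a minimal sketch; the `propose` operator and the function names are hypothetical placeholders, not the thesis's equations.

```python
def greedy_relocate(x, fitness, propose):
    """Greedy selection as used by the ITA module's relocation search
    (sketch): the relocated position is accepted only if it improves
    the fitness (minimization); otherwise the old position is kept."""
    x_new = propose(x)
    return x_new if fitness(x_new) < fitness(x) else x

def nongreedy_relocate(x, fitness, propose):
    """The DTA module, by contrast, discards greedy selection and
    always accepts the relocated position (sketch), trading possible
    fitness loss for extra swarm diversity."""
    return propose(x)

# toy usage: sphere function and a proposal that is strictly worse
f = lambda x: sum(v * v for v in x)
worse = lambda x: [v + 1.0 for v in x]
print(greedy_relocate([0.0], f, worse))     # [0.0]  (rejected)
print(nongreedy_relocate([0.0], f, worse))  # [1.0]  (accepted)
```

The design choice is deliberate: unconditional acceptance in DTA injects diversity into the current swarm, while greedy acceptance in ITA protects the quality of the memory swarm.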

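The Hopkins test referred to above can be sketched as follows. This is a generic pure-Python illustration of the statistic (Hopkins and Skellam, 1954), not the exact H-measure procedure of Epitropakis et al. (2012); the probe count and seed are arbitrary.

```python
import math
import random

def hopkins(points, m=10, seed=0):
    """Hopkins statistic sketch: compare nearest-neighbour distances of
    m uniform probes (u) against those of m sampled swarm members (w)
    inside the swarm's bounding box.  H = sum(u) / (sum(u) + sum(w));
    H near 0.5 suggests a random (explorative) spread, H near 1 a
    clustered (exploitative) swarm."""
    rng = random.Random(seed)
    dims = range(len(points[0]))
    lo = [min(p[d] for p in points) for d in dims]
    hi = [max(p[d] for p in points) for d in dims]
    u = []
    for _ in range(m):
        probe = [rng.uniform(lo[d], hi[d]) for d in dims]
        u.append(min(math.dist(probe, q) for q in points))
    w = [min(math.dist(p, q) for q in points if q is not p)
         for p in rng.sample(points, m)]
    return sum(u) / (sum(u) + sum(w))

# Two tight clusters score far higher than a regular grid of particles.
clusters = ([[i * 1e-3, 0.0] for i in range(8)]
            + [[1.0 + i * 1e-3, 1.0] for i in range(8)])
grid = [[x / 4, y / 4] for x in range(5) for y in range(5)]
print(hopkins(clusters) > hopkins(grid))  # True
```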
6.4 Comparative Study of the Proposed PSO Variants

In this thesis, a total of four new PSO variants, namely TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA, have been proposed to improve the search performance of BPSO during the optimization process. Considering that different modifications have been introduced in these PSO variants, it is important to investigate how these strategies affect the algorithms' overall search performance. In this section, detailed comparative studies are performed to

investigate the search accuracy, search reliability, search efficiency, and algorithm complexity of the proposed TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA in solving the benchmark problems and the real-world engineering problems.

6.4.1 Comparison of the Mean Error Results

This subsection demonstrates the search accuracies of the four proposed PSO variants in solving the 30 benchmark problems described in Table 2.6. Specifically, the Emean values obtained by the TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA are reported in Table 6.17, and these results are summarized as #BME. The simulation results in Table 6.17 show that PSO-ATVTC exhibits the best search accuracy among the four proposed PSO variants because it successfully achieves 25 best Emean values out of the 30 tested functions. This is followed by PSO-DLTA, ATLPSO-ELS, and TPLPSO, which obtain #BME values of 15, 14, and 9, respectively. From Table 6.17, it is notable that the four proposed PSO variants successfully locate the global optima of all conventional problems (F1 to F8) by producing Emean = 0.00E+00, except for function F3, where ATLPSO-ELS achieves the best Emean value. For the rotated problems (F9 to F14), it is observed that TPLPSO exhibits the poorest search accuracy in this problem category because it only manages to find the global optima of two (out of six) tested problems. On the other hand, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA have more robust search accuracies in tackling the rotated search spaces because these three PSO variants successfully solve at least four rotated problems with Emean values of 0.00E+00. Similar observations can be made on the shifted problems (F15 to F22), where the proposed TPLPSO is again identified as the worst optimizer. The remaining three proposed PSO variants, on the other hand, maintain their excellent search accuracies on the shifted problems. Specifically, both ATLPSO-ELS and PSO-ATVTC solve the shifted functions F15 and F18 to F22 with comparable Emean values, as reported in Table 6.17. Although the search accuracies demonstrated by the

Table 6.17 The Emean and SD results of four PSO variants for the 50-D benchmark problems

       TPLPSO                 ATLPSO-ELS             PSO-ATVTC              PSO-DLTA
       Emean     SD           Emean     SD           Emean     SD           Emean     SD
F1     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00     6.44E-72  1.17E-71
F2     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00     9.13E-96  2.65E-95
F3     4.35E+01  1.23E+00     1.70E+01  2.89E+00     2.13E+01  9.10E-01     4.09E+01  8.45E-01
F4     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00
F5     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00
F6     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00
F7     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00
F8     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00
F9     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00     7.06E-95  1.61E-94
F10    3.40E+01  2.31E+00     2.40E+01  3.51E+00     2.34E+01  1.98E+00     4.15E+01  3.67E+00
F11    2.47E+01  5.69E+01     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00
F12    2.26E+01  4.36E+01     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00
F13    0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00
F14    4.43E+01  1.38E+01     4.21E+01  2.00E+01     0.00E+00  0.00E+00     0.00E+00  0.00E+00
F15    1.17E-02  4.69E-03     2.27E-13  4.72E-14     1.48E-13  3.11E-14     8.66E-09  7.28E-09
F16    3.09E+02  1.24E+02     6.08E+01  6.63E+01     5.92E-02  1.33E-02     1.43E-03  1.11E-03
F17    6.99E+01  7.76E+01     2.67E-01  1.01E+00     6.68E-01  6.13E-01     1.33E-01  7.28E-01
F18    6.27E-03  3.05E-03     2.10E-13  7.03E-14     1.48E-13  3.11E-14     1.12E-08  9.64E-09
F19    5.23E-03  2.36E-03     1.93E-13  5.08E-14     1.82E-13  2.54E-14     5.62E-08  8.07E-08
F20    0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00     0.00E+00  0.00E+00
F21    1.99E-02  4.63E-03     3.32E-13  7.95E-14     2.79E-13  4.22E-14     1.09E-09  3.32E-09
F22    3.97E-01  1.45E-01     2.61E-14  1.24E-14     1.42E-14  2.01E-14     1.43E-05  3.20E-05
F23    4.24E-03  7.86E-03     3.86E-03  7.15E-03     8.32E-11  8.94E-11     1.19E-02  1.49E-02
F24    2.09E+01  7.86E-02     2.07E+01  2.17E-01     2.10E+01  1.28E-01     2.10E+01  7.68E-02
F25    3.55E+06  1.15E+06     1.60E+06  6.08E+05     7.38E+05  1.73E+05     5.01E+06  1.78E+06
F26    1.68E+00  3.94E-01     1.11E+00  5.89E-01     9.92E-01  5.17E-01     1.50E+00  3.04E-01
F27    2.91E+02  1.67E+02     2.71E+02  1.55E+02     2.51E+02  1.19E+02     2.11E+02  7.13E+01
F28    3.51E+02  1.80E+02     3.22E+02  1.10E+02     1.40E+02  1.90E+01     2.34E+02  1.83E+02
F29    9.02E+02  1.13E+01     9.24E+02  6.51E+01     9.00E+02  0.00E+00     9.00E+02  0.00E+00
F30    9.29E+02  8.58E+00     9.27E+02  2.49E+01     9.00E+02  0.00E+00     9.00E+02  0.00E+00
#BME   9                      14                     25                     15

PSO-DLTA in the mentioned six shifted problems are slightly inferior to those of ATLPSO-ELS and PSO-ATVTC, the former algorithm outperforms the latter two in solving the shifted functions F16 and F17. Finally, Table 6.17 also reports that all proposed PSO variants are able to partially solve the complex problems (F23 to F26) and the composition problems (F27 to F30). Specifically, the proposed PSO-ATVTC exhibits the best search accuracy in dealing with the complex problems because it successfully achieves three (out of four) best Emean values in this problem category. Meanwhile, both PSO-ATVTC and PSO-DLTA are identified as the best optimizers for the composition problems, considering that these two proposed PSO variants manage to solve three (out of four) tested problems with the best search accuracies. It must be emphasized that, although both PSO-ATVTC and PSO-DLTA record lower Emean values than TPLPSO and ATLPSO-ELS in tackling the complex and composition problems, the qualities of the solutions obtained by these PSO variants are still not satisfactory. For instance, the Emean value produced by PSO-ATVTC for function F25 is 7.38E+05, and this best solution is still very far away from the actual global optimum of function F25. This is because the particles in these proposed PSO variants still do not have sufficient diversity to locate the shifted global optimum of such a difficult problem. There is thus room for improvement for the proposed PSO variants in tackling these complicated problems.
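For reference, the #BME metric used throughout these comparisons simply counts, per algorithm, the functions on which it attains the smallest Emean; ties are credited to every tying algorithm, which is why the #BME columns can sum to more than 30. A minimal sketch (the function and variable names are illustrative):

```python
def count_best_mean_errors(emean):
    """#BME sketch: for each function, every algorithm whose Emean ties
    the best (minimum) value gets a count.  `emean` maps an algorithm
    name to its list of Emean values, one per benchmark function."""
    algos = list(emean)
    n_funcs = len(next(iter(emean.values())))
    bme = {a: 0 for a in algos}
    for f in range(n_funcs):
        best = min(emean[a][f] for a in algos)
        for a in algos:
            if emean[a][f] == best:
                bme[a] += 1
    return bme

# toy data: a tie on the first function is credited to both algorithms
emean = {"TPLPSO": [0.0, 43.5], "PSO-ATVTC": [0.0, 21.3]}
print(count_best_mean_errors(emean))  # {'TPLPSO': 1, 'PSO-ATVTC': 2}
```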

6.4.2 Comparison of the Performance Improvement Gains

The main purpose of introducing the TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA algorithms in this thesis is to enhance the search performance of BPSO during the optimization. Therefore, it is important to investigate the performance gains achieved by the four proposed PSO variants against BPSO. The simulation results (i.e., Emean and %Improve values) obtained by the four proposed PSO variants and BPSO on each tested benchmark are reported in Table 6.18. Moreover, Table 6.19 summarizes the comparison results of the proposed TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA in each problem category, in terms of #BME and average %Improve values.

Table 6.18 Comparison of four proposed PSO variants with BPSO in 50-D problems, Emean (%Improve)

      BPSO            TPLPSO               ATLPSO-ELS           PSO-ATVTC            PSO-DLTA
F1    4.67E+03 (-)    0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)   6.44E-72 (100.000)
F2    2.08E+04 (-)    0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)   9.13E-96 (100.000)
F3    2.10E+02 (-)    4.35E+01 (79.242)    1.70E+01 (91.896)    2.13E+01 (89.828)    4.09E+01 (80.482)
F4    1.15E+02 (-)    0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)
F5    1.14E+02 (-)    0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)
F6    3.92E+01 (-)    0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)
F7    1.21E+01 (-)    0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)
F8    8.09E+00 (-)    0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)
F9    2.57E+04 (-)    0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)   7.06E-95 (100.000)
F10   1.08E+02 (-)    3.40E+01 (68.632)    2.40E+01 (77.828)    2.34E+01 (78.420)    4.15E+01 (61.712)
F11   1.70E+02 (-)    2.47E+01 (85.471)    0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)
F12   2.00E+02 (-)    2.26E+01 (88.691)    0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)
F13   2.04E+02 (-)    0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)
F14   5.80E+01 (-)    4.43E+01 (23.676)    4.21E+01 (27.506)    0.00E+00 (100.000)   0.00E+00 (100.000)
F15   2.31E+04 (-)    1.17E-02 (100.000)   2.27E-13 (100.000)   1.48E-13 (100.000)   8.66E-09 (100.000)
F16   7.72E+04 (-)    3.09E+02 (99.600)    6.08E+01 (99.921)    5.92E-02 (100.000)   1.43E-03 (100.000)
F17   9.59E+09 (-)    6.99E+01 (100.000)   2.67E-01 (100.000)   6.68E-01 (100.000)   1.33E-01 (100.000)
F18   2.93E+02 (-)    6.27E-03 (99.998)    2.10E-13 (100.000)   1.48E-13 (100.000)   1.12E-08 (100.000)
F19   3.14E+02 (-)    5.23E-03 (99.998)    1.93E-13 (100.000)   1.82E-13 (100.000)   5.62E-08 (100.000)
F20   7.21E+02 (-)    0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)   0.00E+00 (100.000)
F21   1.47E+01 (-)    1.99E-02 (99.864)    3.32E-13 (100.000)   2.79E-13 (100.000)   1.09E-09 (100.000)
F22   4.99E+01 (-)    3.97E-01 (99.205)    2.61E-14 (100.000)   1.42E-14 (100.000)   1.43E-05 (100.000)
F23   7.05E+02 (-)    4.24E-03 (99.999)    3.86E-03 (99.999)    8.32E-11 (100.000)   1.19E-02 (99.998)
F24   2.10E+01 (-)    2.09E+01 (0.569)     2.07E+01 (1.467)     2.10E+01 (-0.105)    2.10E+01 (0.093)
F25   6.11E+08 (-)    3.55E+06 (99.419)    1.60E+06 (99.739)    7.38E+05 (99.879)    5.01E+06 (99.181)
F26   2.64E+04 (-)    1.68E+00 (99.994)    1.11E+00 (99.996)    9.92E-01 (99.996)    1.50E+00 (99.994)
F27   5.35E+02 (-)    2.91E+02 (45.646)    2.71E+02 (49.337)    2.51E+02 (53.106)    2.11E+02 (60.640)
F28   5.59E+02 (-)    3.51E+02 (37.204)    3.22E+02 (42.337)    1.40E+02 (75.035)    2.34E+02 (58.193)
F29   1.07E+03 (-)    9.20E+02 (13.962)    9.23E+02 (13.682)    9.00E+02 (15.833)    9.00E+02 (15.833)
F30   1.07E+03 (-)    9.29E+02 (13.484)    9.24E+02 (13.950)    9.00E+02 (16.185)    9.00E+02 (16.185)

Table 6.19 Summarized comparison results of the four proposed PSO variants with BPSO in each problem category, #BME (average %Improve)

                                    BPSO     TPLPSO       ATLPSO-ELS   PSO-ATVTC     PSO-DLTA
Conventional Problems (F1 to F8)    0 (-)    7 (97.405)   8 (98.987)   7 (98.729)    5 (97.563)
Rotated Problems (F9 to F14)        0 (-)    2 (77.745)   4 (84.222)   6 (96.403)    4 (93.623)
Shifted Problems (F15 to F22)       0 (-)    1 (99.833)   1 (99.990)   6 (100.000)   3 (100.000)
Complex Problems (F23 to F26)       0 (-)    0 (74.995)   1 (75.300)   3 (74.943)    0 (74.865)
Composition Problems (F27 to F30)   0 (-)    0 (27.574)   0 (29.827)   3 (40.040)    3 (37.713)
Overall Results (F1 to F30)         0 (-)    9 (81.822)   14 (83.922)  25 (87.606)   15 (86.418)

Based on the results reported in Tables 6.18 and 6.19, it is observed that PSO-ATVTC achieves the most promising performance improvement gain against BPSO, attaining the highest average %Improve value of 87.606%. This is followed by PSO-DLTA, ATLPSO-ELS, and TPLPSO, which produce average %Improve values of 86.418%, 83.922%, and 81.822%, respectively. It is worth mentioning that the summarized results in Table 6.19 are consistent with the Emean values reported in Table 6.17. Specifically, it is evident that all proposed PSO variants achieve similar performance gains over BPSO in solving the conventional and complex problems, as verified by their similar average %Improve values in Table 6.19. Meanwhile, both PSO-ATVTC and PSO-DLTA prove better than TPLPSO and ATLPSO-ELS in tackling the rotated, shifted, and composition problems, given that the former two algorithms outperform the latter two in these three problem categories in terms of #BME and average %Improve values.
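The %Improve metric underlying Tables 6.18 and 6.19 is the relative reduction in mean error against BPSO. A minimal sketch follows; the formula is inferred from the reported values, which it reproduces up to rounding of the tabulated Emean figures:

```python
def percent_improve(e_bpso, e_variant):
    """%Improve sketch: %Improve = (E_BPSO - E_variant) / E_BPSO * 100.
    A value of 100 means the variant drives the mean error to zero; a
    negative value (as for PSO-ATVTC on F24) means the variant performs
    worse than BPSO on that function."""
    return (e_bpso - e_variant) / e_bpso * 100.0

# F3 at 50-D: BPSO Emean = 2.10E+02, TPLPSO Emean = 4.35E+01
print(round(percent_improve(2.10e+02, 4.35e+01), 3))  # 79.286 (table: 79.242, from unrounded data)
```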

6.4.3 Comparison of the Non-Parametric Statistical Test Results

The experimental results reported in the previous two subsections reveal that PSO-ATVTC is the best performing algorithm among the four proposed PSO variants. To further verify the competitive performance of PSO-ATVTC over the three other proposed PSO variants, a set of non-parametric statistical tests is employed in this subsection to thoroughly


Table 6.20 Wilcoxon test for the comparison of PSO-ATVTC and three other proposed PSO variants

PSO-ATVTC vs.   R+      R−      p-value
TPLPSO          421.5   43.5    2.53E-05
ATLPSO-ELS      362.0   103.0   6.64E-03
PSO-DLTA        323.0   112.0   2.16E-02

investigate whether the previously reported comparison results are significant at the statistical level. Considering that PSO-ATVTC has the best search accuracy among the four proposed algorithms, it is chosen as the control algorithm and is compared against the remaining three PSO variants (i.e., TPLPSO, ATLPSO-ELS, and PSO-DLTA) in the following non-parametric statistical analyses. Table 6.20 reports the Wilcoxon pairwise comparison results between PSO-ATVTC and the three other proposed variants. Accordingly, the R+ values produced by PSO-ATVTC against the three other proposed PSO variants are higher than the corresponding R− values. This implies that the number of problems in which PSO-ATVTC outperforms the compared PSO variants is much larger than the number of problems in which the former underperforms the latter. Another important finding from Table 6.20 is that all p-values obtained from the Wilcoxon test are smaller than α = 0.05. In other words, the pairwise comparative study confirms that the search accuracy of PSO-ATVTC is indeed significantly better than those of TPLPSO, ATLPSO-ELS, and PSO-DLTA. For multiple comparisons (García et al., 2009, Derrac et al., 2011), Table 6.21 first reports the average ranking of all proposed PSO variants and the associated p-value computed via the Friedman test. Accordingly, PSO-ATVTC is the optimizer with the best search accuracy because it achieves the smallest average rank of 1.95. This is followed by ATLPSO-ELS, PSO-DLTA, and TPLPSO, with average ranks of 2.25, 2.50, and 3.30, respectively. In addition, the Friedman test also detects a significant global difference among the four proposed variants because the p-value reported in Table 6.21 (i.e., 4.21E-04) is smaller than α = 0.05.
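The R+ and R− sums behind Table 6.20 can be sketched as follows. This is a generic illustration of the Wilcoxon signed-rank bookkeeping (as in García et al., 2009), with hypothetical toy data; the p-value then follows from the Wilcoxon signed-rank distribution, which is not reproduced here.

```python
def wilcoxon_ranks(errors_a, errors_b):
    """Sketch of the R+/R- computation.  For each problem,
    d = error_b - error_a; the |d| values are ranked (average ranks on
    ties, zero differences split evenly between the two sums).  R+
    collects ranks where algorithm A wins (d > 0, i.e. B's error is
    larger), R- where A loses."""
    diffs = [b - a for a, b in zip(errors_a, errors_b)]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):                     # average ranks for tied |d|
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1                 # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    r_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    half_zero = sum(r for r, d in zip(ranks, diffs) if d == 0) / 2
    return r_plus + half_zero, r_minus + half_zero

# toy data: algorithm A beats B on three of four problems
print(wilcoxon_ranks([0.1, 0.2, 0.3, 0.9], [0.5, 0.4, 0.8, 0.4]))  # (6.5, 3.5)
```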


Table 6.21 Average rankings and the associated p-value obtained by the four proposed PSO variants via the Friedman test

Algorithm         PSO-ATVTC   ATLPSO-ELS   PSO-DLTA   TPLPSO
Average ranking   1.95        2.25         2.50       3.30
Chi-square statistic: 18.09; p-value: 4.21E-04
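The average-ranking step of the Friedman test can be sketched as follows; the function names and toy data are illustrative. As a sanity check, plugging the average ranks of Table 6.21 into the standard chi-square formula with N = 30 problems and k = 4 algorithms reproduces the reported statistic of 18.09.

```python
def friedman_average_ranks(results):
    """On each problem, rank the algorithms by error (rank 1 = best,
    average ranks on ties), then average the ranks over all problems
    (sketch).  `results` maps an algorithm name to its error list."""
    algos = list(results)
    n = len(next(iter(results.values())))
    totals = {a: 0.0 for a in algos}
    for f in range(n):
        scores = sorted((results[a][f], a) for a in algos)
        for _pos, (val, a) in enumerate(scores, start=1):
            tied = [p for p, (v, _) in enumerate(scores, start=1) if v == val]
            totals[a] += sum(tied) / len(tied)   # average rank on ties
    return {a: totals[a] / n for a in algos}

def friedman_chi_square(avg_ranks, n):
    """Friedman statistic: X^2 = 12N/(k(k+1)) * [sum R_j^2 - k(k+1)^2/4]."""
    k = len(avg_ranks)
    return 12 * n / (k * (k + 1)) * (sum(r * r for r in avg_ranks) - k * (k + 1) ** 2 / 4)

print(friedman_average_ranks({"A": [1.0, 2.0], "B": [2.0, 1.0], "C": [3.0, 3.0]}))  # {'A': 1.5, 'B': 1.5, 'C': 3.0}
print(round(friedman_chi_square([1.95, 2.25, 2.50, 3.30], 30), 2))  # 18.09, as in Table 6.21
```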

Table 6.22 Adjusted p-values obtained by comparing the PSO-ATVTC with three other proposed PSO variants using the Bonferroni-Dunn, Holm, and Hochberg procedures

PSO-ATVTC vs.   z          Unadjusted p   Bonferroni-Dunn p   Holm p     Hochberg p
TPLPSO          4.05E+00   5.10E-05       1.54E-04            1.54E-04   1.54E-04
PSO-DLTA        1.65E+00   9.89E-02       2.97E-01            1.98E-01   1.98E-01
ATLPSO-ELS      9.00E-01   3.68E-01       1.00E+00            3.68E-01   3.68E-01

To this end, the Bonferroni-Dunn, Holm, and Hochberg tests are employed as post-hoc statistical analyses (García et al., 2009, Derrac et al., 2011) to further identify the concrete differences for the control algorithm (i.e., PSO-ATVTC). According to Table 6.22, the Bonferroni-Dunn, Holm, and Hochberg tests confirm the significant performance improvement of PSO-ATVTC over TPLPSO because the adjusted p-values (APVs) produced by these post-hoc procedures are smaller than α = 0.05. On the other hand, none of the employed post-hoc tests is able to detect significant performance differences among PSO-ATVTC, ATLPSO-ELS, and PSO-DLTA. This is because these three proposed PSO variants exhibit similar search accuracies in solving the majority of the employed benchmark functions, except for the complex and composition problems. It is expected that the performance differences among PSO-ATVTC, ATLPSO-ELS, and PSO-DLTA could be identified by employing them to solve more complex and composition problems.

6.4.4 Comparison of the Success Rate Results

The search reliabilities of TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA in solving the 30 employed benchmark problems are compared using the SR values reported in Table 6.23. In general, the results of the SR analysis in Table 6.23 show patterns similar to those of the Emean analysis in Table 6.17. Accordingly, PSO-ATVTC has the most

Table 6.23 The SR and SP values of four proposed PSO variants for the 50-D benchmark problems

             TPLPSO               ATLPSO-ELS           PSO-ATVTC            PSO-DLTA
             SR      SP           SR      SP           SR      SP           SR      SP
F1           100.00  6.65E+02     100.00  2.84E+03     100.00  7.57E+03     100.00  4.20E+04
F2           100.00  1.13E+05     100.00  8.46E+03     100.00  2.23E+04     100.00  3.81E+04
F4           100.00  1.32E+03     100.00  2.85E+03     100.00  5.84E+03     100.00  2.10E+04
F5           100.00  2.02E+03     100.00  2.91E+03     100.00  5.60E+03     100.00  2.08E+04
F6           100.00  7.03E+02     100.00  3.08E+03     100.00  1.17E+04     100.00  3.12E+04
F7           100.00  8.34E+03     100.00  3.62E+03     100.00  6.97E+03     100.00  2.46E+04
F8           100.00  7.83E+02     100.00  2.55E+03     100.00  6.87E+03     100.00  5.31E+04
F9           100.00  1.13E+05     100.00  8.25E+03     100.00  2.15E+04     100.00  2.50E+04
F11          80.00   3.16E+03     100.00  5.72E+03     100.00  2.23E+04     100.00  2.37E+04
F12          76.67   5.60E+03     100.00  5.67E+03     100.00  2.16E+04     100.00  2.50E+04
F13          100.00  6.37E+02     100.00  4.79E+03     100.00  1.71E+04     100.00  1.58E+04
F14          6.67    8.03E+04     16.67   8.05E+05     100.00  1.46E+04     100.00  6.62E+04
F15          0.00    Inf          100.00  2.65E+04     100.00  4.59E+04     100.00  1.90E+05
F17          3.33    8.84E+06     90.00   2.25E+05     10.00   2.80E+06     96.67   1.47E+05
F18          83.33   3.41E+05     100.00  1.05E+05     100.00  4.29E+04     100.00  6.43E+04
F19          93.33   3.00E+05     100.00  1.04E+05     100.00  1.22E+05     100.00  8.28E+04
F20          100.00  1.12E+03     100.00  4.78E+03     100.00  2.06E+03     100.00  4.94E+03
F21          0.00    Inf          100.00  2.33E+04     100.00  1.93E+04     100.00  1.02E+05
F22          0.00    Inf          100.00  3.10E+04     100.00  5.12E+04     100.00  1.00E+05
F23          86.67   2.94E+05     86.67   6.36E+04     100.00  6.54E+04     53.33   3.63E+05
#S/#PS/#NS   10/7/13              17/3/10              19/1/10              18/2/10
BSP          9                    6                    3                    2

competitive search reliability because it successfully solves 19 (out of 30) tested functions with SR = 100%. This is followed by PSO-DLTA, ATLPSO-ELS, and TPLPSO, which completely solve 18, 17, and 10 of the benchmark problems, respectively. Based on the SR values obtained by the four proposed PSO variants in each problem category, it is notable that TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA have similar search reliabilities in solving the conventional and rotated problems. Specifically, the


four proposed PSO variants are able to completely (i.e., SR = 100%) or partially (i.e., 0% < SR < 100%) solve all conventional and rotated problems within the predefined accuracy level ε, except for functions F3 and F10. Meanwhile, it can be observed that the search reliability of TPLPSO is impaired by the shifted fitness landscapes because it never solves (i.e., SR = 0.00%) the shifted functions F15, F16, F21, and F22 within the predefined ε. In contrast, the remaining three proposed PSO variants (i.e., ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA) successfully maintain their robust search reliabilities on the shifted problems because they completely solve six (out of eight) tested problems with SR = 100%. It is also worth emphasizing that these three PSO variants partially solve function F17, where PSO-DLTA achieves the best SR value of 96.67%. In other words, PSO-DLTA successfully solves function F17 within the predefined ε in 29 out of 30 independent runs. Finally, Table 6.23 reveals that the search reliabilities of all proposed PSO variants are severely compromised by the complicated fitness landscapes of the complex and composition problems. Specifically, none of the proposed algorithms is able to solve these two problem categories within the predefined ε, except for function F23, where TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA achieve SR values of 86.67%, 86.67%, 100%, and 53.33%, respectively. For the remaining complex and composition problems, PSO-ATVTC proves better than the other three proposed PSO variants, achieving the more competitive Emean, #BME, and %Improve values on these problems, as reported in Tables 6.17 to 6.19.
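For clarity, the SR and SP figures used in this and the next subsection can be sketched as follows, assuming the usual CEC 2005 conventions; the thesis's exact bookkeeping may differ slightly, and the variable names are illustrative.

```python
def success_rate(fes_per_run, max_fes):
    """SR sketch: percentage of independent runs that reached the
    predefined accuracy level within the fitness-evaluation budget.
    `fes_per_run` holds the FEs consumed by each run, or None for a
    run that never reached the accuracy level."""
    succ = [f for f in fes_per_run if f is not None and f <= max_fes]
    return 100.0 * len(succ) / len(fes_per_run)

def success_performance(fes_per_run, max_fes):
    """SP sketch (CEC 2005 convention): mean FEs of the successful
    runs scaled by (total runs / successful runs); float('inf') when
    no run succeeds, matching the 'Inf' entries in Table 6.23."""
    succ = [f for f in fes_per_run if f is not None and f <= max_fes]
    if not succ:
        return float("inf")
    return (sum(succ) / len(succ)) * len(fes_per_run) / len(succ)

runs = [2.0e4, 3.0e4, None]                  # 2 of 3 runs reach the accuracy level
print(round(success_rate(runs, 3.0e5), 2))   # 66.67
print(success_performance(runs, 3.0e5))      # 37500.0
```

A lower SP therefore means fewer fitness evaluations per successful run, which is why SP serves as the search-efficiency measure in Section 6.4.5.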

6.4.5 Comparison of the Success Performance Results

Table 6.23 presents the SP values produced by the four proposed PSO variants to quantitatively compare the algorithms' search efficiencies. Apart from the SP values, a total of ten convergence curves are also depicted in Figure 6.7 to qualitatively visualize the convergence speeds of the TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA. Specifically, Figure 6.7 consists of two convergence curves each from the conventional (F1 and F7),


Figure 6.7: Convergence curves of 50-D problems: (a) F1, (b) F7, (c) F9, (d) F12, (e) F15, and (f) F18.

rotated (F9 and F12), shifted (F15 and F18), complex (F25 and F26), and composition (F27 and F29) problems.


Figure 6.7 (Continued): Convergence curves of 50-D problems: (g) F25, (h) F26, (i) F27, and (j) F29.

Based on the experimental results reported in Table 6.23, it is observed that both TPLPSO and ATLPSO-ELS have competitive search efficiencies, achieving eight and seven best SP values, respectively. Meanwhile, PSO-ATVTC and PSO-DLTA have comparable convergence characteristics, obtaining three and two best SP values, respectively. By examining each problem category, it is notable that the competitive #BSP values produced by both TPLPSO and ATLPSO-ELS are attributed to their rapid convergence in solving the simpler conventional and rotated benchmark problems. Specifically, TPLPSO and ATLPSO-ELS obtain eight and three best SP values in these two problem categories, respectively. It is also worth mentioning that PSO-ATVTC exhibits a search efficiency similar to that of ATLPSO-ELS in solving the conventional and


rotated problems because these two algorithms produce comparable SP values in these two problem categories. The excellent search efficiencies of TPLPSO, ATLPSO-ELS, and PSO-ATVTC are visually demonstrated in Figure 6.7. More particularly, the convergence curves of TPLPSO and ATLPSO-ELS for the conventional functions F1 and F7 [as depicted in Figures 6.7(a) and 6.7(b), respectively] and the rotated functions F9 and F12 [as depicted in Figures 6.7(c) and 6.7(d)] drop off sharply at the early or middle stages of the optimization process. This implies that TPLPSO, ATLPSO-ELS, and PSO-ATVTC require a considerably small number of fitness evaluations (FEs) to locate the global optima of these problems. On the other hand, the four proposed PSO variants show different convergence characteristics when they are applied to the shifted problems. TPLPSO fails to maintain its excellent search efficiency, producing the worst SP values for functions F17 and F18. Moreover, TPLPSO is also the only proposed PSO variant that never solves functions F15, F21, and F22 within the predefined ε, and thus its SP values for these functions are assigned as "Inf". On the other hand, the search efficiencies of ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA on the shifted problems are competitive because each of these proposed PSO variants achieves two best SP values among the eight tested problems. The convergence speeds of the four proposed PSO variants in solving the shifted problems are demonstrated by their respective convergence curves in Figures 6.7(e) and 6.7(f). Accordingly, both ATLPSO-ELS and PSO-ATVTC have similar convergence curves for functions F15 and F18 [as depicted in Figures 6.7(e) and 6.7(f), respectively], and these observations are consistent with their respective SP values recorded in Table 6.23.
On the other hand, the convergence curves for functions F15 and F18 reveal that the search efficiency of PSO-DLTA surpasses that of TPLPSO because the former converges much faster than the latter throughout the optimization process. Finally, for the complex and composition problems, no SP values are available for comparison, except for function F23, where ATLPSO-ELS and PSO-ATVTC achieve the best and second best SP values, respectively. The convergence curves of functions F25, F26,

F27, and F29 show that TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA exhibit similar convergence characteristics in tackling the problems with more complicated fitness landscapes. Among the four proposed PSO variants, TPLPSO is identified as the least efficient optimizer for the complex and composition problems because it exhibits the lowest convergence speed, especially during the early stage of the search process [see Figures 6.7(g) to 6.7(j)].

6.4.6 Comparison of the Algorithm Complexity Results

In this subsection, the computational complexities of the TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA are quantitatively compared in Table 6.24 by using the AC analysis described in Figure 2.11. Table 6.24 reports that TPLPSO has the lowest computational complexity at D = 50, producing the smallest AC value among the four proposed PSO variants. This is followed by PSO-ATVTC, PSO-DLTA, and ATLPSO-ELS. The outperformance of TPLPSO over the other three proposed PSO variants, in terms of the AC value, is reasonable because the former algorithm has fewer and simpler algorithmic modules than the latter three. On the other hand, the highest AC value, produced by ATLPSO-ELS, implies that the mechanisms employed by this algorithm to adaptively balance the exploration/exploitation searches have the most complex implementation among the four proposed algorithms. Compared with ATLPSO-ELS, both PSO-ATVTC and PSO-DLTA are considered to have moderate computational complexities because their AC values are lower than that of ATLPSO-ELS. Unlike TPLPSO, which suffers from a significant trade-off between performance gains and the increase in computational complexity, both PSO-ATVTC and PSO-DLTA successfully maintain their excellent search performance despite having relatively simpler implementations than ATLPSO-ELS. Based on the experimental results in Table 6.24, it can be concluded that both PSO-ATVTC and PSO-DLTA emerge as better optimizers than TPLPSO and ATLPSO-ELS


Table 6.24 AC results of the four proposed PSO variants in D = 50

       TPLPSO     ATLPSO-ELS   PSO-ATVTC   PSO-DLTA
T0     1.88E-01   1.88E-01     1.88E-01    1.88E-01
T1     4.19E+00   4.19E+00     4.19E+00    4.19E+00
T̂2     2.05E+02   5.93E+02     3.44E+02    5.29E+02
AC     1.07E+03   3.31E+03     1.81E+03    2.79E+03

because the former two algorithms substantially enhance the search performance of PSO without imposing excessive computational complexity on the algorithms.

6.4.7 Comparison in Real-World Problems

In this section, the feasibility and applicability of the TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA in solving (1) the gear train design problem (Sandgren, 1990), (2) the frequency-modulated (FM) sound synthesis problem (Das and Suganthan, 2010), and (3) the spread spectrum radar polyphase code design problem (Das and Suganthan, 2010) are investigated. The simulation results [i.e., Emean, SD, h, and mean computational time (tmean)] yielded by the four proposed PSO variants over the 30 independent runs for these three real-world problems are reported in Tables 6.25 to 6.27. Table 6.25 shows that all of the TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA exhibit competitive search accuracy on the gear train design problem because they successfully solve this real-world problem at an accuracy level of 10^-9 or 10^-10. Among the four proposed PSO variants, PSO-ATVTC is identified as the best optimizer, producing the smallest Emean value. This is followed by PSO-DLTA, TPLPSO, and ATLPSO-ELS. Meanwhile, PSO-DLTA requires the least computational overhead to solve the gear train design problem, recording the smallest tmean value. Among the four proposed PSO variants, ATLPSO-ELS exhibits the worst overall search performance because it solves the given problem with the worst accuracy (i.e., highest Emean) and the highest computational overhead (i.e., highest tmean). Meanwhile, Table 6.26 reports that both PSO-ATVTC and PSO-DLTA exhibit competitive search accuracy in solving the FM sound synthesis problem, considering that

Table 6.25 Simulation results of the four proposed PSO variants in the gear train design problem

        TPLPSO     ATLPSO-ELS   PSO-ATVTC   PSO-DLTA
Emean   1.36E-09   1.65E-09     1.33E-10    1.05E-09
SD      1.57E-09   4.94E-09     2.76E-10    7.62E-10
tmean   3.76E+01   3.89E+01     1.53E+01    8.41E+00

Table 6.26 Simulation results of the four proposed PSO variants in the FM sound synthesis problem

        TPLPSO     ATLPSO-ELS   PSO-ATVTC   PSO-DLTA
Emean   1.42E+01   1.53E+01     5.77E+00    7.87E+00
SD      7.23E+00   1.06E+00     3.72E+00    6.95E+00
tmean   5.09E+01   3.30E+01     2.37E+01    5.42E+01

Table 6.27 Simulation results of the four proposed PSO variants in the spread spectrum radar polyphase code design problem

        TPLPSO     ATLPSO-ELS   PSO-ATVTC   PSO-DLTA
Emean   9.28E-01   1.01E+00     1.03E+00    1.01E+00
SD      1.41E-01   1.95E-01     2.13E-01    9.92E-02
tmean   2.08E+02   1.81E+02     1.09E+02    1.04E+02

these two proposed PSO variants successfully achieve the best and second best Emean values, respectively. Despite having the second best search accuracy, PSO-DLTA tends to consume the most computational resources in tackling the given problem, as revealed by its tmean value. On the other hand, the best performing PSO-ATVTC does not suffer from high computational overhead because it produces the lowest tmean value. Moreover, Table 6.26 also reveals that both TPLPSO and ATLPSO-ELS are less feasible for solving the FM sound synthesis problem because these two proposed PSO variants tend to produce higher values of Emean (inferior search accuracy) and tmean (high computational overhead). Finally, the experimental results in Table 6.27 show that all of the TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA exhibit comparable search accuracies in dealing with the spread spectrum radar polyphase code design problem because similar Emean values are produced. Among the four proposed PSO variants, TPLPSO achieves the best Emean value, implying that it has the most superior search accuracy in solving this engineering


design problem. It is notable that although TPLPSO produces a better Emean value than the remaining three PSO variants, its desirable search accuracy in solving the spread spectrum radar polyphase code design problem is severely compromised by its huge computational overhead, as represented by its highest tmean value. On the other hand, PSO-DLTA solves the given engineering design problem with the least computational resources because it records the lowest tmean value, followed by PSO-ATVTC, ATLPSO-ELS, and TPLPSO.

Based on the experimental results reported in Tables 6.25 to 6.27, PSO-ATVTC emerges as the best proposed PSO variant in tackling the three mentioned real-world problems. In terms of search accuracy, PSO-ATVTC produces the best Emean values in two out of the three given problems. Moreover, PSO-ATVTC is also identified as the most efficient PSO variant because it produces the lowest tmean value in one problem and the second lowest tmean values in the other two engineering design problems. It is also notable that although PSO-ATVTC produces the worst Emean value in solving the spread spectrum radar polyphase code design problem, the performance difference between this algorithm and the best performing TPLPSO is marginal, and it is far smaller than the margin by which PSO-ATVTC outperforms TPLPSO in terms of tmean.

6.4.8 Remarks

This subsection provides some remarks regarding the overall search performance of the four new PSO variants proposed in this thesis. Specifically, the search accuracy, search reliability, search efficiency, and computational complexity of the TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA in each problem category are discussed in detail to provide a better overview of each algorithm's overall search performance.

Based on the previously reported results, it is evident that TPLPSO has the lowest computational complexity (i.e., the lowest AC value), implying that this algorithm has the simplest implementation among the four proposed PSO variants. It is also notable that TPLPSO tends to yield excellent search accuracy (Emean), search reliability (SR), and search efficiency (SP)

in the simpler benchmark problems, such as the conventional problems and the rotated problems. Nevertheless, the experimental study shows that the search performance of TPLPSO in dealing with the shifted, complex, and composition problems is degraded significantly as compared to the other three proposed PSO variants. These observations suggest that the search mechanisms employed by the TPLPSO are not robust enough to track the shifted global optima in these complicated fitness landscapes. The lack of adaptive mechanisms to adjust the exploration/exploitation searches of the particles and the absence of a bidirectional learning mechanism between the improved particles and the global best particle might be the two main reasons for the inferior performance of TPLPSO in solving the more challenging benchmark problems.

For the ATLPSO-ELS, the experimental results in Tables 6.17, 6.18, 6.19, and 6.23 show that this second proposed algorithm also performs competitively in the simpler benchmark problems, i.e., the conventional and rotated problems. Unlike the TPLPSO, which suffers severe performance impairment in tackling the shifted, complex, and composition problems, ATLPSO-ELS is able to achieve relatively good search performance in these three problem categories. Specifically, ATLPSO-ELS successfully locates the near-global optima of the majority of the shifted functions (i.e., functions F15, F18, F19, F20, F21, and F22), unlike the TPLPSO. The significant performance gains of ATLPSO-ELS in tackling the shifted, complex, and composition problems could be attributed to the fact that this proposed algorithm is equipped with adaptive task allocation mechanisms which can systematically assign different search tasks to different ATLPSO-ELS particles. Moreover, the development of the bidirectional learning mechanism between the improved particles and the global best particle also contributes to the performance improvement of ATLPSO-ELS.
This is because the bidirectional learning mechanism tends to accelerate the global best particle towards the promising regions of the search space by using the useful information extracted from the improved ATLPSO-ELS particles. Despite achieving significant search performance gains, ATLPSO-ELS suffers from high computational complexity, producing the largest AC values among the four proposed PSO variants. This is because

considerable amounts of metrics and parameters have been introduced into the ATLPSO-ELS to perform its adaptive task allocation mechanisms. Notably, the derivations of some metrics employed by the ATLPSO-ELS are computationally intensive.

Meanwhile, the third proposed PSO variant, i.e., PSO-ATVTC, is reported to have better search accuracy than the ATLPSO-ELS in tackling all tested problem categories. The outperformance of PSO-ATVTC over the ATLPSO-ELS in terms of search accuracy is attributed to the fact that the former offers the particles more choices of exploration/exploitation strengths via the ATVTC module. This merit enables each PSO-ATVTC particle to select the learning strategy with the appropriate exploration/exploitation strengths according to its search status and its location in the search space. In terms of search efficiency, PSO-ATVTC produces SP values comparable to those of ATLPSO-ELS, but slightly higher than those of TPLPSO, especially in tackling the simpler benchmark problems (i.e., the conventional and rotated problems). Such an observation is reasonable because the ATVTC module requires some extra fitness evaluations (FEs) to determine the appropriate topology connectivity of each PSO-ATVTC particle, in order to assign it the appropriate exploration/exploitation strengths. It is also noteworthy that, unlike ATLPSO-ELS, the significant improvement achieved by the PSO-ATVTC is not traded for huge computational complexity. Specifically, PSO-ATVTC produces the second best AC values among the four proposed PSO variants, implying that the mechanisms employed by the PSO-ATVTC to adaptively adjust the particle's exploration/exploitation strengths are much simpler, and yet more efficient, than those of ATLPSO-ELS. Moreover, as compared to the ATLPSO-ELS, which introduces four parameters (K1, K2, Z, and m), only one parameter, Z, is introduced by the PSO-ATVTC.
Thus, it can be anticipated that the parameter tuning process of the latter algorithm is less tedious and less time consuming than that of the former.

Finally, the last proposed PSO variant, i.e., PSO-DLTA, also shows promising search performance in solving the majority of the tested benchmarks. Despite exhibiting slightly lower search efficiency (SP) than the other three PSO variants in solving the simpler

conventional and rotated problems, the overall search performance (i.e., Emean, SR, and SP values) of PSO-DLTA is comparable with that of the best performing PSO-ATVTC in the problems with more complicated search spaces (e.g., the shifted and composition problems). The slightly inferior SP values produced by the PSO-DLTA in solving the simpler benchmarks might be due to the DTA module incorporated into this algorithm. Specifically, the task allocation mechanism of the DTA module is executed dimension-wise, and it tends to perform a more thorough search in locating the global optimum solution of a given problem. Such a thorough search might not be necessary for solving the simpler benchmarks because it tends to consume extra FEs, which then leads to the slightly higher SP values. Despite having search performance comparable with that of ATLPSO-ELS, PSO-DLTA is reported to have lower computational complexity than the former algorithm, producing a lower AC value. This observation suggests that both the DTA and ITA modules are viable alternatives for improving the search performance of PSO without incurring excessive computational complexity. Another merit of PSO-DLTA is that only one parameter, Z, has been introduced into this proposed PSO variant, as compared to the ATLPSO-ELS, which introduces four parameters. Therefore, it is also expected that the parameter tuning process of the PSO-DLTA is less tedious and less time consuming than that of the ATLPSO-ELS.

Based on the aforementioned remarks, it could be concluded that both the TPLPSO and ATLPSO-ELS are the less desirable optimizers among the four proposed PSO variants. This is because the former algorithm has poor search performance in tackling the problems with more complicated fitness landscapes, whereas the latter's excellent search performance comes at the price of huge computational complexity.
Meanwhile, both the PSO-ATVTC and PSO-DLTA emerge as the more desirable optimizers among the four proposed PSO variants, considering that these two PSO variants substantially enhance the search performance of PSO without severely compromising the complexity of their algorithmic frameworks.
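The adaptive topology connectivity idea behind the ATVTC module discussed above can be illustrated with a hypothetical sketch; the connectivity bounds and the increase-then-reset rule below are illustrative assumptions, not the thesis's exact scheme:

```python
import random

def update_connectivity(k, improved, k_min=1, k_max=9):
    """Illustrative ATVTC-style rule (hypothetical, not the thesis's exact
    criteria): a particle that fails to improve is given denser topology
    connectivity (more neighbors, faster information flow); once
    connectivity saturates at k_max, it is reset to k_min so that the
    neighborhood members can be reshuffled."""
    if improved:
        return k          # keep the current topology connectivity
    if k < k_max:
        return k + 1      # increase connectivity after a failed improvement
    return k_min          # saturated: reset and reshuffle the neighborhood

def shuffle_neighbors(swarm_indices, k):
    """Draw k distinct neighborhood members from the swarm."""
    return random.sample(swarm_indices, k)
```

Under such a rule, each particle effectively carries its own neighborhood size, which is why each particle can be viewed as searching with a different exploration/exploitation strength.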


6.5 Summary

In this chapter, a new PSO variant called the PSO-DLTA is developed. Unlike the three PSO variants introduced in the previous three chapters, a DTA module is incorporated into the PSO-DLTA to achieve a better balance of exploration/exploitation searches. Specifically, the proposed DTA module first computes the absolute distance between a particular particle and the global best particle in each dimension of the search space. Based on these computed distances, the proposed DTA module employs three rules to assign different types of search tasks to the different dimensional components of the particle. Considering that the DTA module does not always guarantee an improvement of the particle's fitness, an ITA module is then proposed as the alternative learning phase of PSO-DLTA. Unlike the DTA module, the ITA module performs individual-level task allocation on the memory swarm members of PSO-DLTA. Simulation results reveal that the proposed PSO-DLTA significantly outperforms its peers in terms of search accuracy, search reliability, search efficiency, and computational complexity. This implies that the employment of the dimension-level task allocation approach is a promising way of enhancing the search performance of PSO. The simulation results further prove that both the DTA and ITA modules are indispensable for the PSO-DLTA to solve problems with various types of fitness landscape.

Considering that a total of four new PSO variants have been proposed in this thesis, it is important to study the performance differences between these proposed algorithms. To this end, a series of comparative studies is conducted to investigate the overall search performance of the four proposed PSO variants.
Based on the results obtained from these comparative studies, it could be concluded that both the TPLPSO and ATLPSO-ELS are relatively undesirable because the former has inferior search accuracy in solving the complicated problems, whereas the latter tends to incur excessively high computational complexity. On the other hand, both the PSO-ATVTC and PSO-DLTA emerge as better optimizers because of their ability to significantly enhance the search performance of PSO without severely compromising the complexity of their algorithmic frameworks.


CHAPTER 7

CONCLUSION AND FUTURE WORKS

7.1 Conclusion

The main thrust of this research is to enhance the search performance of PSO by introducing a set of robust learning strategies into the algorithm. To this end, a total of four new PSO variants, called TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA, have been developed in this thesis. These four enhanced PSO variants successfully address some of the main challenges encountered by the original PSO during the optimization process. These challenges include premature convergence, the balancing of exploration/exploitation searches, and the escalating complexity of a given problem's fitness landscape. This chapter summarizes the research contributions achieved during the course of this study. A number of future works are also provided at the end of this chapter.

The first contribution of this research is the development of an alternative learning phase (i.e., the peer-learning phase) in the TPLPSO. This alternative learning phase allows a particle to learn from its peer particles when the earlier learning phase, guided by the global best particle, is not benefiting the particle's search. By capitalizing on the information acquired from the peer particles, the proposed alternative learning phase allows the TPLPSO particle to explore new search trajectories in seeking the global optimum of a given problem. Another notable innovation introduced into the TPLPSO is the stochastic perturbation-based learning strategy (SPLS). SPLS is a unique learning strategy that is specifically developed to evolve the global best particle (Pg) of TPLPSO, considering that the latter is crucial in guiding the swarm during the search process. More particularly, SPLS acts as a countermeasure against premature convergence by providing more exploratory moves to the Pg particle (via the random perturbation mechanism) when the latter is trapped in an inferior region of the search space for a sufficiently long time.
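The random perturbation mechanism of SPLS can be illustrated with a hypothetical sketch; the stagnation threshold and the single-dimension reinitialization below are illustrative assumptions rather than the thesis's actual settings:

```python
import random

def spls_perturb(pg, stall_count, max_stall=5, lb=-100.0, ub=100.0):
    """Illustrative SPLS-style move (hypothetical settings): if the global
    best particle pg has stagnated for at least max_stall iterations,
    randomly reinitialize one of its dimensions within the search bounds
    [lb, ub] to provide an exploratory move; otherwise pg is left as is."""
    new_pg = list(pg)
    if stall_count < max_stall:
        return new_pg                      # still improving: no perturbation
    d = random.randrange(len(new_pg))      # choose one dimension at random
    new_pg[d] = random.uniform(lb, ub)     # perturb it within the bounds
    return new_pg
```

The key design point is that the perturbation is triggered only by prolonged stagnation, so exploitation around a genuinely promising Pg is not disturbed.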
Experimental results show that the integration of both the peer-learning phase and SPLS into the TPLPSO successfully improves


the algorithm's search accuracy, search reliability, and search efficiency in tackling the simpler benchmark problems (e.g., the conventional and rotated problems).

To improve the algorithm's overall search performance in tackling problems with more challenging fitness landscapes, the second contribution of this research is introduced via the ATLPSO-ELS. This contribution involves the development of two adaptive task allocation (ATA) modules, which aim to achieve better regulation of the exploration/exploitation searches during the current swarm evolution and the memory swarm evolution of ATLPSO-ELS. Specifically, both the ATAcurrent and ATAmemory modules are developed to systematically divide the current swarm and the memory swarm of ATLPSO-ELS into exploration and exploitation sections based on the swarm members' fitness and diversity. These adaptive population division strategies ensure that there is always a certain (but not fixed) number of particles performing the exploration and exploitation searches in both the current and memory swarms of ATLPSO-ELS. Another important modification introduced into the ATLPSO-ELS is the orthogonal experiment design-based learning strategy (OEDLS). Similar to SPLS, OEDLS is also proposed to specifically evolve the Pg particle. Nevertheless, unlike the SPLS, which mainly governs the exploration capability of the Pg particle, OEDLS focuses on enhancing the Pg particle's exploitation capability by establishing a bidirectional learning mechanism between the improved swarm members and the Pg particle. More particularly, OEDLS is designed to extract useful information from the improved swarm members and then utilize this information to further improve the Pg particle. In general, ATLPSO-ELS achieves significant performance improvement in dealing with the more challenging benchmarks (i.e., the shifted, complex, and composition problems), but with a tradeoff of escalated computational complexity.
To reduce the algorithm's computational complexity, the third research contribution is subsequently introduced via the PSO-ATVTC. Specifically, this research contribution aims to offer an innovative yet efficient approach to adjust the particle's exploration/exploitation strengths during the search process. Considering that different types of neighborhood

structures have different impacts on the transmission rate of the best solution within the PSO swarm (Kennedy, 1999), an ATVTC module is implemented to realize this appealing capability of PSO-ATVTC by adaptively varying the neighborhood structure of each PSO-ATVTC particle. Specifically, the topology connectivity of each PSO-ATVTC particle with respect to its neighborhood members can be increased, decreased, or shuffled by the ATVTC module according to the particle's search status. Based on the current information provided by the existing neighborhood members of each PSO-ATVTC particle, three different exemplars are then constructed to guide the particle's search direction. Notably, PSO-ATVTC is considered a PSO variant with multiple learning strategies. This is because different PSO-ATVTC particles have different neighborhood structures, and each of them can be interpreted as a particle that performs the search via a learning strategy with different exploration/exploitation strengths (Li et al., 2012). An extensive experimental study reveals that PSO-ATVTC achieves significant performance improvement in all tested problem categories (i.e., the conventional, rotated, shifted, complex, and composition problems). Another observation that needs to be emphasized is that the excellent search performance of PSO-ATVTC is not traded for huge computational complexity. This implies that the mechanisms offered by the ATVTC module to adaptively adjust the particle's exploration/exploitation strengths are indeed effective and efficient in improving the overall optimization capability of PSO.

Another notable research contribution of this thesis is the feasibility study of the dimension-level task allocation mechanism in PSO-DLTA. Unlike most existing PSO variants, which assign the same search task to all dimensional components of a particle, PSO-DLTA has the capability of performing dimension-level task allocation.
Specifically, the DTA module proposed in the PSO-DLTA considers the unique distance characteristic computed between a particle and the Pg particle in each dimensional component. These distances are then capitalized on by the DTA module to assign the PSO-DLTA particle different search tasks (i.e., relocation, exploration, or exploitation) in different dimensions of the search space. Experimental studies show that the dimension-wise mechanisms embedded

in the DTA module allow the PSO-DLTA to perform a more thorough search in locating the global optimum solution of a given problem. In general, PSO-DLTA successfully solves the majority of the tested problems with promising search performance and low computational complexity. These observations suggest that the dimension-level task allocation mechanism employed in the PSO-DLTA could emerge as another viable approach to improve the algorithm's performance without severely jeopardizing the algorithm's complexity.
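The per-dimension rule set of the DTA module can be illustrated with a hypothetical sketch; the two distance thresholds below are illustrative assumptions standing in for the thesis's actual three-rule criteria:

```python
def assign_dimension_tasks(x, pg, r_near=1e-3, r_far=1.0):
    """Illustrative DTA-style allocation (thresholds r_near and r_far are
    hypothetical): for each dimension, the absolute distance between a
    particle x and the global best particle pg decides whether that
    dimensional component should exploit, explore, or relocate."""
    tasks = []
    for xd, gd in zip(x, pg):
        dist = abs(xd - gd)
        if dist < r_near:
            tasks.append("exploitation")   # already close to pg: fine-tune
        elif dist < r_far:
            tasks.append("exploration")    # moderately far: search nearby
        else:
            tasks.append("relocation")     # far from pg: jump towards it
    return tasks
```

Because each dimensional component receives its own task, a particle can simultaneously refine well-placed dimensions and relocate badly placed ones, which is the essence of the dimension-level approach.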

7.2 Future Works

This section discusses several future work directions that can be pursued to overcome the limitations of the proposed PSO variants and to open up interesting extensions of the current research. These future works are explained as follows.

7.2.1 Development of Fully Self-Adaptive Framework

By observing the algorithmic frameworks of the TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA, it is notable that these proposed PSO variants contain some parameters (i.e., Z, R, K1, and K2) that need to be manually tuned before they are applied to solve a given problem. Moreover, parameters such as the inertia weight (i.e., ω) and the acceleration coefficients (i.e., c1 and c2) of these proposed PSO variants are set according to the recommendations of the literature. One idea for improving the current research work is to upgrade the algorithmic frameworks of these proposed PSO variants to fully self-adaptive versions. The main purpose of this self-adaptive structure is to adaptively determine the optimal parameter settings of each proposed algorithm without any prior knowledge of the given problems. With this self-adaptive structure, the four proposed PSO variants could become truly intelligent algorithms with the capability to learn and self-evolve according to the knowledge learned from their surrounding environments.
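For reference, the inertia weight and acceleration coefficients mentioned above enter the canonical PSO update that all four proposed variants build upon; a minimal sketch, using commonly recommended literature settings for w, c1, and c2:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """One canonical PSO velocity and position update for a single particle:
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), then x <- x + v."""
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)
             + c2 * random.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

A self-adaptive framework would replace the fixed keyword defaults above with values learned online from the search behavior itself.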


7.2.2 Applicability in Different Classes of Optimization Problems

It is also notable that although the TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA have exhibited superior performance in the previously reported experiments, these proposed PSO variants are applicable only to solving single-objective (SO) and unconstrained global optimization problems with continuous search spaces. Nevertheless, in the research arena of computational intelligence, numerous other types of optimization problems exist. These include problems with (1) discrete and mixed search spaces, (2) multiple objectives (MO), and (3) multimodality, i.e., problems with multiple global optima. It is important to emphasize that the latter problems have a rather different perspective as compared with the SO and unconstrained global optimization problems encountered in the current research work. For instance, multimodal optimization problems require the algorithm to simultaneously locate multiple global optimum solutions in the problem search space, instead of focusing on a single optimum. Meanwhile, MO problems consist of more than one objective function that need to be simultaneously dealt with during the optimization process. Moreover, unlike SO problems, which have only one global optimum, MO problems consider a set of equally important solutions called the Pareto-optimal set. Based on these descriptions, it can be deduced that more work needs to be done to further extend the applicability of TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA to these general classes of optimization problems. For instance, two main aspects need to be considered in adapting the four proposed PSO variants for MO problems. First, these proposed PSO variants need to employ strategies such as Pareto-ranking or Pareto-sorting to guide the solutions towards the Pareto-frontier (Fonseca and Fleming, 1995).
Second, mechanisms such as sharing or niching methods need to be incorporated into the four proposed PSO variants to ensure that a set of well-distributed solutions is generated across the Pareto-frontier (Fonseca and Fleming, 1995). Please refer to Page et


al. (2012) for more discussion of the extension of a global optimization algorithm to facilitate its application in solving other classes of optimization problems.
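The Pareto-ranking idea mentioned above rests on the standard dominance relation; a minimal sketch for a minimization setting (not tied to any particular PSO variant):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In an MO extension of the proposed variants, the single Pg particle would be replaced by an archive of such non-dominated solutions, with niching or sharing used to keep the archive well spread.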

7.2.3 Hybridization with Other Metaheuristic Search Algorithms

Another promising direction for extending the current research work is hybridization, a widely used strategy to improve the search performance of a metaheuristic search (MS) algorithm. This strategy incorporates the desired capabilities of different search operators to mitigate the drawbacks of an MS algorithm. The first type of hybridization that can be conducted in the future involves the combination of any module proposed in this research work with any other MS algorithm. This is because the majority of the algorithmic modules proposed in this study (e.g., SPLS, OEDLS, EKE, ATVTC, DLTA, etc.) are generic and can be easily adapted into the framework of any MS algorithm. It is worth studying whether these algorithmic modules could improve the search performance of MS algorithms other than PSO. The second type of hybridization that can be performed is to merge the TPLPSO, ATLPSO-ELS, PSO-ATVTC, and PSO-DLTA with other MS algorithms into a single framework. This type of hybrid optimizer has attracted persistent attention from researchers over the past decades because such optimizers promise to achieve top performance in solving the most difficult problems. Many different hybridization strategies have been proposed in the literature to effectively combine two different MS algorithms. These strategies include (1) collaboration-based hybridization, (2) assistance-based hybridization, and (3) embedding-based hybridization (Bin et al., 2012). Please refer to Bin et al. (2012) for more detailed descriptions of these hybridization strategies.


REFERENCES

AKHTAR, J., KOSHUL, B. B. & AWAIS, M. M. 2013. A framework for evolutionary algorithms based on Charles Sanders Peirce's evolutionary semiotics. Information Sciences, 236 (0), 93-108.

ALRASHIDI, M. R. & EL-HAWARY, M. E. 2009. A survey of particle swarm optimization applications in electric power systems. IEEE Transactions on Evolutionary Computation, 13 (4), 913-918.

ANGELINE, P. J. 1998. Using selection to improve particle swarm optimization. In: Proceedings of 1998 IEEE International Conference on Evolutionary Computation (CEC'98), 4-9 May 1998, Anchorage, Alaska, USA, 84-89.

BACK, T. 1996. Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms, New York, USA, Oxford University Press.

BACK, T., FOGEL, D. B. & MICHALEWICZ, Z. 1997. Handbook of evolutionary computation, IOP Publishing Ltd.

BANKS, A., VINCENT, J. & ANYAKOHA, C. 2007. A review of particle swarm optimization. Part I: background and development. Natural Computing, 6 (4), 467-484.

BANKS, A., VINCENT, J. & ANYAKOHA, C. 2008. A review of particle swarm optimization. Part II: hybridisation, combinatorial, multicriteria and constrained optimization, and indicative applications. Natural Computing, 7 (1), 109-124.

BASTOS-FILHO, C. J. A., CARVALHO, D. F., FIGUEIREDO, E. M. N. & DE MIRANDA, P. B. C. 2009. Dynamic clan particle swarm optimization. In: 2009 Ninth International Conference on Intelligent Systems Design and Applications (ISDA '09), 30 November-2 December 2009, Pisa, Italy, 249-254.

BEHESHTI, Z., SHAMSUDDIN, S. M. H. & HASAN, S. 2013. MPSO: median-oriented particle swarm optimization. Applied Mathematics and Computation, 219 (11), 5817-5836.

BEYER, H.-G. & SCHWEFEL, H.-P. 2002. Evolution strategies: a comprehensive introduction. Natural Computing, 1 (1), 3-52.

BIANCHI, L., DORIGO, M., GAMBARDELLA, L. & GUTJAHR, W. 2009. A survey on metaheuristics for stochastic combinatorial optimization. Natural Computing, 8 (2), 239-287.

BIN, X., JIE, C., JUAN, Z., HAO, F. & ZHI-HONG, P. 2012. Hybridizing differential evolution and particle swarm optimization to design powerful optimizers: a review and taxonomy. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 42 (5), 744-767.

BLUM, C. & ROLI, A. 2003. Metaheuristics in combinatorial optimization: overview and conceptual comparison. ACM Computing Surveys, 35 (3), 268-308.


BLUM, C., VALLÈS, M. Y. & BLESA, M. J. 2008. An ant colony optimization algorithm for DNA sequencing by hybridization. Computers & Operations Research, 35 (11), 3620-3635.

BOETTCHER, S. & PERCUS, A. G. 1999. Extremal optimization: methods derived from co-evolution. In: DAIDA, J., EIBEN, A. E., GARZON, M. H., HONAVAR, V., JAKIELA, M. & SMITH, R. E. (eds.) Proceedings of the 1999 Genetic and Evolutionary Computation Conference (GECCO-99), 13-17 July 1999, Orlando, Florida, USA: Morgan Kaufmann, 825-832.

BONABEAU, E., DORIGO, M. & THERAULAZ, G. 1999. Swarm intelligence: from natural to artificial systems, New York, USA, Oxford University Press.

CAI, G.-R., CHEN, S.-L., LI, S.-Z. & GUO, W.-Z. 2008. Study on the nonlinear strategy of inertia weight in particle swarm optimization. In: 2008 Fourth International Conference on Natural Computation (ICNC'08), 18-20 October 2008, Jinan, Shandong, China, 683-687.

CARVALHO, D. F. & BASTOS-FILHO, C. J. A. 2008. Clan particle swarm optimization. In: Proceedings of 2008 IEEE Congress on Evolutionary Computation (CEC'08), 1-6 June 2008, Hong Kong, 3044-3051.

CHATTERJEE, A. & SIARRY, P. 2006. Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization. Computers & Operations Research, 33 (3), 859-871.

CHEN, W.-N., ZHANG, J., LIN, Y., CHEN, N., ZHAN, Z.-H., CHUNG, H. S.-H., LI, Y. & SHI, Y.-H. 2013. Particle swarm optimization with an aging leader and challengers. IEEE Transactions on Evolutionary Computation, 17 (2), 241-258.

CHEN, Y.-P., PENG, W.-C. & JIAN, M.-C. 2007. Particle swarm optimization with recombination and dynamic linkage discovery. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 37 (6), 1460-1470.

CHENG, M.-Y., HUANG, K.-Y. & CHEN, H.-M. 2012. K-means particle swarm optimization with embedded chaotic search for solving multidimensional problems. Applied Mathematics and Computation, 219 (6), 3091-3099.

CHETTY, S. & ADEWUMI, A. 2013. Three new stochastic local search algorithms for continuous optimization problems. Computational Optimization and Applications, 56 (3), 675-721.

CHUANG, L.-Y., TSAI, S.-W. & YANG, C.-H. 2011. Chaotic catfish particle swarm optimization for solving global numerical optimization problems. Applied Mathematics and Computation, 217 (16), 6900-6916.

CLERC, M. & KENNEDY, J. 2002. The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6 (1), 58-73.

ČREPINŠEK, M., LIU, S.-H. & MERNIK, L. 2012. A note on teaching-learning-based optimization algorithm. Information Sciences, 212 (0), 79-93.

DAS, S. & SUGANTHAN, P. N. 2010. Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems. Technical Report, Nanyang Technological University, Singapore.

DAS, S. & SUGANTHAN, P. N. 2011. Differential evolution: a survey of the state-of-the-art. IEEE Transactions on Evolutionary Computation, 15 (1), 4-31.

DE JONG, K. A. 2006. Evolutionary computation: a unified approach, Cambridge, Massachusetts; London, England, MIT Press.

DEB, K., ANAND, A. & JOSHI, D. 2002. A computationally efficient evolutionary algorithm for real-parameter optimization. Evolutionary Computation, 10 (4), 371-395.

DEL VALLE, Y., VENAYAGAMOORTHY, G. K., MOHAGHEGHI, S., HERNANDEZ, J. C. & HARLEY, R. G. 2008. Particle swarm optimization: basic concepts, variants and applications in power systems. IEEE Transactions on Evolutionary Computation, 12 (2), 171-195.

DENG, J. 1989. Introduction to grey system theory. Journal of Grey System, 1 (1), 1-24.

DERRAC, J., GARCÍA, S., MOLINA, D. & HERRERA, F. 2011. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 1 (1), 3-18.

DORIGO, M. & BLUM, C. 2005. Ant colony optimization theory: a survey. Theoretical Computer Science, 344 (2-3), 243-278.

DORIGO, M., MANIEZZO, V. & COLORNI, A. 1996. Ant system: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 26 (1), 29-41.

DUKIC, M. L. & DOBROSAVLJEVIC, Z. S. 1990. A method of a spread-spectrum radar polyphase code design. IEEE Journal on Selected Areas in Communications, 8 (5), 743-749.

EBERHART, R. C. & SHI, Y. 2000. Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of 2000 IEEE Congress on Evolutionary Computation (CEC'00), 16-19 July 2000, San Diego, USA, 84-88.

EBERHART, R. C. & SHI, Y. 2001. Particle swarm optimization: developments, applications and resources. In: Proceedings of 2001 IEEE Congress on Evolutionary Computation (CEC'01), 27-30 May 2001, Seoul, South Korea, 81-86.

EPITROPAKIS, M. G., PLAGIANAKOS, V. P. & VRAHATIS, M. N. 2012. Evolving cognitive and social experience in particle swarm optimization through differential evolution: a hybrid approach. Information Sciences, 216, 50-92.

FENG, Q., LIU, S., TANG, G., YONG, L. & ZHANG, J. 2013. Biogeography-based optimization with orthogonal crossover. Mathematical Problems in Engineering, 2013 (0), 1-20.

FENG, Q., LIU, S., ZHANG, J. & YANG, G. 2012. Extrapolated particle swarm optimization based on orthogonal design. Journal of Convergence Information Technology, 7 (2), 141-152.

FERREIRA, C. 2001. Gene expression programming: a new adaptive algorithm for solving problems. Complex Systems, 13 (2), 87-129.

298

FERREIRA, C. 2004. Gene expression programming and the automatic evolution of computer programs. In: DE CASTRO, L. N. & VON ZUBEN, F. J. (eds.) Recent Developments in Biologically Inspired Computing. Idea Group Publishing. FONSECA, C. M. & FLEMING, P. J. 1995. An overview of evolutionary algorithms in multiobjective optimization. IEEE Transactions on Evolutionary Computation, 3 (1), 1-16. FU, Y., DING, M. & ZHOU, C. 2012. Phase angle-encoded and quantum-behaved particle swarm optimization applied to three-dimensional route planning for UAV. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 42 (2), 511-526. GAO, W.-F., LIU, S.-Y. & HUANG, L.-L. 2013. A novel artificial bee colony algorithm based on modified search equation and orthogonal learning. IEEE Transactions on Cybernetics, 43 (3), 1011-1024. GARCÍA, S., MOLINA, D., LOZANO, M. & HERRERA, F. 2009. A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: a case study on the CEC’2005 Special Session on Real Parameter Optimization. Journal of Heuristics, 15 (6), 617-644. GOLDBERG, D. E. & HOLLAND, J. H. 1988. Genetic algorithms and machine learning. Machine Learning, 3 (2), 95-99. GONG, W., CAI, Z., LING, C. X. & LI, H. 2010. A real-coded biogeography-based optimization with mutation. Applied Mathematics and Computation, 216 (9), 27492758. HAMILTON, J. & TEE, S. 2013. Blended teaching and learning: a two-way systems approach. Higher Education Research & Development, 32 (5), 748-764. HANSEN, N. & OSTERMEIER, A. 2001. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9 (2), 159-195. HE, S., WU, Q. H. & SAUNDERS, J. R. 2009. Group search optimizer: an optimization algorithm inspired by animal searching behavior. IEEE Transactions on Evolutionary Computation, 13 (5), 973-990. HEDAYAT, A. S. 1999. Orthogonal arrays: theory and applications, New York, SpringerVerlag. HO, S.-Y., LIN, H.-S., LIAUH, W.-H. 
& HO, S.-J. 2008. OPSO: orthogonal particle swarm optimization and its application to task assignment problems. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 38 (2), 288-298. HOPKINS, B. & SKELLAM, J. G. 1954. A new method for determining the type of distribution of plant Iindividuals. Annals of Botany, 18 (2), 213-227. HSIEH, S.-T., SUN, T.-Y., LIU, C.-C. & TSAI, S.-J. 2009. Efficient population utilization strategy for particle swarm optimizer. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 39 (2), 444-456. HU, M., WU, T. & WEIR, J. D. 2012. An intelligent augmentation of particle swarm optimization with multiple adaptive methods. Information Sciences, 213 (0), 68-83.

299

HU, M., WU, T. & WEIR, J. D. 2013. An adaptive particle swarm optimization with multiple adaptive methods. IEEE Transactions on Evolutionary Computation, 17 (5), 705-720. HUANG, C.-J., CHUANG, Y.-T. & HU, K.-W. 2009. Using particle swam optimization for QoS in ad-hoc multicast. Engineering Applications of Artificial Intelligence, 22 (8), 1188-1193. HUANG, H., QIN, H., HAO, Z. & LIM, A. 2012. Example-based learning particle swarm optimization for continuous optimization. Information Sciences, 182 (1), 125-138. JIN, X., LIANG, Y., TIAN, D. & ZHUANG, F. 2013. Particle swarm optimization using dimension selection methods. Applied Mathematics and Computation, 219 (10), 5185-5197. JUANG, C.-F. 2004. A hybrid of genetic algorithm and particle swarm optimization for recurrent network design. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 34 (2), 997-1006. JUANG, Y.-T., TUNG, S.-L. & CHIU, H.-C. 2011. Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions. Information Sciences, 181 (20), 4539-4549. KATHRADA, M. 2009. The flexi-PSO: towards a more flexible particle swarm optimizer. OPSEARCH, 46 (1), 52-68. KENNDY, J. 2000. Stereotyping: improving particle swarm performance with cluster analysis. In: Proceedings of 2000 IEEE Congress on Evolutionary Computaiton (CEC'00), 16-19 July 2000, La Jolla, California, USA, 1507 - 1512. KENNEDY, J. 1999. Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance. In: Proceedings of 1999 IEEE Congress on Evolutionary Computation (CEC'99), 6-9 July 1999, Washington, DC, USA, 19311938. KENNEDY, J. & EBERHART, R. 1995. Particle swarm optimization. In: Proceedings of 1995 IEEE International Conference on Neural Networks, 27 November-1 December 1995, Perth, Australia, 1942-1948. KENNEDY, J. & MENDES, R. 2002. Population structure and particle swarm performance. 
In: Proceedings of 2002 IEEE Congress on Evolutionary Computation (CEC '02), 12-17 May 2002, Honolulu, Hawaii, USA, 1671-1676. KIRANYAZ, S., INCE, T., YILDIRIM, A. & GABBOUJ, M. 2010. Fractional particle swarm optimization in multidimensional search space. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 40 (2), 298-319. KIRKPATRICK, S., GELATT, C. D. & VECCHI, M. P. 1983. Optimization by simulated annealing. Science, 220 (4598), 671-680. KO, C.-N., CHANG, Y.-P. & WU, C.-J. 2007. An orthogonal-array-based particle swarm optimizer with nonlinear time-varying evolution. Applied Mathematics and Computation, 191 (1), 272-279. KOZA, J. R. 1992. Genetic programming: on the programming of computers by means of natural selection, Cambridge, Massachusetts, USA, The MIT Press. 300

KRAUTH, W. 1996. Introduction to Monte Carlo algorithms. In: KONDOR, J. K. A. I. (ed.) Lecture Notes in Physics Springer Verlag. LAM, A. Y. S. & LI, V. O. K. 2010. Chemical-reaction-inspired metaheuristic for optimization. IEEE Transactions on Evolutionary Computation, 14 (3), 381-399. LAM, A. Y. S., LI, V. O. K. & YU, J. J. Q. 2012. Real-coded chemical reaction optimization. IEEE Transactions on Evolutionary Computation, 16 (3), 339-353. LEU, M.-S. & YEH, M.-F. 2012. Grey particle swarm optimization. Applied Soft Computing, 12 (9), 2985-2996. LI, C. 2010. Particle swarm optimization in stationary and dynamic environments. PhD Thesis, University of Leicester, England. LI, C., YANG, S. & NGUYEN, T. T. 2012. A self-learning particle swarm optimizer for global optimization problems. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 42 (3), 627-646. LIANG, J. J., QIN, A. K., SUGANTHAN, P. N. & BASKAR, S. 2006. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary Computation, 10 (3), 281-295. LIANG, J. J. & SUGANTHAN, P. N. 2005. Dynamic multi-swarm particle swarm optimizer. In: Proceedings of 2005 IEEE Swarm Intelligence Symposium (SIS'05), 8-10 June 2005, Pasadena, California, USA, 124-129. LIBERTI, L. 2008. Introduction to global optimization. Lecture of Ecole Polytechnique, Palaiseau F, 91128. LIN, C.-J., CHEN, C.-H. & LIN, C.-T. 2009. A hybrid of cooperative particle swarm optimization and cultural algorithm for neural fuzzy networks and its prediction applications. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 39 (1), 55-68. LIN, W.-M., GOW, H.-J. & TSAI, M.-T. 2011. Hybridizing particle swarm optimization with signal-to-noise ratio for numerical optimization. Expert Systems with Applications, 38 (11), 14086-14093. LIU, B., WANG, L. & JIN, Y.-H. 2007. An effective PSO-based memetic algorithm for flow shop scheduling. 
IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, 37 (1), 18-27. LIU, L., LIU, W. & CARTES, D. A. 2008. Particle swarm optimization-based parameter identification applied to permanent magnet synchronous motors. Engineering Applications of Artificial Intelligence, 21 (7), 1092-1100. MAJHI, R., PANDA, G., MAJHI, B. & SAHOO, G. 2009. Efficient prediction of stock market indices using adaptive bacterial foraging optimization (ABFO) and BFO based techniques. Expert Systems with Applications, 36 (6), 10097-10104. MARIANI, V. C., DUCK, A. R. K., GUERRA, F. A., COELHO, L. D. S. & RAO, R. V. 2012. A chaotic quantum-behaved particle swarm approach applied to optimization of heat exchangers. Applied Thermal Engineering, 42 (0), 119-128.

301

MARINAKIS, Y. & MARINAKI, M. 2013. A hybridized particle swarm optimization with expanding neighborhood topology for the feature selection problem. In: BLESA, M., BLUM, C., FESTA, P., ROLI, A. & SAMPELS, M. (eds.) Hybrid Metaheuristics. Springer Berlin Heidelberg. MELANIE, M. 1999. An introduction to genetic algorithms, Cambridge, Massachusetts, MIT Press. MENDES, R., KENNEDY, J. & NEVES, J. 2004. The fully informed particle swarm: simpler, maybe better. IEEE Transactions on Evolutionary Computation, 8 (3), 204210. MEZURA-MONTES, E. & COELLO, C. A. C. 2005. A simple multimembered evolution strategy to solve constrained optimization problems. IEEE Transactions on Evolutionary Computation, 9 (1), 1-17. MILGATE, G. C., PURDIE, N. & BELL, H. R. 2011. Two way teaching and learning : toward culturally reflective and relevant education, Melbourne, Australia, Australian Council for Educational Research (ACER) Press. MIRANDA, V. & FONSECA, N. 2002a. EPSO-best-of-two-worlds meta-heuristic applied to power system problems. In: Proceedings of 2002 IEEE Congress on Evolutionary Computation (CEC'02), 12-17 May 2002a, Honolulu, Hawaii, USA, 1080 - 1085. MIRANDA, V. & FONSECA, N. 2002b. EPSO-evolutionary particle swarm optimization, a new algorithm with applications in power systems. In: 2002 IEEE/PES Asia Pacific Transmission and Distribution Conference and Exhibition, 2002b, Yokohama, Japan, 745-750. MIRJALILI, S., MOHD HASHIM, S. Z. & MORADIAN SARDROUDI, H. 2012. Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm. Applied Mathematics and Computation, 218 (22), 11125-11137. MODARES, H., ALFI, A. & NAGHIBI SISTANI, M.-B. 2010. Parameter estimation of bilinear systems based on an adaptive particle swarm optimization. Engineering Applications of Artificial Intelligence, 23 (7), 1105-1111. MONSON, C. & SEPPI, K. 2004. The Kalman Swarm. In: DEB, K. (ed.) Genetic and Evolutionary Computation (GECCO 2004). 
Springer Berlin Heidelberg. MONTES DE OCA, M. A., STUTZLE, T., BIRATTARI, M. & DORIGO, M. 2009. Frankenstein's PSO: a composite particle swarm optimization algorithm. IEEE Transactions on Evolutionary Computation, 13 (5), 1120-1132. MONTES DE OCA, M. A., STUTZLE, T., VAN DEN ENDEN, K. & DORIGO, M. 2011. Incremental social learning in particle swarms. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 41 (2), 368-384. MONTGOMERY, D. C. 1991. Design and analysis of experiments, New York, Wiley. NASIR, M., DAS, S., MAITY, D., SENGUPTA, S., HALDER, U. & SUGANTHAN, P. N. 2012. A dynamic neighborhood learning based particle swarm optimizer for global numerical optimization. Information Sciences, 209 (0), 16-36.

302

NEDIC, A. 2002. Subgradient methods for convex minimization. PhD Thesis, Massachusetts Institue of Technology, USA. NERI, F. & COTTA, C. 2012. A Primer on Memetic Algorithms. In: NERI, F., COTTA, C. & MOSCATO, P. (eds.) Handbook of Memetic Algorithms. Springer Berlin Heidelberg. NEYESTANI, M., FARSANGI, M. M. & NEZAMABADI-POUR, H. 2010. A modified particle swarm optimization for economic dispatch with non-smooth cost functions. Engineering Applications of Artificial Intelligence, 23 (7), 1121-1126. NGUYEN, T. T., LI, Z., ZHANG, S. & TRUONG, T. K. 2014. A hybrid algorithm based on particle swarm and chemical reaction optimization. Expert Systems with Applications, 41 (5), 2134-2143. OCHS, P. W. & PEIRCE, C. S. 1993. Founders of constructive postmodern philosophy: Peirce, James, Bergson, Whitehead, and Hartshorne, State University of New York Press. ÖZBAKıR, L. & DELICE, Y. 2011. Exploring comprehensible classification rules from trained neural networks integrated with a time-varying binary particle swarm optimizer. Engineering Applications of Artificial Intelligence, 24 (3), 491-500. OZCAN, E. & MOHAN, C. K. 1999. Particle swarm optimization: surfing the waves. In: Proceedings of 1999 Congress on Evolutionary Computation (CEC'99), 6-9 July 1999, Washington, DC, USA, 1939 - 1944. PAGE, S. F., CHEN, S., HARRIS, C. J. & WHITE, N. M. 2012. Repeated weighted boosting search for discrete or mixed search space and multiple-objective optimisation. Applied Soft Computing, 12 (9), 2740-2755. PARSOPOULOS, K. E. & VRAHATIS, M. N. 2002. Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, 1 (2-3), 235-306. PARSOPOULOS, K. E. & VRAHATIS, M. N. 2004. A unified particle swarm optimization scheme. In: Proceedings of 2004 International Conference of Computational Methods in Sciences and Engineering (ICCMSE'04), 2004, Zeist, The Netherlands: VSP International Science Publishers, 874-879. PONTES, M. R., NETO, F. B. L. 
& BASTOS-FILHO, C. J. A. 2011. Adaptive clan particle swarm optimization. In: 2011 IEEE Symposium on Swarm Intelligence (SIS'11), 1115 April 2011, Paris, France, 1-6. RAO, R. V., SAVSANI, V. J. & VAKHARIA, D. P. 2011. Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Computer-Aided Design, 43 (3), 303-315. RAO, R. V., SAVSANI, V. J. & VAKHARIA, D. P. 2012. Teaching–learning-based optimization: an optimization method for continuous non-linear large scale problems. Information Sciences, 183 (1), 1-15. RATNAWEERA, A., HALGAMUGE, S. K. & WATSON, H. C. 2004. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on Evolutionary Computation, 8 (3), 240-255.

303

SAKTHIVEL, V. P., BHUVANESWARI, R. & SUBRAMANIAN, S. 2010. Multi-objective parameter estimation of induction motor using particle swarm optimization. Engineering Applications of Artificial Intelligence, 23 (3), 302-312. SALOMON, R. 1996. Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions. Biosystems, 39 (3), 263-278. SANDGREN, E. 1990. Nonlinear integer and discrete programming in mechanical design optimization. Journal of Mechanical Design, 112 (2), 223-229. SARATH, K. N. V. D. & RAVI, V. 2013. Association rule mining using binary particle swarm optimization. Engineering Applications of Artificial Intelligence, 26 (8), 1832–1840. SATAPATHY, S., NAIK, A. & PARVATHI, K. 2013. A teaching learning based optimization based on orthogonal design for solving global optimization problems. SpringerPlus, 2 (1), 130. SHARMA, K. D., CHATTERJEE, A. & RAKSHIT, A. 2009. A hybrid approach for design of stable adaptive fuzzy controllers employing Lyapunov theory and particle swarm optimization. IEEE Transactions on Fuzzy Systems, 17 (2), 329-342. SHI, X. H., LU, Y. H., ZHOU, C. G. & LEE, H. P. 2003. Hybrid evolutionary algorithms based on PSO and GA. In: Proceedings of 2003 IEEE Congress on Evolutionary Computation (CEC'03), 8-12 December 2003, Canberra, Australia, 2393-2399. SHI, Y. & EBERHART, R. 1998. A modified particle swarm optimizer. In: Proceedings of 1998 IEEE World Congress on Computational Intelligence, 4-9 May 1998, Anchorage, Alaska, USA, 69-73. SHI, Y. & EBERHART, R. C. 1999. Empirical study of particle swarm optimization. In: Proceedings of 1999 IEEE Congress on Evolutionary Computation (CEC'99), 3-9 July 1999, Washington, DC, USA, 1945-1950. SHI, Y. & EBERHART, R. C. 2001. Fuzzy adaptive particle swarm optimization. In: Proceedings of 2001 IEEE Congress on Evolutionary Computation (CEC'01), 27-30 May 2001, Seoul, South Korea, 101-106. SHIH, T.-F. 2006. 
Particle swarm optimization algorithm for energy-efficient cluster-based sensor networks. IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences E89-A (7), 1950-1958. SHIN, S.-Y., LEE, I.-H., KIM, D. & ZHANG, B.-T. 2005. Multiobjective evolutionary optimization of DNA sequences for reliable DNA computing. IEEE Transactions on Evolutionary Computation, 9 (2), 143 - 158. SIMON, D. 2008. Biogeography-based optimization. IEEE Transactions on Evolutionary Computation, 12 (6), 702-713. SINGH, P. & BORAH, B. 2014. Forecasting stock index price based on M-factors fuzzy time series and particle swarm optimization. International Journal of Approximate Reasoning, 55 (3), 812-833. SONG, Y., CHEN, Z. & YUAN, Z. 2007. New chaotic PSO-based neural network predictive control for nonlinear process. IEEE Transactions on Neural Networks, 18 (2), 595601. 304

SUGANTHAN, P. N. 1999. Particle swarm optimiser with neighbourhood operator. In: Proceedings of 1999 IEEE Congress on Evolutionary Computation (CEC'99), 6-9 July 1999, Washington, DC, USA, 1958-1962. SUGANTHAN, P. N., HANSEN, N., LIANG, J. J., DEB, K., CHEN, Y. P., AUGER, A. & TIWARI, S. 2005. Problem definitions and evaluation criteria for the CEC 2005 special session on real parameter optimization. Technical Report, Nanyang Technological University , Singapore. SUN, J., CHEN, W., FANG, W., WUN, X. & XU, W. 2012. Gene expression data analysis with the clustering method based on an improved quantum-behaved Particle Swarm Optimization. Engineering Applications of Artificial Intelligence, 25 (2), 376-391. SUN, J., FANG, W., WU, X., XIE, Z. & XU, W. 2011. QoS multicast routing using a quantum-behaved particle swarm optimization algorithm. Engineering Applications of Artificial Intelligence, 24 (1), 123-131. SUN, J., FENG, B. & XU, W. 2004a. Particle swarm optimization with particles having quantum behavior. In: Proceedings of 2004 IEEE Congress on Evolutionary Computation (CEC'04) 19-23 June 2004a, Portland, Oregon, USA, 325-331 Vol.1. SUN, J., XU, W. & FENG, B. 2004b. A global search strategy of quantum-behaved particle swarm optimization. In: 2004 IEEE Conference on Cybernetics and Intelligent Systems (CIS'04), 1-3 December 2004b, Singapore, 111-116. TANG, Y., WANG, Z. & FANG, J.-A. 2011. Feedback learning particle swarm optimization. Applied Soft Computing, 11 (8), 4713-4725. VAN DEN BERGH, F. 2002. An analysis of particle swarm optimizers. PhD Thesis, University of Pretoria, South Africa. VAN DEN BERGH, F. & ENGELBRECHT, A. P. 2004. A cooperative approach to particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8 (3), 225239. VAN DEN BERGH, F. & ENGELBRECHT, A. P. 2006. A study of particle swarm optimization particle trajectories. Information Sciences, 176 (8), 937-971. WANG, C., LIU, Y. & ZHAO, Y. 2013a. 
Application of dynamic neighborhood small population particle swarm optimization for reconfiguration of shipboard power system. Engineering Applications of Artificial Intelligence, 26 (4), 1255-1262. WANG, H., SUN, H., LI, C., RAHNAMAYAN, S. & PAN, J.-S. 2013b. Diversity enhanced particle swarm optimization with neighborhood search. Information Sciences, 223 (0), 119-135. WANG, S. & CHEN, L. 2009. A PSO algorithm based on orthogonal test design. In: 2009 Fifth International Conference on Natural Computation (ICNC'09), 14-16 August 2009, Tianjin, China, 190 - 194. WANG, Y., CAI, Z. & ZHANG, Q. 2012. Enhancing the search ability of differential evolution through orthogonal crossover. Information Sciences, 185 (1), 153-177. WANG, Y., LI, B., WEISE, T., WANG, J., YUAN, B. & TIAN, Q. 2011. Self-adaptive learning based particle swarm optimization. Information Sciences, 181 (20), 45154538. 305

WANG, Z., SUN, X. & ZHANG, D. 2007. A PSO-based classification rule mining algorithm. In: HUANG, D.-S., HEUTTE, L. & LOOG, M. (eds.) Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence. Springer Berlin Heidelberg. WEISE, T. 2008. Global optimization algorithms - theory and application. Available: http://www.it-weise.de/ [Accessed 18 March 2014]. WHITLEY, D., RANA, S., DZUBERA, J. & MATHIAS, K. E. 1996. Evaluating evolutionary algorithms. Artificial Intelligence, 85 (1–2), 245-276. WU, X., CHENG, B., CAO, J. & CAO, B. 2008. Particle swarm optimization with normal cloud mutation. In: Proceedings of the 7th World Congress on Intelligent Control and Automation, 25-27 June 2008, Chongqing, China, 2828-2832. WU, Y. 2009. Parallel hybrid evolutionary algorithm based on chaos-GA-PSO for SPICE model parameter extraction. In: 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS2009), 20-22 November 2009, Shanghai, China, 688-692. XIE, X.-F., ZHANG, W.-J. & YANG, Z.-L. 2002. Dissipative particle swarm optimization. In: Proceedings of 2002 IEEE Congress on Evolutionary Computation (CEC'02) 1217 May 2002, Honolulu, Hawaii, USA. XIN, B., CHEN, J., ZHANG, J., FANG, H. & PENG, Z.-H. 2012. Hybridizing differential evolution and particle swarm optimization to design powerful optimizers: a review and taxonomy. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 42 (5), 744-767. XU, G. 2013. An adaptive parameter tuning of particle swarm optimization algorithm. Applied Mathematics and Computation, 219 (9), 4560-4569. YAGHINI, M., KHOSHRAFTAR, M. M. & FALLAHI, M. 2013. A hybrid algorithm for artificial neural network training. Engineering Applications of Artificial Intelligence, 26 (1), 293-301. YAN, X., WU, Q. & LIU, H. 2013. Orthogonal particle swarm optimization algorithm and its application in circuit design. 
TELKOMNIKA Indonesian Journal of Electrical Engineering, 11 (6), 2926-2932. YANG, C.-H., TSAI, S.-W., CHUANG, L.-Y. & YANG, C.-H. 2012. An improved particle swarm optimization with double-bottom chaotic maps for numerical optimization. Applied Mathematics and Computation, 219 (1), 260-279. YANG, F., SUN, T. & ZHANG, C. 2009. An efficient hybrid data clustering method based on K-harmonic means and particle swarm optimization. Expert Systems with Applications, 36 (6), 9847-9852. YANG, J., BOUZERDOUM, A. & PHUNG, S. L. 2010. A particle swarm optimization algorithm based on orthogonal design. In: Proceedings of 2010 IEEE Congress on Evolutionary Computation (CEC'10), 18-23 July 2010, Barcelona, Spain, 1-7. YAO, X., LIU, Y. & LIN, G. 1999. Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation, 3 (2), 82-102.

306

YU, L., CHEN, H., WANG, S. & LAI, K. K. 2009. Evolving least squares support vector machines for stock market trend mining. IEEE Transactions on Evolutionary Computation, 13 (1), 87 - 102. ZHAN, Z.-H., ZHANG, J., LI, Y. & CHUNG, H. S. H. 2009. Adaptive particle swarm optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 39 (6), 1362-1381. ZHAN, Z.-H., ZHANG, J., LI, Y. & SHI, Y.-H. 2011. Orthogonal learning particle swarm optimization. IEEE Transactions on Evolutionary Computation, 15 (6), 832-847. ZHANG, J.-H., WU, L.-Y., ZHAO, Y.-Y. & ZHANG, X.-S. 2007. An optimization approach to the reconstruction of positional DNA sequencing by hybridization with errors. European Journal of Operational Research, 182 (1), 413-427. ZHANG, J., WANG, J. & YUE, C. 2012. Small population-based particle swarm optimization for short-term hydrothermal scheduling. IEEE Transactions on Power Systems, 27 (1), 142 - 152. ZHAO, B., GUO, C. X., BAI, B. R. & CAO, Y. J. 2006. An improved particle swarm optimization algorithm for unit commitment. International Journal of Electrical Power & Energy Systems, 28 (7), 482-490. ZHAO, S.-Z., SUGANTHAN, P. N. & DAS, S. 2010. Dynamic multi-swarm particle swarm optimizer with sub-regional harmony search. In: Proceedings of 2010 IEEE Congress on Evolutionary Computation (CEC'10), 18-23 July 2010, Barcelona, Spain, 1-8. ZHOU, D., GAO, X., LIU, G., MEI, C., JIANG, D. & LIU, Y. 2011. Randomization in particle swarm optimization for global search ability. Expert Systems with Applications, 38 (12), 15356-15364. ZHU, W., TANG, Y., FANG, J.-A. & ZHANG, W. 2013. Adaptive population tuning scheme for differential evolution. Information Sciences, 223 (0), 164-191.

307

APPENDICES


APPENDIX A Parameter Sensitivity Analyses for the Compared PSO Variants


Table A1 Effects of the acceleration rate δ on APSO

| Function | Fixed at 0.01 | Fixed at 0.05 | Fixed at 0.1 | Random (0.01, 0.05) | Random (0.05, 0.1) | Random (0.01, 0.1) |
|----------|---------------|---------------|--------------|---------------------|--------------------|--------------------|
| F1  | 1.81E-01 | 2.56E-01 | 2.72E-01 | 3.02E-01 | 2.50E-01 | 3.12E-01 |
| F4  | 1.07E+00 | 1.03E+00 | 1.06E+00 | 7.03E-01 | 5.81E-01 | 6.72E-02 |
| F11 | 2.18E+02 | 2.15E+02 | 2.12E+02 | 1.90E+02 | 1.83E+02 | 1.86E+02 |
| F14 | 6.97E+01 | 6.81E+01 | 6.86E+01 | 6.72E+01 | 6.32E+01 | 6.44E+01 |
| F19 | 1.01E+00 | 1.04E+00 | 1.21E+00 | 4.61E-01 | 5.92E-01 | 7.37E-01 |
| F21 | 1.10E-01 | 9.33E-02 | 7.16E-02 | 6.67E-02 | 6.04E-02 | 6.44E-02 |
| F23 | 1.79E+00 | 1.94E+00 | 1.68E+00 | 1.51E+00 | 1.49E+00 | 1.52E+00 |
| F26 | 6.06E+00 | 6.45E+00 | 6.29E+00 | 5.05E+00 | 4.13E+00 | 4.52E+00 |
| F28 | 5.92E+02 | 6.06E+02 | 5.76E+02 | 5.28E+02 | 5.13E+02 | 4.93E+02 |
| F30 | 1.21E+03 | 1.18E+03 | 1.17E+03 | 1.09E+03 | 1.08E+03 | 1.11E+03 |

Table A2 Effects of the elitist learning rate σ on APSO

| Function | Fixed at 0.1 | Fixed at 0.5 | Fixed at 1.0 | From 1.0 to 0.5 | From 0.5 to 0.1 | From 1.0 to 0.1 |
|----------|--------------|--------------|--------------|-----------------|-----------------|-----------------|
| F1  | 3.57E-01 | 3.65E-01 | 3.66E-01 | 2.81E-01 | 2.72E-01 | 2.50E-01 |
| F4  | 1.15E+00 | 1.17E+00 | 1.07E+00 | 6.72E+00 | 6.79E+00 | 5.81E-01 |
| F11 | 2.74E+02 | 2.86E+02 | 2.38E+02 | 1.90E+02 | 1.89E+02 | 1.83E+02 |
| F14 | 6.56E+01 | 6.80E+01 | 6.56E+01 | 6.27E+01 | 6.41E+01 | 6.32E+01 |
| F19 | 1.09E+00 | 1.06E+00 | 1.15E+00 | 6.31E-01 | 4.88E-01 | 5.92E-01 |
| F21 | 1.00E-01 | 9.16E-02 | 7.27E-02 | 6.45E-02 | 6.11E-02 | 6.04E-02 |
| F23 | 1.61E+00 | 1.56E+00 | 1.65E+00 | 1.52E+00 | 1.49E+00 | 1.49E+00 |
| F26 | 6.22E+00 | 5.16E+00 | 5.65E+00 | 4.77E+00 | 4.09E+00 | 4.13E+00 |
| F28 | 6.87E+02 | 6.37E+02 | 6.12E+02 | 5.28E+02 | 5.35E+02 | 5.13E+02 |
| F30 | 1.22E+03 | 1.18E+03 | 1.14E+03 | 1.09E+03 | 1.09E+03 | 1.08E+03 |

Table A3 Effects of the refreshing gap m on CLPSO

| m  | F1 | F4 | F11 | F14 | F19 | F21 | F23 | F26 | F28 | F30 |
|----|----|----|-----|-----|-----|-----|-----|-----|-----|-----|
| 0  | 6.641E-49 | 1.068E+02 | 3.631E+02 | 6.146E+01 | 8.180E+01 | 1.536E-07 | 1.957E-01 | 2.528E+01 | 4.311E+02 | 9.523E+02 |
| 1  | 2.763E-48 | 1.039E+02 | 3.740E+02 | 6.008E+01 | 7.655E+01 | 1.594E-08 | 1.721E-01 | 2.197E+01 | 3.960E+02 | 9.402E+02 |
| 2  | 4.453E-48 | 1.053E+02 | 3.575E+02 | 5.891E+01 | 7.492E+01 | 1.547E-08 | 1.430E-01 | 2.258E+01 | 3.837E+02 | 9.410E+02 |
| 3  | 6.514E-48 | 1.015E+02 | 3.554E+02 | 5.817E+01 | 7.23E+01  | 2.767E-08 | 7.067E-02 | 2.145E+01 | 3.667E+02 | 9.416E+02 |
| 4  | 9.315E-48 | 9.742E+01 | 3.465E+02 | 5.829E+01 | 7.29E+01  | 2.917E-08 | 9.241E-02 | 2.151E+01 | 3.278E+02 | 9.390E+02 |
| 5  | 1.119E-47 | 9.263E+01 | 3.504E+02 | 5.781E+01 | 7.17E+01  | 3.867E-08 | 4.232E-02 | 2.093E+01 | 3.272E+02 | 9.340E+02 |
| 6  | 2.459E-47 | 9.353E+01 | 3.421E+02 | 5.752E+01 | 7.065E+01 | 4.334E-08 | 3.176E-02 | 2.065E+01 | 3.042E+02 | 9.353E+02 |
| 7  | 3.287E-47 | 9.099E+01 | 3.327E+02 | 5.646E+01 | 6.994E+01 | 4.515E-08 | 2.951E-02 | 2.031E+01 | 2.91E+02  | 9.344E+02 |
| 8  | 3.299E-47 | 9.122E+01 | 3.457E+02 | 5.733E+01 | 7.189E+01 | 4.888E-08 | 3.028E-02 | 2.078E+01 | 3.181E+02 | 9.378E+02 |
| 9  | 4.328E-47 | 9.308E+01 | 3.627E+02 | 5.813E+01 | 7.973E+01 | 4.871E-08 | 3.212E-02 | 2.066E+01 | 3.314E+02 | 9.399E+02 |
| 10 | 7.085E-46 | 1.056E+02 | 3.869E+02 | 6.035E+01 | 7.920E+01 | 4.851E-07 | 3.176E-02 | 2.104E+01 | 3.571E+02 | 9.454E+02 |

Table A4 Effects of the inertia weight ω2 on FLPSO-QIW

| Function | 0.4 (Linear) | 0.2 (Linear) | 0.3 (Quadratic) | 0.2 (Quadratic) | 0.1 (Quadratic) |
|----------|--------------|--------------|-----------------|-----------------|-----------------|
| F1  | 2.43E-78 | 1.06E-78 | 3.82E-81 | 2.89E-81 | 3.52E-81 |
| F4  | 6.97E+00 | 3.98E+00 | 2.99E+00 | 2.60E+00 | 2.99E+00 |
| F11 | 1.61E+02 | 1.37E+02 | 1.39E+02 | 1.23E+02 | 1.25E+02 |
| F14 | 5.12E+01 | 5.23E+01 | 4.97E+01 | 4.86E+01 | 4.99E+01 |
| F19 | 1.39E+01 | 1.30E+01 | 1.20E+01 | 1.19E+01 | 1.12E+01 |
| F21 | 4.63E-10 | 4.06E-11 | 3.89E-13 | 2.19E-13 | 3.05E-13 |
| F23 | 2.21E-02 | 2.95E-02 | 9.96E-03 | 6.02E-03 | 7.45E-03 |
| F26 | 7.40E+00 | 5.29E+00 | 3.91E+00 | 4.01E+00 | 4.38E+00 |
| F28 | 2.56E+02 | 4.19E+02 | 2.24E+02 | 1.79E+02 | 1.89E+02 |
| F30 | 9.50E+02 | 9.38E+02 | 9.41E+02 | 9.34E+02 | 9.36E+02 |

Table A5 Effects of the acceleration coefficients ĉ1 and ĉ2 on FLPSO-QIW

| Function | ĉ1 = 2.25, ĉ2 = 0.75 | ĉ1 = 2.00, ĉ2 = 1.00 | ĉ1 = 1.75, ĉ2 = 1.25 | ĉ1 = 1.50, ĉ2 = 1.50 |
|----------|----------------------|----------------------|----------------------|----------------------|
| F1  | 9.42E-82 | 2.89E-81 | 2.93E-81 | 1.93E-80 |
| F4  | 2.79E+00 | 2.60E+00 | 2.99E+00 | 2.97E+00 |
| F11 | 1.22E+02 | 1.23E+02 | 1.32E+02 | 1.29E+01 |
| F14 | 4.99E+01 | 4.86E+01 | 5.25E+01 | 5.46E+01 |
| F19 | 1.30E+01 | 1.19E+01 | 1.40E+01 | 1.39E+01 |
| F21 | 3.04E-13 | 2.19E-13 | 3.12E-13 | 3.29E-13 |
| F23 | 7.08E-03 | 6.02E-03 | 8.42E-03 | 9.86E-03 |
| F26 | 4.38E+00 | 4.01E+00 | 3.61E+00 | 4.99E+00 |
| F28 | 1.87E+02 | 1.79E+02 | 2.43E+02 | 2.37E+02 |
| F30 | 9.35E+02 | 9.34E+02 | 9.45E+02 | 9.54E+02 |

Table A6 Effects of the reconstruction gap G on OLPSO-L

| G  | F1 | F4 | F11 | F14 | F19 | F21 | F23 | F26 | F28 | F30 |
|----|----|----|-----|-----|-----|-----|-----|-----|-----|-----|
| 0  | 1.96E-32 | 6.45E-01 | 3.51E+02 | 5.55E+01 | 6.00E+00 | 1.19E-12 | 1.74E-01 | 5.08E+00 | 4.31E+02 | 9.65E+02 |
| 1  | 1.80E-32 | 6.67E-01 | 1.12E+02 | 5.08E+01 | 7.00E+00 | 3.19E-13 | 1.44E-01 | 3.59E+00 | 4.03E+02 | 9.64E+02 |
| 2  | 1.15E-32 | 5.47E-01 | 1.18E+02 | 4.97E+01 | 6.00E+00 | 8.08E-14 | 5.56E-02 | 3.68E+00 | 2.85E+02 | 9.59E+02 |
| 3  | 8.13E-33 | 5.42E-01 | 1.04E+02 | 4.63E+01 | 5.00E+00 | 8.09E-14 | 4.09E-02 | 3.17E+00 | 2.21E+02 | 9.57E+02 |
| 4  | 7.86E-33 | 5.86E-01 | 9.87E+01 | 4.66E+01 | 3.00E+00 | 8.06E-14 | 3.78E-02 | 3.00E+00 | 2.08E+02 | 9.56E+02 |
| 5  | 4.86E-33 | 3.32E-01 | 9.80E+01 | 4.58E+01 | 3.00E+00 | 8.05E-14 | 3.76E-02 | 2.98E+00 | 2.19E+02 | 9.54E+02 |
| 6  | 6.85E-33 | 6.78E-01 | 9.55E+01 | 4.64E+01 | 3.00E+00 | 8.06E-14 | 5.64E-02 | 3.26E+00 | 2.60E+02 | 9.58E+02 |
| 7  | 7.32E-33 | 9.95E-01 | 8.32E+01 | 4.77E+01 | 5.00E+00 | 8.06E-14 | 6.34E-02 | 3.03E+00 | 2.48E+02 | 9.60E+02 |
| 8  | 7.39E-33 | 9.95E-01 | 9.15E+01 | 4.77E+01 | 5.00E+00 | 8.08E-14 | 6.65E-02 | 3.43E+00 | 2.86E+02 | 9.63E+02 |
| 9  | 7.76E-33 | 1.99E+00 | 1.21E+02 | 4.92E+01 | 4.00E+00 | 8.08E-14 | 5.77E-02 | 3.43E+00 | 3.07E+02 | 9.65E+02 |
| 10 | 8.12E-33 | 1.99E+00 | 1.25E+02 | 4.81E+01 | 4.00E+00 | 8.12E-14 | 7.29E-02 | 3.52E+00 | 4.14E+02 | 9.72E+02 |

APPENDIX B Case Study to Investigate the Capability of the Proposed Orthogonal Experiment Design-Based Learning Strategy (OEDLS)


Case Study

To illustrate the mechanism of the proposed orthogonal experiment design (OED)-based learning strategy (OEDLS), a three-dimensional (3-D) Sphere function f(X) = x1^2 + x2^2 + x3^2, with the global minimum at [0, 0, 0], is considered. Suppose that the existing global best particle in the population has the position vector Pg^old = [0, 5, 0], with the associated objective function value ObjV(Pg^old) = 25. Meanwhile, particle i with improved self-cognitive experience has an updated personal best position Pi = [3, 3, 5], with the associated objective function value ObjV(Pi) = 43.

Considering that the OEDLS in this case study deals with a 3-D search space (i.e., N = 3) and each dimensional component can be contributed by one of two particles (i.e., Q = 2), the L4(2^3) OA obtained from Equation (2.9) is sufficient to derive the predictive solution Xp. As illustrated in Equation (2.9), the L4(2^3) OA consists of four rows (i.e., M = 4), implying that a total of four test case combinations are generated. The entries 1 and 2 in each column of the L4(2^3) OA denote the levels of each factor. For example, the second row of the L4(2^3) OA is (1, 2, 2), meaning that in this test case the first factor (i.e., d = 1) is contributed by the first dimensional component of Pg^old (i.e., Pg,1^old), whereas the second (i.e., d = 2) and third (i.e., d = 3) factors are contributed by the second and third dimensional components of Pi (i.e., Pi,2 and Pi,3), respectively. Table B1 presents the four test case combinations generated by the L4(2^3) OA (i.e., C1 to C4) and their associated objective function values, f1 to f4, computed by substituting the position vector of each test case combination into the objective function f(X) = x1^2 + x2^2 + x3^2.
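The generation of the four test cases and their objective values can be reproduced with a short Python sketch (the array layout is the standard L4(2^3) OA; variable names are illustrative and not from the thesis):

```python
# Standard L4(2^3) orthogonal array: 4 test cases x 3 factors, levels 1 and 2.
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

def sphere(x):
    """3-D Sphere function f(X) = x1^2 + x2^2 + x3^2."""
    return sum(v * v for v in x)

pg_old = [0, 5, 0]  # existing global best position, ObjV = 25
p_i = [3, 3, 5]     # improved personal best of particle i, ObjV = 43

# Level 1 draws the d-th component from pg_old; level 2 draws it from p_i.
cases = [[(pg_old if lvl == 1 else p_i)[d] for d, lvl in enumerate(row)]
         for row in L4]
f_vals = [sphere(c) for c in cases]

print(cases)   # [[0, 5, 0], [0, 3, 5], [3, 5, 5], [3, 3, 0]]
print(f_vals)  # [25, 34, 59, 18]
```

The four objective values correspond to f1 to f4 of Table B1.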

The FA is subsequently performed to identify the best combination of test case in the

L4 (2 3 ) OA. Specifically, the main effect of each factor j ( 1  j  N ) with level k ( 1  k  Q ), i.e., Sjk, is computed using Equation (2.10). For example, to calculate the effect

313

Table B1 Deciding the best combination levels of Pgold and improved Pi using the OEDLS Combinations C1 C2 C3 C4 Levels 1 2 OED Results

d=1 0 (1) 0 (1) 3 (2) 3 (2) S11 = (f1 + f2)/2 = 29.5 S12 = (f3 + f4)/2 = 38.5 1

d=2 d=3 fj 5 (1) 0 (1) f1 = 25 3 (2) 5 (2) f2 = 34 5 (1) 5 (2) f3 = 59 3 (2) 0 (1) f4 = 18 Factor Analysis S21 = (f1+f3)/2 = 42 S31 = (f1+f4)/2 = 21.5 S22 = (f2+f4)/2 = 26 S32 = (f2+f3)/2 = 46.5 2 1

of Pg,1old (i.e., level 1) on third dimensional component (i.e., d = 3), i.e., S31, only the experimental results of C1 and C4 (i.e., f1 = 25 and f4 = 18) are considered in Equation (2.10). This is because only these two test cases consist of the third dimensional component of Pgold (i.e., Pg,3old = 0) at d = 3. According to Equation (2.10), the sum of f1 and f4 is subsequently divided by the zmnq (i.e., 2 in this case) to yield Sjk (i.e., S31 = 21.5 in this case). The FA results of this case study are summarized in Table B1. Since the main objective of this case study is to determine the best level combinations of factors that produce the minimum value of f ( X )  x12  x22  x32 , it can be considered as a minimization problem. In other words, for each factor, the factor level that give the smaller value of Sjk has more significant effect and thus is more desirable. For example, Table B1 shows that at the factor of d = 2, the second dimensional component of Pi (i.e., Pi,2 = 3 with level 2) is identified to have more significant effect than the second dimensional component of Pgold (i.e., Pg,2old = 1 with level 1) because of S22 < S21. From Table B1, it can be concluded that the best levels for factors d = 1, d = 2, and d = 3 are the levels of 1, 2, and 1, respectively. In other words, the predictive solution generated in this case study has the position vector of Xp = [Pg,1old, Pi,2, Pg,3old] = [0, 3, 0], with the associated objective function values of OjbV(Xi) = 9. By inspecting the predictive solution Xn, it can be observed that the first and third components of Xn are contributed by the existing global best particle, whereas the second component of Xn is contributed by particle i with improved self-cognitive experience. In


other words, the OEDLS successfully establishes a bidirectional learning mechanism between the improved particle i and the global best particle, considering that it is able to extract useful information from Pi (i.e., Pi,2 = 3) and utilize it to further improve Pgold. Since the objective function value of the predictive solution Xn [i.e., ObjV(Xn) = 9] is lower than that of the existing global best particle Pgold [i.e., ObjV(Pgold) = 25], the former replaces the latter to become the new global best particle Pgnew in the population.
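The factor-analysis steps of this worked example can be sketched in a few lines of Python. This is a minimal illustration only: the variable names (pg_old, pi, oa, source) are not from the thesis, and the division by 2 reflects the two test cases per level in the L4(2^3) array.

```python
# Factor analysis (FA) step of the worked OEDLS example. All numbers come
# from the case study; variable names are illustrative, not from the thesis.

def objv(x):
    """Objective function f(X) = x1^2 + x2^2 + x3^2 (minimization problem)."""
    return sum(v * v for v in x)

pg_old = [0, 5, 0]   # existing global best Pg_old (supplies level 1 of each factor)
pi     = [3, 3, 5]   # improved particle i, Pi     (supplies level 2 of each factor)
source = {1: pg_old, 2: pi}

# L4(2^3) orthogonal array: rows are test cases C1..C4, entries are factor levels
oa = [[1, 1, 1],
      [1, 2, 2],
      [2, 1, 2],
      [2, 2, 1]]

# Construct and evaluate the four test combinations
f = [objv([source[lvl][d] for d, lvl in enumerate(row)]) for row in oa]
# f == [25, 34, 59, 18]

# S[d][k-1]: average objective over the two test cases where factor d is at level k
S = [[sum(f[c] for c in range(4) if oa[c][d] == k) / 2 for k in (1, 2)]
     for d in range(3)]

# For a minimization problem, the level with the smaller S value is more desirable
best_levels = [1 if S[d][0] <= S[d][1] else 2 for d in range(3)]   # [1, 2, 1]
xn = [source[lvl][d] for d, lvl in enumerate(best_levels)]         # Xn = [0, 3, 0]
print(best_levels, xn, objv(xn))    # [1, 2, 1] [0, 3, 0] 9
```

Running the sketch reproduces the values in Table B1 (S31 = 21.5, S22 = 26, etc.) and yields the predictive solution Xn = [0, 3, 0] with ObjV(Xn) = 9, which then replaces Pgold as described above.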


LIST OF PUBLICATIONS

Journals:

1. LIM, W. H. & MAT ISA, N. A. Particle swarm optimization with dual-level task allocation. Accepted, to appear in Engineering Applications of Artificial Intelligence. (Impact Factor: 1.962)
2. LIM, W. H. & MAT ISA, N. A. Particle swarm optimization with adaptive time-varying topology connectivity. Applied Soft Computing, 24, 623-642. (Impact Factor: 2.679)
3. LIM, W. H. & MAT ISA, N. A. 2014. Bidirectional teaching and peer-learning particle swarm optimization. Information Sciences, 280, 111-134. (Impact Factor: 3.893)
4. LIM, W. H. & MAT ISA, N. A. 2014. An adaptive two-layer particle swarm optimization with elitist learning strategy. Information Sciences, 273, 49-72. (Impact Factor: 3.893)
5. LIM, W. H. & MAT ISA, N. A. 2014. Teaching and peer-learning particle swarm optimization. Applied Soft Computing, 18, 39-58. (Impact Factor: 2.679)
6. LIM, W. H. & MAT ISA, N. A. 2014. Particle swarm optimization with increasing topology connectivity. Engineering Applications of Artificial Intelligence, 27, 80-102. (Impact Factor: 1.962)
7. LIM, W. H. & MAT ISA, N. A. 2013. Two-layer particle swarm optimization with intelligent division of labor. Engineering Applications of Artificial Intelligence, 26 (10), 2327-2348. (Impact Factor: 1.962)
8. TAN, K. S., LIM, W. H. & MAT ISA, N. A. 2013. Novel initialization scheme for Fuzzy C-Means algorithm on color image segmentation. Applied Soft Computing, 13 (4), 1832-1852. (Impact Factor: 2.679)
9. TAN, K. S., MAT ISA, N. A. & LIM, W. H. 2013. Color image segmentation using adaptive unsupervised clustering approach. Applied Soft Computing, 13 (4), 2017-2036. (Impact Factor: 2.679)
10. LIM, W. H. & MAT ISA, N. A. 2012. A novel adaptive color to grayscale conversion algorithm for digital images. Scientific Research and Essays, 7 (30), 2718-2730. (Impact Factor: 0.380)


Conference Proceedings/Colloquium:

1. LIM, W. H. & MAT ISA, N. A. 2014. Illumination estimation based color to grayscale conversion algorithms. In: 2014 International Conference on Electrical Engineering, Computer Science and Informatics (EECSI 2014), 20-21 August 2014, Yogyakarta, Indonesia.
2. LIM, W. H. & MAT ISA, N. A. 2013. Particle swarm optimization with time-varying topology connectivity. In: 4th Postgraduate Colloquium of School of Electrical and Electronics Engineering USM, 18-20 August, Pangkor Island, Perak, Malaysia.

Under Review Journals:

1. LIM, W. H. & MAT ISA, N. A. Adaptive division of labor particle swarm optimization. Expert Systems with Applications (Under review, January 2014).
2. LIM, W. H. & MAT ISA, N. A. A self-adaptive topology connectivity-based particle swarm optimization. Engineering Applications of Artificial Intelligence (Under review, June 2014).
3. LIM, W. H. & MAT ISA, N. A. Particle swarm optimization with modified self-cognitive learning. Turkish Journal of Electrical Engineering & Computer Sciences (Under review, August 2014).


LIST OF RESEARCH GRANT

Lim Wei Hong, "Development of PSO Algorithm with Multi-Learning Frameworks for Application in Image Segmentation", Skim Penyelidikan Siswazah Universiti Penyelidikan (USM-RU-PRGS) – 1001/PELECT/8046018, RM6,700.00, 15 May 2013 – 14 May 2015.
