
Neural Comput & Applic DOI 10.1007/s00521-014-1751-5

ORIGINAL ARTICLE

A chaotic artificial immune system optimisation algorithm for solving global continuous optimisation problems

A. Rezaee Jordehi

Received: 9 April 2014 / Accepted: 16 October 2014 / © The Natural Computing Applications Forum 2014

Abstract Artificial immune system algorithm (AIS) is a population-based global heuristic optimisation algorithm inspired by the immune system of the human body. Alleviating the premature convergence problem of heuristic optimisation algorithms is a hot research area. In this study, chaotic-based strategies are embedded into AIS to alleviate its premature convergence problem. Four different chaotic-based AIS strategies with five different chaotic map functions (20 cases in total) are examined, and the best one is chosen as the best chaotic paradigm for AIS. The results of applying the proposed chaotic AIS to a variety of unimodal and multimodal benchmark functions reveal that it offers high-quality solutions. It significantly outperforms conventional AIS and the gravitational search algorithm, both in terms of accuracy of solutions and stability in finding accurate solutions.

Keywords Artificial immune system optimisation algorithm · Global optimisation · Chaos

1 Introduction

The process of finding the best item among a set of possible items is called optimisation. There are many optimisation problems in all areas of science and technology [1–7]. For solving them, many diverse algorithms have been developed [8]. Among those algorithms, heuristics have attracted much attention [8–14]. Heuristics encompass various optimisation algorithms such as artificial bee colony, particle swarm optimisation, differential evolution, firefly swarm optimisation, teaching learning-based optimisation, genetic algorithm, ant colony optimisation and the bat swarm algorithm [15]. Heuristics do not entail preconditions such as continuity, convexity or differentiability of objective functions and can therefore easily be applied to various optimisation problems [16]. The artificial immune system (AIS) optimisation algorithm is a population-based heuristic algorithm that takes inspiration from the immune system of the human body and was devised by De Castro [17, 18]. AIS has been employed to solve optimisation problems in different areas of science and technology [19–22]. Nevertheless, in some runs of AIS, antibodies get stuck in false (local) optima and cannot truly converge to the global optimum. This issue is called premature convergence. Chaos is a property of nonlinear systems and is defined as randomness generated by simple deterministic systems [23, 24]. Chaotic systems are a sub-category of dynamical systems and have attracted the attention of researchers in different areas. In this study, the objective is to employ chaotic strategies in order to discourage premature convergence of AIS. The paper is organised as follows: in Sect. 2, an overview of AIS is given. The chaotic strategies are introduced in Sect. 3. The results are presented in Sect. 4, and concluding remarks are presented in Sect. 5.

A. Rezaee Jordehi, Department of Electrical Engineering, University Putra Malaysia (UPM), 43400 Serdang, Selangor, Malaysia. e-mail: [email protected]

2 An overview of artificial immune system (AIS) optimisation algorithm

2.1 Natural immune system

The immune system of vertebrates, including human beings, encompasses cells, molecules and organs in the body that



protect it against infectious diseases caused by viruses, bacteria, etc. [25]. In order to do this, the immune system must be able to differentiate between the body's own cells, as the self cells, and foreign pathogens, as the non-self cells or antigens. The immune system then has to mount an immune response in order to eliminate the non-self cell or antigen [25]. Antigens are further categorised in order to activate the suitable defensive mechanism, and the immune system develops a memory to enable more efficient responses in case of further infection by a similar antigen [18, 25, 26]. Clonal selection theory explains how the immune system fights against an antigen. According to this theory, only the cells which recognise the antigen are selected to proliferate [26]. The selected cells are subjected to an affinity maturation process that enhances their affinity to the selected antigens [26]. Clonal selection operates on B-lymphocytes or B cells, created by the bone marrow, and also on T-lymphocytes or T cells, produced by the thymus [26]. When the body is exposed to an antigen, B cells respond by secreting certain antibodies. Thereafter, a signal from the T-helper cells, a subclass of T cells, stimulates the B cell to proliferate and mature into terminal antibody-secreting cells called plasma cells. The proliferation rate is proportional to the affinity level, i.e. the higher the affinity level of a B cell, the more clones are generated [26]. This process of selection and mutation in B cells is named affinity maturation [26].

2.2 Artificial immune system optimisation algorithm

Inspired by the natural immune system of the human body, explained in Sect. 2.1, an optimisation algorithm named artificial immune system optimisation (AIS) has been developed. In AIS, the search agents are analogous to antibodies in natural immune systems, and fitness values (the inverse of objective values) are analogous to affinities in natural immune systems. AIS includes the following stages.
2.2.1 Initialisation

As in other heuristic optimisation algorithms, a population of Ni individuals (antibodies) is randomly initialised in the search space.

2.2.2 Clonal proliferation

In this stage, the antibodies are cloned (proliferated) based on their fitness (affinity).


2.2.3 Maturation

Maturation is applied as a mutation operator, represented by Eq. (1). It is applied with a probability p, where p is called the mutation rate or mutation probability.

X_id = X_id + K (X_d,max - X_d,min) N(0, 1)    (1)

where X_id represents the dth dimension of the ith antibody, X_d,max and X_d,min are the upper and lower bounds of the dth decision variable, N(0, 1) denotes a standard normal random number, and K is a scale factor.

2.2.4 Evaluation

In this stage, all antibodies are evaluated; that is, their affinity value is computed.

2.2.5 Ageing operator

The ageing operator eliminates individuals which have remained in the population for more than a specified number of iterations. This operator leads to an enhancement in population diversity.

2.2.6 Tournament selection

This selection operator is applied to select Ni individuals for the next generation. Stages 2.2.2–2.2.6 are repeated until the termination criterion is met. The pseudocode of the AIS optimisation algorithm is presented below, and its flowchart is depicted in Fig. 1.

2.2.7 AIS pseudocode

1. Initialise Ni antibodies randomly and compute their affinities.
2. For t = 1 : t_max
   2.1 Perform clonal proliferation of antibodies.
   2.2 Mutate antibodies with probability p according to Eq. (1).
   2.3 Compute affinities of antibodies.
   2.4 Apply the ageing operator to eliminate too-old antibodies.
   2.5 Apply the tournament selection operator to select Ni antibodies for the next generation.
   End for
3. Display the optimal decision vector and optimal objective value.
End
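The stages above can be sketched in Python. This is an illustrative minimal implementation, not the author's code: the clone-count rule, the binary tournament, the ageing limit and all parameter values (K, p, max_age) are assumptions made for demonstration, applied here to the Sphere benchmark function.

```python
import random

def sphere(x):
    # Benchmark objective: global minimum 0 at the origin.
    return sum(v * v for v in x)

def ais_minimise(f, n_dim=5, lo=-5.12, hi=5.12, n_ab=20, t_max=100,
                 p=0.1, K=0.1, max_age=15, seed=0):
    rng = random.Random(seed)
    # Stage 2.2.1 -- initialisation: each antibody is (position, age).
    pop = [([rng.uniform(lo, hi) for _ in range(n_dim)], 0)
           for _ in range(n_ab)]
    best = min((ab[0] for ab in pop), key=f)[:]
    for t in range(t_max):
        # Stage 2.2.2 -- clonal proliferation: fitter antibodies get more clones.
        ranked = sorted(pop, key=lambda ab: f(ab[0]))
        clones = []
        for rank, (x, age) in enumerate(ranked):
            for _ in range(max(1, (n_ab - rank) // 4)):
                clones.append((x[:], age))
        # Stage 2.2.3 -- maturation: Eq. (1) mutation with probability p.
        for x, _ in clones:
            for d in range(n_dim):
                if rng.random() < p:
                    x[d] += K * (hi - lo) * rng.gauss(0, 1)
                    x[d] = min(hi, max(lo, x[d]))
        # Stages 2.2.4/2.2.5 -- evaluation and ageing: drop too-old antibodies,
        # topping the pool up with fresh random ones (an assumed repair step).
        clones = [(x, age + 1) for x, age in clones if age + 1 <= max_age]
        while len(clones) < n_ab:
            clones.append(([rng.uniform(lo, hi) for _ in range(n_dim)], 0))
        # Stage 2.2.6 -- binary tournament selection back to n_ab antibodies.
        pop = []
        for _ in range(n_ab):
            a, b = rng.choice(clones), rng.choice(clones)
            pop.append(a if f(a[0]) < f(b[0]) else b)
        cand = min((ab[0] for ab in pop), key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

best = ais_minimise(sphere)
print(sphere(best))
```

Tracking the best antibody outside the loop preserves the best solution even when the ageing operator discards an entire generation.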


Fig. 1 Flowchart of AIS (Start → Input AIS parameters → Initialise antibodies randomly and evaluate them → Clonal proliferation → Maturation (mutation) → Evaluation of all antibodies → Ageing operator → Tournament selection → if the stopping criterion is met, output the optimal decision vector and optimal objective value; otherwise iterate → End)

Table 1 Features of employed chaotic functions [15, 24]

Chaotic map      Equation                                                   Parameters
Logistic [27]    x_{k+1} = a x_k (1 - x_k)                                  a = 4, x_0 = 0.6
ICMIC [28]       x_{k+1} = |sin(a / x_k)|                                   a = 7, x_0 = 0.6
Sinusoidal [27]  x_{k+1} = a x_k^2 sin(π x_k)                               a = 2.3, x_0 = 0.6
Piecewise [29]   x_{k+1} = x_k / a,                    0 ≤ x_k ≤ a          a = 0.6, x_0 = 0.6
                         = (x_k - a) / (0.5 - a),      a ≤ x_k ≤ 0.5
                         = (1 - a - x_k) / (0.5 - a),  0.5 ≤ x_k ≤ 1 - a
                         = (1 - x_k) / a,              1 - a ≤ x_k ≤ 1
Bernoulli [23]   x_{k+1} = x_k / (1 - a),              0 ≤ x_k ≤ 1 - a      a = 0.3, x_0 = 0.6
                         = (x_k - (1 - a)) / a,        1 - a ≤ x_k ≤ 1

Fig. 2 Logistic map function
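For concreteness, the maps in Table 1 can be sketched in Python as below. This is an illustrative sketch rather than code from the paper; note that the four-branch piecewise form only makes sense for 0 < a < 0.5, so a value in that range is used in the code purely for demonstration.

```python
import math

def logistic(x, a=4.0):
    # x_{k+1} = a * x_k * (1 - x_k)
    return a * x * (1.0 - x)

def icmic(x, a=7.0):
    # x_{k+1} = |sin(a / x_k)|
    return abs(math.sin(a / x))

def sinusoidal(x, a=2.3):
    # x_{k+1} = a * x_k^2 * sin(pi * x_k)
    return a * x * x * math.sin(math.pi * x)

def piecewise(x, a=0.3):
    # Four linear branches; requires 0 < a < 0.5 (a = 0.3 here for illustration).
    if x < a:
        return x / a
    if x < 0.5:
        return (x - a) / (0.5 - a)
    if x < 1.0 - a:
        return (1.0 - a - x) / (0.5 - a)
    return (1.0 - x) / a

def bernoulli(x, a=0.3):
    # Bernoulli shift map.
    if x <= 1.0 - a:
        return x / (1.0 - a)
    return (x - (1.0 - a)) / a

# Each map sends [0, 1] to [0, 1]; iterate from x0 = 0.6 as in Table 1.
x = 0.6
trajectory = []
for _ in range(5):
    x = logistic(x)
    trajectory.append(round(x, 4))
print(trajectory)  # first iterate is 4 * 0.6 * 0.4 = 0.96
```

Iterating any of these maps from x0 = 0.6 yields a deterministic yet irregular sequence in [0, 1], which is what the chaotic AIS variants below substitute for uniform random numbers.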

3 Chaotic-based AIS variants

In this study, four different chaotic-based AIS variants are examined with five different chaotic map functions to identify the best chaotic variant and the best chaotic map function. In the first chaotic AIS variant (AIS I), random initialisation is replaced by chaotic initialisation; that is, the ith antibody is initialised via Eq. (2).

X_id = X_min,d + (X_max,d - X_min,d) · chaos(i + d),   d = 1, 2, …, n    (2)

where X_id represents the dth dimension of the position of the ith antibody, n is the number of decision variables, X_max,d and X_min,d represent the upper and lower bounds of the dth decision variable, and chaos(·) is a chaotic map function. All chaotic functions used in the study are tabulated in Table 1 and plotted in Figs. 2, 3, 4, 5 and 6. In the second chaotic-based AIS variant (AIS II), the mutation rate is updated at each iteration via the following equation.

Fig. 3 ICMIC map function

p(t) = p_i + chaos(t) (p_i - p_f)    (3)

where t is the iteration number, and p_i and p_f are constants to be set by the user. In the third chaotic-based AIS variant (AIS III), the mutation rate is updated via Eq. (4); that is, it is computed by multiplying a linearly decreasing function by a chaotic map function.
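As an illustration (not the author's implementation), the AIS II update of Eq. (3), driven by the logistic map of Table 1, can be sketched as:

```python
def logistic(x, a=4.0):
    # Logistic map from Table 1, used as the chaos(t) generator.
    return a * x * (1.0 - x)

def ais2_mutation_rates(t_max=100, p_i=0.2, p_f=0.01, x0=0.6):
    # Eq. (3): p(t) = p_i + chaos(t) * (p_i - p_f); with p_i > p_f the
    # chaotic term perturbs the rate upward from p_i at every iteration.
    rates, x = [], x0
    for _ in range(t_max):
        x = logistic(x)
        rates.append(p_i + x * (p_i - p_f))
    return rates

rates = ais2_mutation_rates()
```

With the study's settings p_i = 0.2 and p_f = 0.01, this schedule keeps the mutation rate within [0.2, 0.39].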


Table 2 Features of the examined chaotic-based AIS strategies

Chaotic strategy   Features
AIS I              Chaotic initialisation
AIS II             p(t) = p_i + chaos(t) (p_i - p_f)
AIS III            p(t) = p_i + ((p_f - p_i) t / t_max) · chaos(t)
AIS IV             Chaotic initialisation and p(t) = p_i + ((p_f - p_i) t / t_max) · chaos(t)

Fig. 4 Sinusoidal map function
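The AIS IV row of Table 2 combines two ingredients, chaotic initialisation (Eq. 2) and a chaotic linearly decreasing mutation rate (Eq. 4). A minimal sketch follows; realising chaos(i + d) as successive iterates of a single logistic trajectory is an assumption made here for illustration.

```python
def logistic(x, a=4.0):
    return a * x * (1.0 - x)

def chaotic_init(n_ab, n_dim, lo, hi, x0=0.6):
    # Eq. (2): X_id = X_min,d + (X_max,d - X_min,d) * chaos(i + d), with
    # chaos(i + d) taken as successive logistic iterates (an assumption).
    x, pop = x0, []
    for _ in range(n_ab):
        ab = []
        for _ in range(n_dim):
            x = logistic(x)
            ab.append(lo + (hi - lo) * x)
        pop.append(ab)
    return pop

def ais4_mutation_rate(t, t_max, chaos_t, p_i=0.2, p_f=0.01):
    # Eq. (4): linearly decreasing envelope modulated by a chaotic term.
    return p_i + ((p_f - p_i) * t / t_max) * chaos_t

pop = chaotic_init(n_ab=20, n_dim=5, lo=-5.12, hi=5.12)
```

The chaotic sequence spreads the initial antibodies over the search space without the clustering that a poorly seeded uniform generator can produce, while Eq. (4) drives the mutation rate from p_i towards p_f as iterations proceed.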

p(t) = p_i + ((p_f - p_i) t / t_max) · chaos(t)    (4)

where p_i and p_f are two tunable parameters; in this study, they are set to 0.2 and 0.01, respectively. In the fourth chaotic-based AIS variant (AIS IV), AIS I and AIS III are hybridised, i.e. chaotic initialisation is implemented and the mutation rate at each iteration is updated by Eq. (4). AIS I has been hybridised with AIS III because experiments showed that AIS III is superior to AIS II. The specifications of the examined chaotic AIS strategies are tabulated in Table 2.

Fig. 5 Piecewise map function

Fig. 6 Bernoulli map function

4 Experimental results

Initially, all proposed chaotic-based AIS strategies, along with the different chaotic map functions, are examined to find the best chaotic strategy and the best chaotic map function. Next, the best chaotic strategy and the best chaotic map function constitute the proposed chaotic AIS of this study and are applied to different benchmark functions for validation. After validating the proposed chaotic AIS, it is applied to high-dimensional problems to assess its scalability.

4.1 Experimental results for different algorithms in dimension 5

In the experiments, the dimension of the problem is set to 5, the number of individuals is 20, and the maximum number of iterations is 100. All algorithms are run 30 times, and their statistical data are presented. The details of the benchmark functions are given in Table 3; in all benchmark functions, the true global optimum is equal to zero. Application of conventional AIS to the different benchmark functions revealed that it yields relatively poor solutions. In some cases, it was stuck in local optima. For instance, in 30 runs on the Rosenbrock function the average result was 0.1278, and in 30 runs on the Rastrigin function the average result was 1.8562 (Tables 5, 8).

The results of applying the different chaotic AIS strategies with the five chaotic map functions to the Rosenbrock function are tabulated in Table 4. Based on Table 4, the following points can be drawn:

• For the logistic, sinusoidal and Bernoulli map functions, chaotic initialisation leads to better results than conventional AIS; however, chaotic initialisation with the ICMIC and piecewise functions deteriorates the results.
• For the logistic, piecewise and Bernoulli functions, AIS II outperforms conventional AIS, while for the ICMIC and sinusoidal functions the results are worse than those of conventional AIS.
• AIS III outperforms conventional AIS for all chaotic functions except Bernoulli.
• AIS IV leads to the best results among the four examined strategies. In particular, with the logistic map function, it

Table 3 Specifications of test functions [15, 24]

Rastrigin (multimodal):      f(X) = 10n + Σ_{i=1}^{n} [X_i^2 - 10 cos(2πX_i)];  range [-5.12, 5.12]
Levy (multimodal):           f(X) = sin^2(πy_1) + Σ_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 sin^2(πy_i + 1)] + (y_n - 1)^2 [1 + sin^2(2πy_n)], where y_i = 1 + (X_i - 1)/4 for i = 1, 2, …, n;  range [-15, 30]
Griewank (multimodal):       f(X) = Σ_{i=1}^{n} X_i^2 / 4,000 - Π_{i=1}^{n} cos(X_i / √i) + 1;  range [-600, 600]
Rosenbrock (multimodal):     f(X) = Σ_{i=1}^{n-1} [100 (X_i^2 - X_{i+1})^2 + (X_i - 1)^2];  range [-5, 10]
Ackley (multimodal):         f(X) = 20 + e - 20 exp(-0.2 √((1/n) Σ_{i=1}^{n} X_i^2)) - exp((1/n) Σ_{i=1}^{n} cos(2πX_i));  range [-15, 30]
Sphere (unimodal):           f(X) = Σ_{i=1}^{n} X_i^2;  range [-5.12, 5.12]
Dixon & Price (unimodal):    f(X) = (X_1 - 1)^2 + Σ_{i=2}^{n} i (2X_i^2 - X_{i-1})^2;  range [-10, 10]
Zakharov (unimodal):         f(X) = Σ_{i=1}^{n} X_i^2 + (Σ_{i=1}^{n} 0.5 i X_i)^2 + (Σ_{i=1}^{n} 0.5 i X_i)^4;  range [-5, 10]

leads to the best result among all examined cases. Therefore, AIS IV with the logistic map function is the proposed chaotic variant of this study.

Table 4 Mean of optimal objectives for Rosenbrock function

Map name     AIS I     AIS II    AIS III   AIS IV
Logistic     0.1142    0.1237    0.0845    0.0674
ICMIC        0.1483    0.1418    0.1061    0.0929
Sinusoidal   0.1189    0.1564    0.1091    0.1074
Piecewise    0.1356    0.1159    0.1259    0.1395
Bernoulli    0.1147    0.1049    0.1294    0.1250

The statistics of optimal objectives for the Rosenbrock function achieved by the different algorithms are tabulated in Table 5. In this study, the performance of chaotic-based AIS is compared with conventional AIS and GSA. The best value of each column is shown in bold in the tables. Table 5 shows that chaotic AIS strongly outperforms conventional AIS and GSA. The outperformance is evident in the mean, maximum and standard deviation of the optimal objectives. The embedded chaotic strategy enables AIS to enhance the diversity among its search agents and escape from local optima. The lower standard deviations of chaotic AIS imply that it not only produces high-quality solutions but is also very stable and reliable in finding them.

Table 5 Statistics of optimal objective values for Rosenbrock function

Method        Mean       Min       Max        Std
AIS           0.1278     0.0058    0.9486     0.2077
GSA           144.8100   3.1123    498.6242   130.1889
Chaotic AIS   0.0674     0.0275    0.1769     0.0238

Table 6 Statistics of optimal objective values for Levy function

Method        Mean       Min       Max        Std
AIS           1.683e-6   3.3e-8    3.823e-6   1.099e-6
GSA           0.2279     0.0013    1.2417     0.3708
Chaotic AIS   4.04e-7    2.05e-8   9.68e-7    1.54e-7

Table 7 Statistics of optimal objective values for Griewank function

Method        Mean       Min       Max        Std
AIS           0.1304     0.0904    0.1663     0.0251
GSA           25.0630    5.7458    54.8695    13.1226
Chaotic AIS   0.1194     0.0714    0.1648     0.0224

Table 8 Statistics of optimal objective values for Rastrigin function

Method        Mean       Min       Max        Std
AIS           1.8562     0.0287    2.9894     0.8598
GSA           4.4081     0.0113    11.3169    3.0679
Chaotic AIS   1.1983     0.6820    1.3220     0.1530

In Tables 6, 7, 8, 9, 10 and 11, the statistics of optimal objectives achieved by conventional AIS, chaotic AIS and GSA are presented. According to these tables, chaotic AIS ranks first among the used algorithms on all benchmark functions. Thus, chaotic AIS has proved to be a powerful optimisation technique for solving global optimisation problems (Table 12).

4.2 Scalability assessment of chaotic AIS

After validation of chaotic AIS, its capability for solving high-dimensional optimisation problems is assessed here. To this end, it is applied to benchmark functions in


dimension of 30. The number of antibodies is set to 300, and the maximum number of iterations is set to 100. All algorithms are run 30 times. Table 13 tabulates the statistical results achieved by the different algorithms. It indicates that the desirable performance of chaotic AIS is not much affected by the increase in dimensionality: chaotic AIS performs well in high dimensions. On all benchmark functions except the Zakharov function, chaotic AIS ranks first among the used optimisation algorithms.

Table 9 Statistics of optimal objective values for Ackley function

Method        Mean      Min       Max       Std
AIS           2.4506    0.2714    2.8327    0.5680
GSA           0.9244    0.0002    3.3752    1.1413
Chaotic AIS   0.8945    0.5447    1.3250    0.2515

Table 10 Statistics of optimal objective values for Sphere function

Method        Mean       Min       Max        Std
AIS           1.804e-4   4.34e-5   3.053e-4   7.04e-5
GSA           0.011      0         0.0480     0.0160
Chaotic AIS   2.263e-5   5.86e-6   5.086e-5   1.273e-5

Table 11 Statistics of optimal objective values for Dixon and Price function

Method        Mean      Min       Max       Std
AIS           0.0009    0.0001    0.0024    0.0006
GSA           1.1044    0.5523    1.8383    0.4480
Chaotic AIS   7.50e-4   6.45e-5   9.81e-4   1.68e-5

Table 12 Statistics of optimal objective values for Zakharov function

Method        Mean      Min       Max       Std
AIS           0.0048    0.0020    0.0091    0.0023
GSA           0.0278    0         0.1968    0.0508
Chaotic AIS   9.23e-4   5.81e-5   1.13e-3   3.74e-5

Table 13 Comparison of different algorithms for dimension 30

Function      Statistic   AIS       GSA         Chaotic AIS
Rastrigin     Mean        0.1915    0.1674      0.1176
              Min         0.1504    0.1338      0.0675
              Max         0.2129    0.2042      0.1769
              Std         0.0356    0.0353      0.0275
Levy          Mean        3.4137    15.5932     2.8078
              Min         2.4180    9.5818      1.9667
              Max         4.3206    22.4081     3.8721
              Std         0.9544    6.4508      0.9024
Griewank      Mean        12.4890   3.864e5     10.9623
              Min         12.0433   7.0e2       7.1011
              Max         13.1464   1.0455e6    11.3938
              Std         0.5812    5.736e5     0.7036
Rosenbrock    Mean        18.3277   51.4724     16.9730
              Min         17.8008   32.7231     11.8792
              Max         18.6866   75.2271     18.5683
              Std         0.4662    21.6896     2.7656
Ackley        Mean        1.2303    1.7370      1.0841
              Min         0.9341    1.6182      0.7378
              Max         1.3825    1.8659      1.3294
              Std         0.2566    0.1241      0.0875
Sphere        Mean        1.7569    390.4736    1.0475
              Min         1.6937    377.9815    0.8275
              Max         1.8200    401.0498    1.5543
              Std         0.0893    11.6529     0.0689
Dixon–Price   Mean        6.8772    1.5114e3    4.8695
              Min         5.3654    1.1413e3    3.8794
              Max         8.3889    1.8101e3    11.7684
              Std         2.1380    3.401e2     2.6755
Zakharov      Mean        3.2586    13.5479     5.6932
              Min         3.1980    13.4209     3.6838
              Max         3.3032    13.6905     8.7026
              Std         0.0544    0.1355      0.2933
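The per-function statistics reported in Tables 5–13 (mean, minimum, maximum and standard deviation of the best objective value over 30 independent runs) can be computed as sketched below; `dummy_run` is a hypothetical stand-in for one optimisation run, not the paper's AIS.

```python
import math
import random

def run_statistics(optimiser, n_runs=30):
    # Best objective value from each independent run, summarised
    # the way Tables 5-13 report results.
    results = [optimiser(seed) for seed in range(n_runs)]
    mean = sum(results) / n_runs
    var = sum((r - mean) ** 2 for r in results) / (n_runs - 1)  # sample variance
    return {"mean": mean, "min": min(results), "max": max(results),
            "std": math.sqrt(var)}

def dummy_run(seed):
    # Hypothetical stand-in: returns a pseudo-random "best objective".
    return random.Random(seed).uniform(0.0, 1.0)

stats = run_statistics(dummy_run)
```

A low standard deviation across runs is what the paper cites as evidence of stability, so the sample (n - 1) form is the appropriate estimator here.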

5 Conclusions and future research directions

In this study, a novel, powerful chaotic-based AIS optimisation algorithm has been developed. This chaotic-based AIS discourages premature convergence in AIS. To develop it, four different chaotic-based AIS strategies with five different map functions were examined, and the best one was chosen as the proposed chaotic AIS. The proposed chaotic AIS is a hybrid of chaotic initialisation and a chaotic linearly decreasing mutation rate. The results of applying this chaotic-based AIS to miscellaneous benchmark functions imply its superiority over conventional AIS and GSA. Evaluation of the capability of the proposed chaotic-based AIS in solving real-world optimisation problems is put forward as a direction for future research.


References

1. Altun A, Şahman M (2013) Cost optimization of mixed feeds with the particle swarm optimization method. Neural Comput Appl 22:383–390
2. Yu S, Zhu K, He Y (2012) A hybrid intelligent optimization method for multiple metal grades optimization. Neural Comput Appl 21:1391–1402
3. Jordehi AR, Jasni J, Abd Wahab N, Kadir MZ, Javadi MS (2015) Enhanced leader PSO (ELPSO): a new algorithm for allocating distributed TCSC's in power systems. Int J Electr Power Energy Syst 64:771–784
4. Ahandani MA, Alavi-Rad H (2014) Opposition-based learning in shuffled frog leaping: an application for parameter identification. Inform Sci (in press)
5. Patel A, Taghavi M, Bakhtiyari K, Celestino Júnior J (2013) An intrusion detection and prevention system in cloud computing: a systematic review. J Netw Comput Appl 36:25–41
6. Patel A, Bakhtiyari K, Taghavi M (2011) Evaluation of cheating detection methods in academic writings. Libr Hi Tech 29:623–640
7. Jordehi AR, Joorabian M (2011) Optimal placement of multi-type FACTS devices in power systems using evolution strategies. In: Power engineering and optimization conference (PEOCO), 2011 5th international, IEEE, pp 352–357
8. Jordehi AR, Jasni J, Abdul Wahab NI, Kadir A, Abidin MZ (2013) Particle swarm optimisation applications in FACTS optimisation problem. In: Power engineering and optimization conference (PEOCO), 2013 IEEE 7th international, IEEE, pp 193–198
9. Jordehi R (2011) Heuristic methods for solution of FACTS optimization problem in power systems. In: 2011 IEEE student conference on research and development, pp 30–35
10. Jordehi AR, Jasni J (2013) Parameter selection in particle swarm optimisation: a survey. J Exp Theor Artif Intell 25:527–542
11. Jordehi AR, Jasni J (2011) A comprehensive review on methods for solving FACTS optimization problem in power systems. Int Rev Electr Eng 6(4):1916–1926
12. Jordehi AR, Jasni J (2013) Particle swarm optimisation for discrete optimisation problems: a review. Artif Intell Rev (in press)
13. Jordehi AR, Jasni J (2012) Approaches for FACTS optimization problem in power systems. In: Power engineering and optimization conference (PEDCO), Melaka, Malaysia, 2012 IEEE international, IEEE, pp 355–360
14. Jordehi AR (2014) Particle swarm optimisation for dynamic optimisation problems: a review. Neural Comput Appl. doi:10.1007/s00521-014-1661-6
15. Rezaee Jordehi A (2014) Chaotic bat swarm optimisation (CBSO). Appl Soft Comput (in press)
16. Wang H, Zhao G, Li N (2012) Training support vector data descriptors using converging linear particle swarm optimization. Neural Comput Appl 21:1099–1105
17. de Castro LN, Timmis J (2002) Artificial immune systems: a new computational intelligence approach. Springer, New York
18. de Castro LN, Timmis J (2003) Artificial immune systems as a novel soft computing paradigm. Soft Comput 7:526–544
19. Gao X-Z, Chow M-Y, Pelta D, Timmis J (2010) Theory and applications of artificial immune systems. Neural Comput Appl 19:1101–1102
20. Weckman G, Bondal A, Rinder M, Young W II (2012) Applying a hybrid artificial immune systems to the job shop scheduling problem. Neural Comput Appl 21:1465–1475
21. Coelho G, Silva A, Zuben F (2010) An immune-inspired multi-objective approach to the reconstruction of phylogenetic trees. Neural Comput Appl 19:1103–1132
22. Gao XZ, Ovaska SJ, Wang X, Chow MY (2009) Clonal optimization-based negative selection algorithm with applications in motor fault detection. Neural Comput Appl 18:719–729
23. Tavazoei MS, Haeri M (2007) Comparison of different one-dimensional maps as chaotic search pattern in chaos optimization algorithms. Appl Math Comput 187:1076–1085
24. Jordehi AR (2014) A chaotic-based big bang–big crunch algorithm for solving global optimisation problems. Neural Comput Appl 25(6):1329–1335
25. De Castro LN, Von Zuben FJ (2002) Learning and optimization using the clonal selection principle. IEEE Trans Evolut Comput 6:239–251
26. Basu M (2011) Artificial immune system for dynamic economic dispatch. Int J Electr Power Energy Syst 33:131–136
27. May RM (1976) Simple mathematical models with very complicated dynamics. Nature 261:459–467
28. He D, He C, Jiang L-G, Zhu H-W, Hu G-R (2001) Chaotic characteristics of a one-dimensional iterative map with infinite collapses. IEEE Trans Circuits Syst I Fundam Theory Appl 48:900–906
29. Tomida AG (2008) Matlab toolbox and GUI for analyzing one-dimensional chaotic maps. In: Computational sciences and its applications, ICCSA'08, IEEE, pp 321–330
