
Int. J. Bio-Inspired Computation, Vol. 4, No. 5, 2012

Software test effort estimation: a model based on cuckoo search

Praveen Ranjan Srivastava*, Abhishek Varshney and Priyanka Nama
Department of Computer Science and Information Systems, Birla Institute of Technology and Science (BITS), Pilani, Vidya Vihar Campus, Pilani-333031, Rajasthan, India
Fax: 01596-244183
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
*Corresponding author

Xin-She Yang
Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ, UK
E-mail: [email protected]

Abstract: Test effort estimation is the process of predicting the effort required to test software. It has always been a fascinating area for software engineering researchers. "How long will it take to test the system?" is the most pressing question in the minds of testers before the testing process actually starts. Many factors, such as the productivity of the test team, the strategy chosen for testing, the size and complexity of the system, technical factors, and the expected quality, can affect test effort estimation. Testing requires a good amount of time and effort in the entire software development life cycle. Several studies have attempted to develop test effort estimation models, but accurate forecasting is still not possible. A new model for estimating the test effort, based on a metaheuristic technique called cuckoo search, is proposed in this paper. The proposed model is used to assign weights to the various factors involved based on past results, and is then used for predicting the test effort for new projects of a similar kind.

Keywords: test effort estimation; cuckoo search; use case points analysis; exponential moving average.

Reference to this paper should be made as follows: Srivastava, P.R., Varshney, A., Nama, P. and Yang, X-S. (2012) 'Software test effort estimation: a model based on cuckoo search', Int. J. Bio-Inspired Computation, Vol. 4, No. 5, pp.278–285.

Biographical notes: Praveen Ranjan Srivastava is working under the Software Engineering and Testing Research Group in the Computer Science and Information Systems Department at the Birla Institute of Technology and Science (BITS) Pilani, India. He is currently doing research in the area of software testing using metaheuristic techniques.
His research areas are software testing, quality assurance, quality attributes ranking, testing effort, software release, test data generation, agent oriented software testing, and advanced soft computing techniques. He has published more than 80 research papers in leading international journals and conferences in the area of software testing. He has been actively involved in reviewing research papers submitted in his field to leading journals and various international and national conferences.

Abhishek Varshney is a graduate student currently doing his ME in Software Systems in the Computer Science and Information Systems Department at Birla Institute of Technology and Science, Pilani, India. His areas of interest lie in software testing, P2P networks, data warehousing and pervasive computing. He has published a few research papers in various national and international conferences.

Priyanka Nama is a graduate student currently doing her ME in Software Systems at Birla Institute of Technology and Science, Pilani, India. Her areas of interest lie in image processing, pattern recognition, computer networks and soft computing.

Copyright © 2012 Inderscience Enterprises Ltd.


Xin-She Yang received his DPhil in Applied Mathematics from the University of Oxford, and is currently a Senior Research Scientist at National Physical Laboratory. He has authored/edited a dozen books and published more than 140 papers. He is the Editor-in-Chief of Int. J. Mathematical Modeling and Numerical Optimisation.

1 Introduction

Software engineering (SE) is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software (Abran et al., 2004). Software testing is a major phase in the software development process and accounts for nearly 50% of the total development effort (Sommerville, 2009). Software testing is any activity aimed at evaluating an attribute or capability of a programme or system and determining whether it meets its required results (Hetzel, 1988). Test effort estimation, in terms of the man-days required for the test process, can thus play a crucial role in the software development life cycle, as it may help organisations to allocate resources accordingly (Sommerville, 2009). Various test effort estimation techniques exist in the literature, such as COCOMO (Sommerville, 2009), use case point analysis (Nageswaran, 2001), test case point analysis (Van Veenendaal and Dekkers, 1999), function point analysis (David and David, 2011) and metrics-based models. However, in current competitive markets, more reliable estimates are highly needed. Metaheuristic techniques (http://en.wikipedia.org/wiki/Metaheuristic) such as genetic algorithms, particle swarm optimisation (Aloka et al., 2011) and tabu search (Ferrucci et al., 2009) have found wide applications in SE. Metaheuristic techniques optimise problems by iteratively improving a candidate solution. In this paper, a software test effort estimation model is developed which exploits the advantages of cuckoo search (CS) (Yang and Deb, 2009) for effort optimisation. The CS algorithm is derived from the brood-parasitic behaviour of some cuckoo species, which lay their eggs in the nests of host birds, combined with the Lévy flight behaviour observed in some birds. It has been used successfully for solving tough optimisation problems.

The structure of the paper is organised as follows. The next section describes the background work in test effort estimation. Section 3 describes CS in detail. In Section 4, the proposed strategy for test effort estimation is described. Section 5 includes experimental work using the proposed strategy. In Section 6, the results are analysed and compared, and the advantages of the proposed strategy are highlighted. Finally, Section 7 concludes the paper.

2 Background

Various studies (Nageswaran, 2001; Van Veenendaal and Dekkers, 1999; Aloka et al., 2011; Abhishek et al., 2010) have been carried out on test effort estimation. The use case point analysis technique for test effort estimation was proposed by Nageswaran (2001). In this technique, test effort is estimated on the basis of the use cases and actors of the system by rating them into different categories. Some technical factors are also considered, and constant weights are assigned to each factor. The major drawback of this technique is that the fixed weights prevent the model from adapting over time: when the perceived weights change, the estimated effort should change with them. Another method is test point analysis by Van Veenendaal and Dekkers (1999), where the software is divided into different modules and, for each module, some parameters are considered to calculate the total test hours. To optimise the results obtained from the above mentioned methods, various evolutionary techniques can be used, such as software effort estimation using neural networks (Kaur et al., 2010), wherein an ANN is trained on past data and then used to predict the effort. Software effort estimation using soft computing techniques (Sandhu et al., 2008) and using fuzzy logic (Martin et al., 2005) has also been proposed. However, both of these approaches estimate the effort of the entire software development, not specifically the test effort, and they require a large historical dataset for training prior to estimation. Aloka et al. (2011) proposed a particle swarm optimisation-based approach for test effort estimation; this approach, however, does not utilise historical results. Abhishek et al. (2010) proposed an approach based on neural networks, with which pre-coding and post-coding test efforts can be calculated. Srivastava et al. (2011) proposed an approach which deals with the features of the software testing effort (STE) estimation problem through a novel fuzzy model integrating COCOMO, fuzzy logic, weighing techniques and test effort drivers (TEDs) into a single platform. The main drawback of their approach is that it requires fine-tuning of the fuzzy rules, which needs the experience of a decision maker.

Traditional techniques for test effort estimation (Nageswaran, 2001; Van Veenendaal and Dekkers, 1999) do not take historical data into account, while techniques using artificial intelligence (Martin et al., 2005; Abhishek et al., 2010) require a huge set of historical data. Hence, in this paper an adaptable model for test effort estimation is developed wherein past optimised data obtained from CS is used to predict the test effort for new projects. The next section describes the standard CS algorithm.

3 Introduction to CS algorithm

CS was developed by Yang and Deb (2009, 2010) for global optimisation. It is based on the behaviour of some cuckoo species, which follow obligate brood parasitism by laying their eggs in the nests of birds of other species. Once the host birds discover that the eggs are not their own, there are two possibilities: either they throw the eggs away, or they simply abandon their nests. Lévy flights improve the performance of CS; a Lévy flight is a random walk in which the step lengths have a probability distribution that is heavy-tailed (Brown et al., 2007). CS uses the following representations (Yang and Deb, 2009, 2010): a solution is represented by an egg in a nest, the objective is to replace a not-so-good solution in the nests with a potentially better solution, and each nest is considered to have one cuckoo egg only. CS is based on the following three principles:

1 each cuckoo lays its egg in a randomly chosen nest, one at a time

2 the nests with high quality of eggs are considered the best nests for the next generation to follow

3 the host bird can discover the egg laid by a cuckoo with a probability pa ∈ (0, 1), and the number of available nests is fixed.

Each egg or nest can be considered as a solution to a problem of interest. Thus, CS can be used to find optimal solutions to any problem that can be formulated mathematically in terms of optimisation. Software testing and effort estimation can duly be considered an optimisation problem in this context, as one of the objectives is to minimise the test effort while testing the paths of the software as thoroughly as possible.

Figure 1  The proposed strategy
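The search loop described above can be sketched in a few lines. The following is a minimal, illustrative one-dimensional version, not the paper's MATLAB implementation: a plain Gaussian step stands in for the full Lévy flight (Mantegna's algorithm, used in the paper, is shown later), and the target objective and step size are toy values.

```python
import random

def cuckoo_search(objective, lower, upper, n_nests=25, pa=0.25,
                  sigma=0.1, max_iter=2000, tol=1e-5):
    """Minimal 1-D cuckoo search sketch: one egg (candidate solution) per nest."""
    rng = random.Random(0)  # seeded for reproducibility
    nests = [rng.uniform(lower, upper) for _ in range(n_nests)]
    best = min(nests, key=objective)
    for _ in range(max_iter):
        if objective(best) < tol:
            break  # solution is good enough: declare it the global best
        # each cuckoo proposes a new egg via a random step
        # (a Gaussian stand-in for a Levy flight, for brevity)
        for i in range(n_nests):
            trial = nests[i] + rng.gauss(0.0, sigma)
            if objective(trial) < objective(nests[i]):
                nests[i] = trial  # the better egg replaces the not-so-good one
        # a fraction pa of the worst nests is discovered/abandoned
        # and rebuilt at random
        nests.sort(key=objective)
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = rng.uniform(lower, upper)
        best = min(nests + [best], key=objective)
    return best

# toy objective: distance of a candidate value from a known target
best = cuckoo_search(lambda x: abs(x - 3.7), 0.0, 10.0)
```

The abandonment step keeps the population diverse, while the elitist `min(nests + [best], ...)` ensures the best solution found so far is never lost.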

4 Proposed methodology

Our proposed model exploits a balanced combination of an evolutionary approach with the use case point analysis technique, which consists of 16 parameters; in other methods (Nageswaran, 2001), the weights of these parameters were fixed for the calculation of test effort. Here the same 16 parameters are used, but a range of weights can be defined for each parameter. In addition to these 16 factors, our proposed approach also uses two additional factors, namely the expertise of the development team and the expertise of the test team, which are used for the calculation of the conversion factor that converts AUCP into effort in man-hours. The ranges can be fixed or dynamic. This helps the model adapt to changes in the perceived weights of parameters, which may change with time, technology, market scenarios, etc. CS is used to optimise the weights of the parameters for projects for which the actual effort is known. These optimised weights can then be used for test effort estimation of new projects of a similar kind. The weights used for estimation are exponentially smoothened with the weights already determined from historical data for a similar kind of project. Here, our proposed model is applied to projects which are similar in nature; the similarity can be based on the technology used, functionality, developing organisation, complexity, etc. This grouping helps in determining accurate estimates of the weights. The flow chart of the proposed strategy is given in Figure 1.

The CS applied in Figure 1 to obtain the optimised weights is explained in Figure 2.

Figure 2  Cuckoo search

The process of estimation can start when the historical data of at least one project is available in terms of its actual test effort. The various steps in the estimation process can now be defined as follows.

4.1 Initialisation

The proposed model requires at least one project as historical data, for which the actual effort is known. This project is then fed into the proposed model. The values of the 16 parameters and the actual effort, also called the fitness, are given as the inputs to the optimiser to obtain the optimised weights of these parameters. Before applying CS to the given project, ranges of weights for each parameter are initialised. Instead of the old approach, in which the weights of the parameters were assumed to be fixed (Nageswaran, 2001), we define suitable ranges for each parameter. Defining ranges gives more flexibility in calculating the effort. These ranges can either be simple bounds around the actual weights proposed in Nageswaran (2001), or can be made dynamic by keeping the ranges within a fixed percentage of the optimised weights obtained after each iteration. This percentage can also vary from parameter to parameter depending on the degree of their fluctuation with time, organisation, etc. The ranges of weights assumed in the proposed approach are given in Table 1.

Table 1  Ranges of weights for parameters

Parameter                  Weight [as proposed by Nageswaran (2001)]   Assumed range
Actor – simple             1                                           [0.5, 1.5]
Actor – average            2                                           [1.5, 2.5]
Actor – complex            3                                           [2.5, 3.5]
Use case – simple          5                                           [4.5, 5.5]
Use case – average         10                                          [9.5, 10.5]
Use case – complex         15                                          [14.5, 15.5]
Use case – very complex    20                                          [19.5, 20.5]
Test tools                 3                                           [2.5, 3.5]
Documented inputs          5                                           [4.5, 5.5]
Development environment    1                                           [1, 2]
Test environment           1                                           [1, 2]
Test-ware reuse            2                                           [1.5, 2.5]
Distributed system         4                                           [3.5, 4.5]
Performance objectives     1                                           [1, 2]
Security features          2                                           [1.5, 2.5]
Complex interfacing        2                                           [1.5, 2.5]
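The initialisation step, drawing one random weight per parameter from its Table 1 range for each nest, can be sketched as follows. Python is used for illustration (the paper's implementation is in MATLAB), and only a few of the 16 parameters are shown; the dictionary keys are illustrative names, not identifiers from the paper.

```python
import random

# Ranges of weights from Table 1 (a few of the 16 parameters, for brevity)
WEIGHT_RANGES = {
    "actor_simple": (0.5, 1.5),
    "actor_average": (1.5, 2.5),
    "actor_complex": (2.5, 3.5),
    "use_case_simple": (4.5, 5.5),
    "test_tools": (2.5, 3.5),
}

def init_nests(ranges, n_nests=25, seed=None):
    """Each nest is one candidate weight vector: a uniform draw per parameter."""
    rng = random.Random(seed)
    return [{p: rng.uniform(lo, hi) for p, (lo, hi) in ranges.items()}
            for _ in range(n_nests)]

nests = init_nests(WEIGHT_RANGES, seed=42)  # 25 nests, as in Section 4.2
```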

4.2 Applying CS

A search space of n = 25 nests is selected, as suggested by Yang and Deb (2010), where each nest represents a possible solution. Various studies on different sizes of the search space were conducted, and it was found that for most problems 25 nests give optimal results. Each nest is represented by 16 parameters (three for actors, four for use cases and nine for technical factors), as in the case of use case points. Initially, random values of weights from the ranges defined for each parameter in Table 1 are assigned to all the nests. The value of pa, defined in Section 3, is taken to be 0.25. The following objective function is used, as proposed by Nageswaran (2001):

AUCP = UUCP * [0.65 + (0.01 * TEF)]
Effort = AUCP * Conversion factor, where the conversion factor is (DTEF + TTEF)

where UUCP = UAW + UUCW and

AUCP   adjusted use case point
UUCP   unadjusted use case point
UAW    unadjusted actor weight
UUCW   unadjusted use case weight
TEF    technical complexity factor
DTEF   development team expertise factor
TTEF   testing team expertise factor.
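The objective function above can be written down directly; the fitness comparison then reduces to the distance between a nest's AUCP and the known AUCP. This sketch uses made-up numbers, not the paper's project data.

```python
def aucp(uaw, uucw, tef):
    """Adjusted use case points: AUCP = UUCP * [0.65 + (0.01 * TEF)]."""
    uucp = uaw + uucw          # UUCP = UAW + UUCW
    return uucp * (0.65 + 0.01 * tef)

def effort_man_hours(aucp_value, dtef, ttef):
    """Effort = AUCP * conversion factor, where the factor is (DTEF + TTEF)."""
    return aucp_value * (dtef + ttef)

def fitness_distance(aucp_value, actual_aucp):
    """How far a nest's AUCP lies from the known (actual) AUCP."""
    return abs(aucp_value - actual_aucp)

# illustrative numbers only: UAW = 10, UUCW = 100, TEF = 50
example = aucp(10.0, 100.0, 50.0)   # 110 * (0.65 + 0.50) = 126.5
```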

The local best nest, or solution, is calculated using the above defined objective function: each nest is compared with the fitness value, and the nest closest to the fitness value is taken as the local best solution. The effort value for this nest is then calculated and compared with the tolerance; the tolerance assumed in the proposed model is 10^-5. If the difference is found to be less than the tolerance, this nest is declared the global best; otherwise, a new set of nests is generated using Lévy flights and the same procedure is applied again. The Lévy flights are generated by Mantegna's algorithm (Yang, 2010) with an exponent of 1.5. The optimised weights thus obtained from the above procedure are stored in the database for future reference.
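The exponential smoothening used in the estimation step (Section 4.3) can be sketched as follows; newer projects' weights dominate and older ones decay geometrically. The smoothing constant alpha = 0.5 is an assumed value, not one specified in the paper.

```python
def smooth_weights(history, alpha=0.5):
    """Exponentially smoothened weights: the newest project's optimised weights
    get the highest influence, and older projects decay geometrically.
    `history` is a list of {parameter: weight} dicts, ordered oldest to newest;
    alpha is an assumed smoothing constant."""
    smoothed = dict(history[0])
    for weights in history[1:]:
        for p, w in weights.items():
            smoothed[p] = alpha * w + (1 - alpha) * smoothed[p]
    return smoothed

# optimised weights of one parameter from two completed projects
history = [{"actor_simple": 1.0}, {"actor_simple": 1.4}]
print(smooth_weights(history))  # {'actor_simple': 1.2}
```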

4.3 Estimation

After the optimised weights for parameters are known for at least one completed project, the effort estimation can be carried out for any new project of the similar kind. For estimation, the previously obtained weights for the same type of project are taken into account by applying exponential smoothening. In exponential smoothening, higher weights are assigned to newer samples and older samples decay in weight exponentially. The number of historical results to be considered in exponential smoothening can be either fixed or varied, depending on the time duration. The weights obtained are now used to estimate the test effort required for this project. After this project gets completed and its testing is done, the actual value of effort is known. This project can now be used for optimising the weights for new projects.

5 Experimental study

According to the proposed strategy, we need at least one project for which testing has already been done and its actual effort is known. The data for the project provided by Nageswaran (2001) will be used to initialise the system. Parameters for calculating the total effort are taken from the requirements. The actual effort for this project was found to be 390 man-days, so the actual AUCP comes out to be 192. This will serve as the fitness value in applying CS. As per the proposed strategy, after applying CS, optimised weights are calculated for all the parameters taking into account the actual AUCP. These optimised weights and the actual AUCP of the project will be stored in the database as historical data. We implemented our proposed approach in MATLAB (http://www.mathworks.in/products/matlab/index.html) for analysis, to get the optimised weights by applying CS, and to perform exponential smoothening for estimation. First, all the 25 nests with 16 parameters each are assigned random weights within the ranges defined in Table 1.

Table 2  Initial parameters of some nests

Parameter                  Nest 1   Nest 2   Nest 3   Nest 4
Actor – simple             0.52     1.13     0.67     1.18
Actor – average            2.20     2.21     2.13     2.44
Actor – complex            2.50     3.18     3.46     2.59
Use case – simple          5.11     4.82     5.03     5.01
Use case – average         9.90     10.03    9.97     9.61
Use case – complex         14.79    15.37    15.29    15.04
Use case – very complex    20.15    19.55    19.59    20.18
Test tools                 2.82     3.00     3.38     2.64
Documented inputs          4.60     4.93     4.50     5.27
Development environment    1.53     1.90     1.51     1.30
Test environment           1.16     1.63     1.67     1.89
Test-ware reuse            2.38     2.48     2.06     1.80
Distributed system         4.16     4.08     3.97     3.56
Performance objectives     1.84     1.84     1.32     1.21
Security features          2.26     1.96     2.10     1.58
Complex interfacing        2.30     2.04     2.41     2.45

For Nest 1, AUCP is calculated using the formula defined in Section 4.2:

AUCP = 125.46 * (0.65 + 0.01 * 91.64) = 195.71

Similarly, the AUCPs calculated for the other nests are 197.98, 190.75, 202.55 and so on. After calculating the AUCP for all the nests, the best solution among them, i.e., the one closest to the fitness value (the smallest objective), is taken as the local best. Here, Nest 3 is taken as the local best, as its AUCP (190.75) is the closest to the fitness value of the project (192). As |192 − 190.75| > tolerance (10^-5), more iterations are required to reach the optimised target. Now, new nests are calculated by adding a factor (the product of the step size and the difference between the current nest and the previously obtained best nest) to the current nest. This is done for all 25 nests. The step size s is calculated using Lévy flights as defined in Yang (2010):

new_nest = nest + s * (nest − best)

This set of new nests becomes the new set of solutions, from which the local best is again calculated, and this local best is again checked to see whether it can become the global best. This process of calculating new nests continues until the desired level of tolerance is achieved. Using Lévy flights, the weights of all parameters of the 25 nests are updated, and the same procedure as discussed above is repeated to get the global best nest with optimised weights. After 8,850 iterations (in the case of the discussed example), as simulated in MATLAB (http://www.mathworks.in/products/matlab/index.html), the global best nest is obtained, and the weights of this nest are the optimised weights, shown in Tables 3 to 5.
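The Lévy-flight step via Mantegna's algorithm and the update rule new_nest = nest + s * (nest − best) can be sketched as below (Python for illustration; the `scale` factor is an assumed tuning constant, not a value from the paper).

```python
import math
import random

def mantegna_step(beta=1.5, rng=random):
    """One Levy-distributed step length via Mantegna's algorithm, with the
    exponent beta = 1.5 used in the paper."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta *
                2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)   # numerator: N(0, sigma_u^2)
    v = rng.gauss(0.0, 1.0)       # denominator: N(0, 1)
    return u / abs(v) ** (1 / beta)

def new_nest(nest, best, scale=0.01, rng=random):
    """new_nest = nest + s * (nest - best), with s drawn from a Levy flight.
    `scale` is an assumed step-size constant."""
    return {p: w + scale * mantegna_step(rng=rng) * (w - best[p])
            for p, w in nest.items()}

nest = {"actor_simple": 1.3}
best = {"actor_simple": 1.2}
moved = new_nest(nest, best, rng=random.Random(1))
```

Because the step is proportional to (nest − best), nests already at the best position stay put, while distant nests take occasional long Lévy jumps that help escape local optima.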

Table 3  Optimised weights of actors

Actor type   No. of actors   Optimised weight of parameter   Range of weights
Simple       0               1.21                            [0.5, 1.5]
Average      32              2.21                            [1.5, 2.5]
Complex      0               3.09                            [2.5, 3.5]

Table 4  Optimised weights of use cases

Use cases type   No. of use cases   Optimised weight of parameter   Range of weights
Simple           2                  5.33                            [4.5, 5.5]
Average          1                  9.90                            [9.5, 10.5]
Complex          1                  15.50                           [14.5, 15.5]
Very complex     1                  19.81                           [19.5, 20.5]

Table 5  Optimised weights of technical factors

Technical factor          Assigned value   Optimised weight of parameter   Range of weights
Test tools                5                2.63                            [2.5, 3.5]
Documented inputs         5                5.07                            [4.5, 5.5]
Development environment   2                1.84                            [1, 2]
Test environment          3                1.31                            [1, 2]
Test-ware reuse           3                1.89                            [1.5, 2.5]
Distributed system        4                4.25                            [3.5, 4.5]
Performance objectives    2                1.59                            [1, 2]
Security features         4                1.95                            [1.5, 2.5]
Complex interfacing       5                1.5                             [1.5, 2.5]

After the optimised weights for the parameters are known, they can be used to estimate the effort (AUCP initially) for new projects. The data for the project whose effort is to be estimated is taken from Sundari (TCPA – Tool to Test Effort Estimation), available at http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.117.6132. After AUCP is known, the total effort can be calculated as:

Effort = AUCP * Conversion factor

Since it is assumed that there are no more projects for which historical data is known except one, the weights obtained for the above discussed project are used as they are for estimation; otherwise, exponentially smoothened weights would have been used. The AUCP calculation for this new assumed project is shown in Tables 6 to 8. The AUCP, as calculated from the given equation, is found to be 175.49. The actual effort for this project was 400 man-days, or 3,200 man-hours. Dividing it by the conversion factor of 18 (as taken by Sundari, TCPA – Tool to Test Effort Estimation), the actual AUCP comes out to be 177.77.

Table 6  Calculation of unadjusted actor weights

Actor type   No. of actors   Weight   UAW
Simple       1               1.21     1.21
Average      0               2.21     0.00
Complex      5               3.09     15.45
Total UAW                             16.66

Table 7  Calculation of unadjusted use case weight

Use cases type   No. of use cases   Weight   UUCW
Simple           10                 5.33     53.30
Average          6                  9.90     59.40
Complex          3                  15.50    46.50
Very complex     0                  19.81    0.00
Total UUCW                                   159.20

Table 8  Calculation of technical factors

Technical factor          Assigned value   Weight   TEF
Test tools                3                2.63     7.89
Documented inputs         2                5.07     10.14
Development environment   2                1.84     3.68
Test environment          1                1.31     1.31
Test-ware reuse           1                1.89     1.89
Distributed system        1                4.25     4.25
Performance objectives    2                1.59     3.18
Security features         1                1.95     1.95
Complex interfacing       1                1.5      1.50
Total TEF                                           0.9979
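The unadjusted weights in Tables 6 and 7 can be checked directly; multiplying UUCP by the adjustment factor 0.9979 from the Total TEF row of Table 8 (which, per Section 4.2, plays the role of [0.65 + 0.01 * TEF]) reproduces the stated AUCP of 175.49.

```python
# Counts and optimised weights from Tables 6 and 7 (the new project)
uaw = 1 * 1.21 + 0 * 2.21 + 5 * 3.09                 # unadjusted actor weight
uucw = 10 * 5.33 + 6 * 9.90 + 3 * 15.50 + 0 * 19.81  # unadjusted use case weight
uucp = uaw + uucw                                    # UUCP = UAW + UUCW
aucp = uucp * 0.9979                                 # adjustment factor from Table 8
print(round(uaw, 2), round(uucw, 2), round(uucp, 2), round(aucp, 2))
# 16.66 159.2 175.86 175.49
```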

Now, to calculate the effort in terms of man-days, two additional factors are considered:

1 expertise of development team

2 expertise of testing team.

The values for these parameters are given in Tables 9 and 10.

Table 9  Weights for development team expertise

Expertise of development team                Weight
Experienced                                  2
Mixture of experienced and non-experienced   4
Non-experienced                              8

Table 10  Weights for testing team expertise

Expertise of testing team                    Weight
Experienced                                  5
Mixture of experienced and non-experienced   10
Non-experienced                              15

The effort in man-hours can be calculated as:

Effort (man-hours) = AUCP * (DTEF + TTEF)

where DTEF is the development team expertise factor and TTEF is the testing team expertise factor.

For the given project for which the test effort is to be estimated, taking the values of DTEF and TTEF to be 8 and 10, respectively, as taken from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.117.6132, the effort comes to 175.49 * 18 = 3,158.82 man-hours, i.e., approximately 395 man-days.
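The final conversion can be reproduced as follows; the eight-hour man-day is an assumption implied by the numbers (3,158.82 man-hours ÷ 8 ≈ 395 man-days), not a value stated explicitly in the paper.

```python
aucp = 175.49
dtef, ttef = 8, 10                  # expertise factors for the new project
man_hours = aucp * (dtef + ttef)    # conversion factor = DTEF + TTEF = 18
man_days = man_hours / 8            # assumed eight-hour man-day
print(round(man_hours, 2), round(man_days))  # 3158.82 395
```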

6 Comparison and analysis

As calculated above, the magnitude of relative error (MRE) of the given project (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.117.6132) through the proposed approach is:

MRE = |Actual effort – Predicted effort| / Actual effort * 100
    = |400 – 395| / 400 * 100 = 1.25%

The MRE of the same project as obtained by the TCPA approach (Sundari, TCPA – Tool to Test Effort Estimation) is:

MRE = |400 − 387| / 400 * 100 = 3.25%
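The MRE comparison reduces to a one-line function:

```python
def mre(actual, predicted):
    """Magnitude of relative error, in percent."""
    return abs(actual - predicted) / actual * 100

print(mre(400, 395))  # 1.25 (proposed CS-based approach)
print(mre(400, 387))  # 3.25 (TCPA approach)
```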

The effort for the same project can also be calculated using the use case point analysis as defined by Nageswaran (2001); the test effort is found to be 344 man-days, and the MRE is given as 16.5%. The obtained results are summarised in Figure 3.

Figure 3  MRE for project under analysis as obtained by different approaches

The comparison and analysis is done by taking one project (Nageswaran, 2001) for calibration and estimating the effort of the other (Sundari, TCPA – Tool to Test Effort Estimation). Since there is not much data available publicly on use case point analysis for software test effort estimation, only a limited comparison has been done on a few projects, as discussed in this paper.

7 Conclusions and future work

In this paper, the CS algorithm was applied to software test effort estimation. The approach uses CS for optimising the parameters involved in estimating the effort through use case point analysis. The estimation for new projects utilises the optimised weights obtained by applying CS on completed projects of a similar kind. The historical values of the weights are then exponentially smoothened for estimation of the test effort of new projects, so that the most recent parameter weights get the maximum priority. It was found that this approach gives better results as compared to other techniques available for test effort estimation. Future work can consider including the development team expertise factor and test team expertise factor among the parameters optimised through the CS approach. In addition, the ranges of weights can be varied dynamically during iterations for all the parameters, so as to make the approach as adaptive as possible. Future work can also classify the parameters on the basis of the degree of fluctuation of their weights. Finally, it can be expected that this proposed model can also be applied to other effort estimation problems for complex projects, such as project management, test planning and complex task optimisation.

References

Abhishek, C. et al. (2010) 'Test effort estimation using neural network', Journal of Software Engineering & Applications (JSEA), Vol. 3, No. 4, pp.331–340.

Abran, A. et al. (2004) Guide to the Software Engineering Body of Knowledge, IEEE Computer Society, Los Alamitos, CA.

Aloka, S. et al. (2011) 'Test effort estimation-particle swarm optimization based approach', Communications in Computer and Information Science (CCIS), Part 3, Vol. 168, pp.463–47.

Available at http://en.wikipedia.org/wiki/Metaheuristic (accessed on 11 November 2011).

Available at http://www.mathworks.in/products/matlab/index.html (accessed on 20 November 2011).

Brown, C., Liebovitch, L.S. and Glendon, R. (2007) 'Lévy flights in Dobe Ju/'hoansi foraging patterns', Human Ecol., Vol. 35, pp.129–138.

David, G. and David, H. (2011) Function Point Analysis: Measurement Practices for Successful Software Projects, Lavoisier S.A.S., France.

Ferrucci, F. et al. (2009) 'Using tabu search to estimate software development effort', Lecture Notes in Computer Science (LNCS), Vol. 5891, pp.307–320.

Hetzel, W.C. (1988) The Complete Guide to Software Testing, 2nd ed., QED Information Sciences, Wellesley, Mass.

Kaur, J. et al. (2010) 'Neural network – a novel technique for software effort estimation', International Journal of Computer Theory and Engineering (IJCTE), Vol. 2, No. 1, pp.1793–8201.

Martin, C.L. et al. (2005) 'Software development effort estimation using fuzzy logic: a case study', 6th Mexican International Conference on Computer Science, Mexico, pp.113–120.

Nageswaran, S. (2001) 'Test effort estimation using use case points', 14th International Software/Internet Quality Week (QW2001), San Francisco.

Sandhu, P.S. et al. (2008) Software Effort Estimation Using Soft Computing Techniques, World Academy of Science, Engineering and Technology (WASET), pp.488–491.

Sommerville, I. (2009) Software Engineering, Pearson Education, India.

Srivastava, P.R. et al. (2011) 'Software testing effort: an assessment through fuzzy criteria approach', Journal of Uncertain Systems (JUS), Vol. 5, No. 3, pp.183–201.

Sundari, R.T., TCPA – Tool to Test Effort Estimation, available at http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.117.6132 (accessed on 20 November 2011).

Van Veenendaal, E.P.W.M. and Dekkers, T. (1999) Test Point Analysis: A Method for Test Estimation, ESCOM, Herstmonceux, England.

Yang, X-S. (2010) Nature-Inspired Metaheuristic Algorithms, Luniver Press, UK.

Yang, X-S. and Deb, S. (2009) 'Cuckoo search via Lévy flights', Proc. of World Congress on Nature & Biologically Inspired Computing (NaBIC), India, IEEE Computer Society, pp.210–214.

Yang, X-S. and Deb, S. (2010) 'Engineering optimisation by cuckoo search', International Journal of Mathematical Modelling and Numerical Optimisation (IJMMNO), Vol. 1, No. 4, pp.330–343.