POPULATION BASED PARTICLE FILTERING

Harish Bhaskar^1, Lyudmila Mihaylova^1 and Simon Maskell^2

^1 Dept. of Communication Systems, Lancaster University, UK
^2 QinetiQ, Malvern Technology Centre, Worcestershire, UK
{h.bhaskar,mila.mihaylova}@lancaster.ac.uk, [email protected]

Keywords: tracking, particle filter, population based methods, MCMC, SMC sampler

Abstract

This paper proposes a novel particle filtering strategy that combines population Monte Carlo Markov chain methods with sequential Monte Carlo particle filtering, which we call evolving population Monte Carlo Markov Chain (EP MCMC) filtering. Iterative convergence on groups of particles (populations) is obtained using a specified kernel that moves particles toward more likely regions. The proposed technique introduces variety in the particles both in the sampling procedure and in the resampling step. The proposed EP MCMC filter is compared with the generic particle filter [1], with a population MCMC [2] and with a sequential Monte Carlo sampler [2]. Its effectiveness is illustrated over an example of object tracking in video sequences and over the bearing only tracking problem.

1 Introduction

Particle filtering [3, 4, 5] is a powerful technique that has proven its potential for the estimation of non-linear systems with non-Gaussian noises and multi-modal distributions. The posterior state probability density function is approximated by a set of samples (particles) which are propagated over time with appropriate weighting coefficients. One of the main limitations of the particle filtering technique is degeneracy, the case when only one particle has a significant weight. The resampling step introduces variety in the particles and can help avoid the degeneracy phenomenon by eliminating particles with small weights and replicating particles with larger weights. The most frequently used algorithms for particle resampling are: residual resampling [6], systematic resampling [3], stratified resampling [4], and multinomial resampling [3]. The multinomial resampling procedure involves drawing uniformly distributed samples and applying the inversion method [7]. Residual resampling, otherwise known as remainder resampling, is an efficient technique for decreasing the variance of the generated samples due to resampling [8]. Stratified resampling is based on ideas from survey sampling and involves pre-partitioning the uniform distribution interval into disjoint sets, drawing independently within them, and then applying the multinomial resampling approach [8]. Finally, systematic resampling improves on the stratified approach by deterministically linking all the variables in the subintervals. These techniques are sensitive to the order in which the particles are presented: a simple permutation can change the resampling process. Though these methods are widely used in the literature, they lack a thorough theoretical analysis of their behaviour, apart from some theoretical works [8].

Population-based simulation methods [2, 9, 10, 11, 12] can help resolve the particle filtering degeneracy problems. Population inference methods have been actively investigated in recent years. The main idea behind these techniques can be summarised as a process of generating a collection of random variables in parallel to approximate some target density π [2]. As a result, the population simulation methods generate a collection of samples in parallel, as opposed to a single independent or dependent sample.

There are two main categories of population-based methods. The first is the group of Markov Chain Monte Carlo (MCMC) methods and the second is based on importance sampling and resampling ideas. The main difference between these groups of techniques lies in the fact that the MCMC methods are directed by theoretically convergent iterations, whereas the sampling/resampling techniques rely on handling a large number of samples in parallel. One of the first methods [13], referred to as population-based MCMC, attempts to build a new target density π* which admits π as a marginal. A number of different categories of techniques have been proposed within the literature [6, 14, 2]. In [13] the new target density is generated by a population move. The exchange move procedure swaps information between chains. Similar types of moves, such as those in the context of genetic algorithms, have been specified by the crossover, mutation and snooker moves [15].

The main contribution of this paper is the proposed novel evolving population MCMC (EP MCMC) filtering strategy. Iterative convergence on groups (populations) of particles is obtained by using a specified kernel. The effectiveness of the proposed technique is illustrated by a comparison with the generic particle filter (GPF) algorithm [1], with a population MCMC (PMCMC) [2] and with a sequential Monte Carlo sampler (SMCS) [2]. The PMCMC technique investigated in [2] attempts to combine the strengths of two or more competing samples within a population using crossover and mutation operators in order to build a more reliable and robust population of samples. Whilst the PMCMC technique generates better samples within a population, the SMCS method [2] builds better blocks of populations consisting of these samples. This is done by initially generating a block of populations, each consisting of several samples, applying crossover and mutation operators over Monte Carlo cycles and finally generating a group of well behaving populations. In contrast to these techniques, the proposed EP MCMC filter aims to generate a block of faithful populations as in the SMCS while simultaneously combining samples within each of these populations in order to obtain the optimal solution. In other words, the operators of crossover, mutation and exchange are applied both locally between samples and globally amongst populations.

The remaining part of the paper is organised as follows. Section 2 briefly describes previous related techniques: the PMCMC algorithm and the SMCS [2]. Section 3 presents the novel EP MCMC filter. Section 4 presents results over the bearing only object tracking problem based on simulated data and over the object tracking problem in real video sequences. Finally, conclusions are given in Section 5.

2 Related Previous Works

The PMCMC algorithm [2] attempts to build a new target density π* which admits π as a marginal. The technique is motivated by two main hypotheses. The sequence of M densities {π_j}, j = 1, . . . , M, is selected such that they remain related and are in general easier to simulate than π. The algorithm comprises three main procedures: mutation, crossover and exchange, and all these operations are performed sequentially in time. A population is defined by X_k = [x_k^(i)], i = 1, . . . , N, where N is the number of samples (particles) at discrete time k. The usage of a population of samples allows global moves to be constructed, resulting in faster mixing MCMC algorithms [16, 17, 18]. Table 1 presents the PMCMC algorithm of Jasra et al. [2].

Table 1. A PMCMC Algorithm.

1. Initialisation
   • Set discrete time instant k = 1.
   • For each sample i = 1, ..., N, where N is the number of samples:
     – Draw X_k^(i) ∼ η_k, where X_k^(i) is a population of i samples and η_k is a known proposal distribution at time instant k.

2. For t = 1, ..., T, where T is the number of MCMC generations, perform
   • (MUTATION) Perform Mutation as illustrated in Section 3.2.
   • Make a random choice between steps.
   • (CROSSOVER) Perform Crossover as illustrated in Section 3.1.
   • (EXCHANGE) Perform Exchange as illustrated in Section 3.3.
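To make the structure of step 2 in Table 1 concrete, the following is a minimal Python sketch of the PMCMC generation loop. The data layout, the uniform random choice between moves and the interface of the operator callables are illustrative assumptions, not the authors' implementation; the operators themselves are sketched in Sections 3.1-3.3.

```python
import numpy as np

def pmcmc_generations(population, moves, T, rng=None):
    """Sketch of step 2 of Table 1: iterate T MCMC generations over a population.

    population : (N, d) array of samples x_k^(i)
    moves      : dict of callables for the 'mutation', 'crossover' and
                 'exchange' operators (see Sections 3.1-3.3)
    """
    rng = rng or np.random.default_rng()
    for _ in range(T):
        # Make a random choice between the mutation, crossover and exchange moves
        name = rng.choice(list(moves))
        population = moves[name](population, rng)
    return population
```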


Experimental results have confirmed that the PMCMC technique often produces satisfactory performance; nevertheless, it relies on clever population moves to accomplish reliable performance in efficient computational time. The second approach, the sequential Monte Carlo sampler (SMCS) technique, constructs samples from a sequence of related target distributions, using resampling techniques on the samples from the previous target density. SMCS methods have been proposed as a framework for dealing with non-linear and non-Gaussian dynamic models. In principle, the method accomplishes a good representation of the posterior distribution by considering a number of hypothesis samples taken from an easy to sample distribution π, weighting them based on the likelihood of the observations, and iterating over phases of particle selection, resampling and reweighting [19, 20, 21]. Most of these population techniques share similar grounds and have common difficulties. As the number of samples increases, the algorithm is able to better approximate the posterior distribution. However, degeneracy can occur, when the weights of all but one particle become negligible. In such cases resampling methods are used for handling degeneracy. The resampling phase is usually performed when the measured Effective Sample Size (ESS) [6] is less than a predefined threshold N_thr. Generally, resampling involves selecting new particle positions and weights such that the discrepancy between the resampled weights is reduced [4]. The SMCS proposed in [2] is summarised in Table 2.

Table 2. A SMCS Algorithm [2]

1. Initialisation
   • Set discrete time instant k = 1.
   • For each sample i = 1, ..., N, where N is the number of samples:
     – Draw the population X_k^(i) ∼ η_k, where X_k^(i) is a population of i samples and η_k is a known proposal distribution at time instant k.
   • Evaluate weights w_k(X_k^(i)) from the likelihood and normalise the weights to obtain W_k^(i).

2. Iterate steps 3 and 4.

3. Resampling
   • If ESS ≤ N_thr, where the ESS is measured as (Σ_{i=1}^{N} (W_k^(i))²)^{-1} and N_thr is a threshold:
     – Resample.
     – Set W_k^(i) = 1/N.

4. Sampling
   • Increment the time instant k = k + 1; if k = p + 1, then stop.
   • For each sample i = 1, ..., N:
     – Draw X_k^(i) ∼ K_k(X_{k-1}^(i), ·), where K_k is a kernel function.
   • Evaluate weights w̃_k(X_{k-1:k}^(i)) and normalise the weights to obtain
     $$ W_k^{(i)} = \frac{W_{k-1}^{(i)}\,\tilde{w}_k(X_{k-1:k}^{(i)})}{\sum_{j=1}^{N} W_{k-1}^{(j)}\,\tilde{w}_k(X_{k-1:k}^{(j)})}. $$
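As a concrete illustration of the resampling test in step 3 of Table 2, the following is a minimal Python sketch of the effective sample size computation together with a systematic resampling step (one of the schemes discussed in the introduction). The implementation details and the Dirichlet stand-in for the weights are assumptions for illustration only.

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = (sum_i w_i^2)^(-1) for normalised weights."""
    return 1.0 / np.sum(weights ** 2)

def systematic_resample(weights, rng=None):
    """Return indices of resampled particles using systematic resampling."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    # One uniform draw, then deterministically spaced positions in [0, 1)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

# Usage corresponding to step 3 of Table 2 (N_thr = N/10, as in Section 4.1):
weights = np.random.dirichlet(np.ones(1000))   # stand-in for normalised W_k^(i)
if effective_sample_size(weights) <= len(weights) / 10:
    idx = systematic_resample(weights)
    weights = np.full(len(weights), 1.0 / len(weights))   # set W_k^(i) = 1/N
```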

3 The Proposed EP MCMC Filter

This section presents an approach combining ideas from the PMCMC and SMCS techniques [2] and from genetic algorithms [22] for probabilistic inference. The motivation for the proposed population particle filtering algorithm is drawn from the similarities between the structure of the PMCMC algorithms [2] and genetic algorithms [22]. Genetic algorithms [22] are characterised by a set of candidates for the optimal solution at each generation and usually comprise three steps: crossover, mutation and selection. The resampling process in particle filtering is analogous to the reproduction procedure of genetic algorithms. Both procedures are aimed at introducing variety in the population of samples. The main advantage of the proposed EP MCMC technique compared with the SMCS is that it provides a very systematic mechanism for drawing samples from a target density and is therefore easy to implement. Table 3 presents the EP MCMC filter algorithm.

Table 3. The Proposed EP MCMC Filter

1. Initialisation
   • Set discrete time instant k = 1 and number of populations n = 1.
   • For each sample i = 1, ..., N:
     – Draw X_{n,k}^(i) ∼ η_k, where X_{n,k}^(i) is the nth population containing i samples at the kth time instant and η_k is a known proposal distribution at time instant k.
   • Evaluate weights w_{n,k}(X_k^(i)) from the likelihood and normalise the weights to obtain W_{n,k}^(i).

2. Iterate steps 3 and 4.

3. Resampling
   • If ESS ≤ N_thr1, where the ESS is measured as (Σ_{i=1}^{N} (W_{n,k}^(i))²)^{-1} and N_thr1 is some threshold:
     – For each population j = 1, ..., n, resample:
       ∗ Iterate over generations.
       ∗ (MUTATION) Perform Mutation as illustrated in Section 3.2.
       ∗ Make a random choice between steps.
       ∗ (CROSSOVER) Perform Crossover as illustrated in Section 3.1.
       ∗ (EXCHANGE) Perform Exchange as illustrated in Section 3.3.
       ∗ Recompute the weights w_{n,k}(X_k^(i)) from the likelihood and normalise the weights to obtain W_{n,k}^(i).
     – End

4. Sampling
   • Increment the time instant k = k + 1; if k = p + 1, then stop.
   • If W_{n,k-1}^(i) / W_{n+1,k-1}^(i) ≤ N_thr2, where N_thr2 is some threshold, perform Independent Particle-to-Particle Sampling:
     – Iterate over generations and for each sample i = 1, ..., N draw new populations X_{n,k}^(i) using:
       ∗ (MUTATION) Perform Mutation as illustrated in Section 3.2.
       ∗ Make a random choice between steps.
       ∗ (CROSSOVER) Perform Crossover as illustrated in Section 3.1.
       ∗ (EXCHANGE) Perform Exchange as illustrated in Section 3.3.
     – End
   • Evaluate weights w̃_k(X_{k-1:k}) from the current and previous states using the likelihood and normalise the weights to obtain W_k^(i).

In the following subsections, the crossover, mutation and exchange operators used in the PMCMC and the EP MCMC methods are described.

3.1 Crossover

In the proposed strategy the component vector is encoded as a string of binary bits. The crossover point lc is a random point on the string of bits of length l. The crossover operator cannot be applied to all parts of the component vector. Some parts of the component vector may not undergo any change, and for such components the probability of crossover ρc is zero. This leaves crossover operating only on components that are expected to undergo random changes. The crossover operator functions on two distinct component vectors, for example xs and xq. A detailed algorithm of the crossover scheme, sketched in code below, is as follows:

• Draw a uniform random number uc ∼ U(0, 1) for every component and, if uc ≤ ρc, perform crossover.
• Estimate a crossover point lc as a uniform random integer between 1 and the length of the component l.
• Generate two offspring: xs = (x1s, x2s, ..., x(lc−1)s, xlcq, ..., xdq) and xq = (x1q, x2q, ..., x(lc−1)q, xlcs, ..., xds), where ds and dq refer to the lengths of the samples xs and xq respectively.
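A minimal Python sketch of this single-point, bit-string crossover is given below, assuming the two components have already been encoded as equal-length binary strings; the encoding itself and the value of the per-component crossover probability ρc are application choices not fixed by the paper.

```python
import numpy as np

def crossover(bits_s, bits_q, rho_c=0.5, rng=None):
    """Single-point crossover of two binary-encoded components (Section 3.1).

    bits_s, bits_q : equal-length numpy arrays of 0/1 values
    rho_c          : probability that crossover is applied to this component
    """
    rng = rng or np.random.default_rng()
    if rng.random() > rho_c:                   # u_c > rho_c: leave unchanged
        return bits_s.copy(), bits_q.copy()
    l_c = rng.integers(1, len(bits_s))         # crossover point in {1, ..., l-1}
    child_s = np.concatenate([bits_s[:l_c], bits_q[l_c:]])
    child_q = np.concatenate([bits_q[:l_c], bits_s[l_c:]])
    return child_s, child_q

# Example: cross two 8-bit encodings
s = np.array([1, 0, 1, 1, 0, 0, 1, 0])
q = np.array([0, 1, 0, 0, 1, 1, 0, 1])
print(crossover(s, q))
```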

3.2 Mutation

Figure 1: Exchange operation illustrated on an example set of samples (first row), their probabilities (second row) and the new hybrid offspring.

A probability of mutation is initially defined for each component. Such a probability is chosen so that components that need no stochastic fluctuations can be prevented from undergoing the mutation operation. For such components, the probability of mutation ρm is considered zero. The components are represented as a vector string of binary units. According to the proposed mutation mechanism:

• Draw a uniform random number um ∼ U(0, 1) for every component and, if um ≤ ρm, perform mutation.
• Estimate a mutation point lm as a uniform random integer between 1 and the length of the component l.
• Flip the lm-th bit.

3.3 Exchange

Consider two independent chains of samples, for example s and q. Swap the information between the two chains based on a probability min{1, A}, as illustrated in Figure 1, where

$$ A = \frac{\pi_s(x_q)\,\pi_q(x_s)}{\pi_s(x_s)\,\pi_q(x_q)}. \qquad (1) $$

4 Results and Analysis

4.1 Bearings Only Tracking

Simulation results from the bearing only tracking problem (with the same parameters as in [23]) are presented in this subsection. The target motion is modelled using a constant velocity model

$$ \boldsymbol{x}_k = \begin{pmatrix} 1 & T_p & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & T_p \\ 0 & 0 & 0 & 1 \end{pmatrix} \boldsymbol{x}_{k-1} + \boldsymbol{v}_k, \qquad (2) $$

where v_k is a Gaussian process noise, v_k ∼ N(0, Σ), the sampling period is Tp = 1 and

$$ \Sigma = 5 \begin{pmatrix} \frac{T_p^3}{3} & \frac{T_p^2}{2} & 0 & 0 \\ \frac{T_p^2}{2} & T_p & 0 & 0 \\ 0 & 0 & \frac{T_p^3}{3} & \frac{T_p^2}{2} \\ 0 & 0 & \frac{T_p^2}{2} & T_p \end{pmatrix}. \qquad (3) $$

The state vector is x_k = (x_k^1, x_k^2, x_k^3, x_k^4)', where x_k^1 and x_k^3 correspond to the horizontal and vertical positions, while x_k^2 and x_k^4 correspond to the velocities. The observation equation of the bearing only tracking problem is

$$ y_k = \tan^{-1}\!\left( \frac{x_k^3}{x_k^1} \right) + w_k, \qquad (4) $$

where the measurement noise is w_k ∼ N(0, R), with covariance R = 10^{-1}. Experiments are carried out comparing the proposed EP MCMC filter with the PMCMC, SMCS [2] and GPF [1] (with recursively calculated weights). Their performance is evaluated using the combined (in x and y directions) root mean square error (RMSE) and the average number of unique particles. Figure 2 shows the RMSE over 100 Monte Carlo runs. It is evident that the EP MCMC filtering mechanism produces a smaller RMSE than the PMCMC, SMCS and GPF across most intervals of time.

All filters are run with N = 1000 particles. The thresholds used are as follows: N_thr = N_thr1 = N_thr2 = N/10. However, the algorithms maintain a different number of unique particles, as is evident from Figure 3, which shows the average number of unique particles that the EP MCMC filter uses in comparison with the GPF.
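The mutation and exchange operators of Sections 3.2 and 3.3 can be sketched in a few lines of Python; the binary encoding, the value of the per-component mutation probability ρm and the use of log densities are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def mutate(bits, rho_m=0.1, rng=None):
    """Bit-flip mutation of one binary-encoded component (Section 3.2)."""
    rng = rng or np.random.default_rng()
    out = bits.copy()
    if rng.random() <= rho_m:                 # u_m <= rho_m: mutate this component
        l_m = rng.integers(0, len(bits))      # mutation point
        out[l_m] ^= 1                         # flip the l_m-th bit
    return out

def exchange(x_s, x_q, log_pi_s, log_pi_q, rng=None):
    """Exchange move between chains s and q (Section 3.3), accepted with
    probability min{1, A}, A = pi_s(x_q) pi_q(x_s) / (pi_s(x_s) pi_q(x_q))."""
    rng = rng or np.random.default_rng()
    log_a = (log_pi_s(x_q) + log_pi_q(x_s)) - (log_pi_s(x_s) + log_pi_q(x_q))
    if np.log(rng.random()) < min(0.0, log_a):
        return x_q.copy(), x_s.copy()         # swap the states of the two chains
    return x_s, x_q
```

For completeness, a short Python sketch of the simulation model (2)-(4) of Section 4.1 is given below; the trajectory length and initial state are arbitrary choices for illustration.

```python
import numpy as np

Tp = 1.0
F = np.array([[1, Tp, 0, 0],
              [0, 1,  0, 0],
              [0, 0,  1, Tp],
              [0, 0,  0, 1]])
Sigma = 5 * np.array([[Tp**3/3, Tp**2/2, 0,       0      ],
                      [Tp**2/2, Tp,      0,       0      ],
                      [0,       0,       Tp**3/3, Tp**2/2],
                      [0,       0,       Tp**2/2, Tp     ]])
R = 1e-1

def simulate(x0, n_steps, rng=None):
    """Generate states with the constant velocity model (2)-(3) and
    bearing-only measurements (4)."""
    rng = rng or np.random.default_rng()
    x, xs, ys = np.asarray(x0, dtype=float), [], []
    for _ in range(n_steps):
        x = F @ x + rng.multivariate_normal(np.zeros(4), Sigma)
        # arctan2 is used as a numerically robust form of tan^{-1}(x^3 / x^1)
        y = np.arctan2(x[2], x[0]) + rng.normal(0.0, np.sqrt(R))
        xs.append(x.copy()); ys.append(y)
    return np.array(xs), np.array(ys)

states, bearings = simulate(x0=[10.0, 1.0, 10.0, 1.0], n_steps=30)
```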

Figure 2: Root mean square error (combined on x and y positions) of the EP MCMC, PMCMC [2], SMCS [2] and GPF [1] over 100 Monte Carlo runs.

The EP MCMC filter retains a higher number of unique particles compared with the GPF. This fact predetermines the superior performance of the EP MCMC algorithm compared with the other considered filters. The EP MCMC filter introduces variety in the particles both in the sampling and resampling steps.

Figure 3: Average number of unique particles (before resampling) for one run of the EP MCMC, PMCMC [2], SMCS [2] and GPF [1].

4.2 Video Body Parts Tracking

The performance of the proposed EP MCMC technique is illustrated on the body parts tracking problem [24]. The object detection is performed by means of the cluster background subtraction algorithm proposed in [25]. In the context of human body parts tracking, the centre of the region surrounding each body part is evaluated. Video data sets [26] comprising different motions are used. Results are shown over a video sequence from the CMU data set [26] comprising a moving person and consisting of 153 colour image frames at a resolution of 352 x 240 pixels.

The location of the centre (tx, ty) of the torso can be estimated and all other body parts (legs, arms and head) are located with respect to the torso, by taking into account certain geometrical/spatial constraints between the body parts. The centre of every other part within the body model is calculated using the centre of the torso and accounting for its rotational degree of freedom. The overall state vector of the population particle filter is assumed to contain the centre of the torso and the rotation angles of all the parts. A constant velocity model is assumed and the likelihood is a function of the colour distribution.

The experiments are conducted on an Intel Core Duo processor with 1 GB RAM. Figure 4 shows the actual trajectory of the moving object and the estimated trajectories obtained by the proposed EP MCMC filter, the PMCMC [2], SMCS [2] and GPF [1]. The actual trajectory of the torso is manually obtained and this is our ground truth data. It can be observed that the GPF loses track of the moving object at around the 27th time instant and this is a direct consequence of loss of diversity in the population of samples. The proposed EP MCMC algorithm is able to track the moving object accurately in comparison with the manually labelled ground truth data.

Figure 4: Trajectory of the moving object estimated by the EP MCMC, PMCMC [2], SMCS [2], the generic PF [1] and ground truth data.

Figure 5 presents the absolute mean error in the x and y co-ordinates for the EP MCMC, PMCMC, SMCS and GPF. It is evident that there is a big difference between the errors from the GPF and those from the other algorithms. The EP MCMC filter gives the least absolute mean error.

Figure 5: Absolute mean error in pixels in x and y co-ordinates for the proposed EP MCMC, PMCMC [2], SMCS [2] and GPF [1], computed from the position estimates and the ground truth data.

In Figure 6, the root mean square error combined in both x and y directions is plotted against time (frame number). The proposed EP MCMC algorithm produces the least error in comparison to the other methods. This is further validated by the average root mean square error (in x and y) over the considered time interval: (5.4634, 5.8572) using the EP MCMC approach, (34.0862, 17.9748) through the PMCMC, (23.0862, 10.2334) using the SMCS and (94.0862, 25.8453) for the GPF.

Figure 6: Root mean square error (combined on x and y positions) for the proposed EP MCMC, PMCMC [2], SMCS [2] and GPF [1].
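The paper states only that the likelihood in the video experiments is a function of the colour distribution. One common concrete choice, shown here purely as an assumed example and not as the authors' implementation, is a likelihood based on the Bhattacharyya similarity between the colour histogram of a candidate region and a reference histogram.

```python
import numpy as np

def colour_histogram(pixels, bins=8):
    """Normalised joint RGB histogram of an (M, 3) array of pixels in [0, 255]."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / max(hist.sum(), 1.0)

def colour_likelihood(candidate_pixels, reference_hist, lam=20.0):
    """Assumed colour likelihood: exp(-lam * d^2), with d the Bhattacharyya
    distance between candidate and reference colour histograms."""
    cand = colour_histogram(candidate_pixels)
    bc = np.sum(np.sqrt(cand * reference_hist))   # Bhattacharyya coefficient
    return np.exp(-lam * (1.0 - bc))
```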

5 Conclusions

This paper proposes a novel evolving population MCMC technique for probabilistic inference. The proposed technique is compared with the population MCMC particle filtering algorithm [2], the SMC sampler algorithm [2] and the generic PF. The results demonstrate that the main appealing property of the proposed EP MCMC filtering technique is its ability to iterate through the crossover, exchange and mutation processes to build a diverse population while simultaneously resampling particles with a deterministic selection procedure.

Acknowledgement: The authors acknowledge the support of the UK MOD Data and Information Fusion Defence Technology Centre under the Tracking Cluster project DIFDTC/CSIPC1/02.

References

[1] E. Wan and R. van der Merwe, "The Unscented Kalman Filter," Ch. 7 in Kalman Filtering and Neural Networks, S. Haykin, Ed. Wiley Publishing, September 2001, pp. 221–280.

[2] A. Jasra, D. A. Stephens, and C. C. Holmes, "On population-based simulation for static inference," Stat. Comput., vol. 17, no. 3, pp. 263–279, 2007.


[3] N. Gordon, D. Salmond, and A. Smith, "A novel approach to nonlinear/non-Gaussian Bayesian state estimation," IEE Proceedings-F, vol. 140, pp. 107–113, April 1993.


[4] A. Doucet, N. de Freitas, and N. Gordon, Eds., Sequential Monte Carlo Methods in Practice. New York: Springer-Verlag, 2001.


[5] M. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Trans. on Signal Proc., vol. 50, no. 2, pp. 174–188, 2002.

[6] J. Liu and R. Chen, "Sequential Monte Carlo methods for dynamic systems," Journal of the American Statistical Association, vol. 93, no. 443, pp. 1032–1044, 1998.

[7] J. Hol, T. Schön, and F. Gustafsson, "On resampling algorithms for particle filters," in Proc. Nonlinear Statistical Signal Processing Workshop, Cambridge, United Kingdom, September 2006.

[8] R. Douc, E. Moulines, and O. Cappé, "Comparison of resampling schemes for particle filtering," in Proc. of the 4th International Symp. on Image and Signal Processing and Analysis, 2005, pp. 64–69.

[9] T. Higuchi, "Monte Carlo algorithm using the genetic algorithm operations," J. Statist. Comput. Simul., vol. 59, pp. 1–23, 1997.

[10] M. Strens, M. Bernhardt, and N. Everett, "Markov chain Monte Carlo sampling using direct search optimization," in Proc. of the Nineteenth International Conf. on Machine Learning, 2002.

[11] M. Strens, "Evolutionary MCMC sampling and optimization in discrete spaces," in Proc. of the Twentieth International Conf. on Machine Learning, 2003.

[12] S. Park, J. Hwang, and E. Kim, "A new particle filter inspired by biological evolution: Genetic filter," in Proc. of World Academy of Science, Engineering and Technology, vol. 21, 2007.

[13] C. J. Geyer, "Markov chain Monte Carlo maximum likelihood," in Computing Science and Statistics: Proc. 23rd Symp. Interface, 1991, pp. 156–163.

[14] J. Liu, Monte Carlo Strategies in Scientific Computing. Springer Verlag, 2001.

[15] F. Liang and W. H. Wong, "Real-parameter evolutionary Monte Carlo with applications to Bayesian mixture models," Journal of the American Statistical Association, vol. 96, pp. 653–666, June 2001.

[16] J. Pantrigo, A. Sánchez, K. Gianikellis, and S. Montemayor, "Combining particle filter and population based metaheuristics for visual articulated object tracking," Electronic Letters on Computer Vision and Image Analysis, vol. 5, no. 3, pp. 68–83, 2005.

[17] P. Fearnhead, "Markov chain Monte Carlo, sufficient statistics, and particle filters," Journal of Computational and Graphical Statistics, vol. 11, no. 4, pp. 848–862, 2002.

[18] W. Gilks, S. Richardson, and D. Spiegelhalter, Eds., Markov Chain Monte Carlo in Practice. Chapman and Hall, 1999.

[19] O. Cappé, S. Godsill, and E. Moulines, "An overview of existing methods and recent advances in sequential Monte Carlo," Proceedings of the IEEE, vol. 95, no. 5, pp. 899–924, 2007.

[20] J. Vermaak, N. Ikoma, and S. Godsill, "Sequential Monte Carlo framework for extended object tracking," IEE Proc.-Radar, Sonar Navig., vol. 152, no. 5, pp. 353–363, 2005.

[21] O. Frank, J. Nieto, J. Guivant, and S. Scheding, "Multiple target tracking using sequential Monte Carlo methods and statistical data association," in Proc. of the IROS, 2003.

[22] M. Mitchell, An Introduction to Genetic Algorithms (Complex Adaptive Systems). MIT Press, 1998.

[23] A. Doucet, M. Briers, and S. Senecal, "Efficient block sampling strategies for sequential Monte Carlo methods," Journal of Computational and Graphical Statistics, vol. 15, no. 3, pp. 693–711, 2006.

[24] D. Ramanan, D. A. Forsyth, and A. Zisserman, "Tracking people by learning their appearance," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 1, pp. 65–81, 2007.

[25] H. Bhaskar, L. Mihaylova, and S. Maskell, "Automatic target detection based on background modeling using adaptive cluster density estimation," in LNCS from the 3rd German Workshop on Sensor Data Fusion: Trends, Solutions, Applications, Germany, 2007, pp. 130–134.

[26] R. Gross and J. Shi, "The CMU motion of body (MoBo) database (CMU-RI-TR-01-18)," Robotics Inst., Carnegie Mellon Univ., Tech. Rep., 2001. The data are available at: http://mocap.cs.cmu.edu/.