Region-restricted rapid keypoint registration

Zhenghao Li,1 Weiguo Gong,1,* A. Y. C. Nee,2 and S. K. Ong2

1Key Lab of Optoelectronic Technology and System of Ministry of Education, Chongqing University, 174 Shapingba Street, Chongqing 400044, China
2Department of Mechanical Engineering, Faculty of Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore 117576, Singapore
*[email protected]

Abstract: A two-stage keypoint registration approach is proposed to achieve frame-rate performance while maintaining high accuracy under large perspective and scale variations. First, an agglomerative clustering algorithm based on an effective edge significance measure is adopted to derive the corresponding regions for keypoint detection. Next, a lightweight detector and a compact descriptor are utilized to obtain the exact locations of the keypoints. In conjunction with the point transferring method, the proposed approach can perform registration tasks in textureless regions robustly. Experiments demonstrate that the approach can handle real-time tracking tasks. ©2009 Optical Society of America

OCIS codes: (100.2960) Image analysis; (100.3008) Image recognition, algorithms and filters; (100.4999) Pattern recognition, target tracking.

References and links
1. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis. 60(2), 91–110 (2004).
2. K. Mikolajczyk and C. Schmid, “Performance evaluation of local descriptors,” IEEE Trans. Pattern Anal. Mach. Intell. 27(10), 1615–1630 (2005).
3. H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features (SURF),” Comput. Vis. Image Underst. 110(3), 346–359 (2008).
4. V. Lepetit and P. Fua, “Keypoint recognition using randomized trees,” IEEE Trans. Pattern Anal. Mach. Intell. 28(9), 1465–1479 (2006).
5. M. Özuysal, P. Fua, and V. Lepetit, “Fast keypoint recognition in ten lines of code,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Minneapolis, United States, 2007), pp. 1–8.
6. J. Matas, O. Chum, M. Urban, and T. Pajdla, “Robust wide-baseline stereo from maximally stable extremal regions,” Image Vis. Comput. 22(10), 761–767 (2004).
7. K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. V. Gool, “Comparison of affine region detectors,” Int. J. Comput. Vis. 65(1-2), 43–72 (2005).
8. P. E. Forssén, “Maximally stable colour regions for recognition and matching,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Minneapolis, United States, 2007), pp. 1220–1227.
9. C. Boncelet, “Image noise models,” in Handbook of Image and Video Processing (second edition), A. C. Bovik, ed. (Elsevier Academic Press, San Diego, United States, 2005).
10. P. E. Forssén and A. Moe, “View matching with blob features,” Image Vis. Comput. 27(1-2), 99–107 (2009).
11. Visual Geometry Group, “Affine covariant regions datasets,” http://www.robots.ox.ac.uk/~vgg/data/.
12. T. Liu, A. W. Moore, A. Gray, and K. Yang, “An investigation of practical approximate nearest neighbor algorithms,” in Advances in Neural Information Processing Systems, L. K. Saul, Y. Weiss, and L. Bottou, eds. (MIT Press, Cambridge, 2005).
13. T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002).
14. V. Lepetit and P. Fua, “Towards recognizing feature points using classification trees,” Technical Report IC/2004/74 (EPFL, 2004).
15. E. Rosten and T. Drummond, “Fusing points and lines for high performance tracking,” in Proceedings of the IEEE International Conference on Computer Vision (Beijing, China, 2005), pp. 1508–1515.
16. E. Rosten and T. Drummond, “Machine learning for high-speed corner detection,” in Proceedings of the European Conference on Computer Vision (Graz, Austria, 2006), pp. 430–443.
17. Z. Li, W. Gong, A. Y. C. Nee, and S. K. Ong, “The effectiveness of detector combinations,” Opt. Express 17(9), 7407–7418 (2009), http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-17-9-7407.


18. M. L. Yuan, S. K. Ong, and A. Y. C. Nee, “Registration using natural features for augmented reality systems,” IEEE Trans. Vis. Comput. Graph. 12(4), 569–580 (2006).

1. Introduction

Real-time keypoint registration is a crucial component of many practical applications in computer vision, ranging from video retrieval to augmented reality (AR). Although current local feature algorithms, such as SIFT [1], GLOH [2] and SURF-128 [3], achieve notable performance in static image registration under large viewpoint and illumination changes, the additional processing they require to eliminate second-order effects (including skew and anisotropic scaling) in the detection and description stages incurs considerable computational cost and reduces the number of correspondences. Moreover, these algorithms are too time-consuming to achieve frame-rate performance.

Lepetit et al. advocated formulating wide-baseline matching as a classification problem [4,5], training the classifier by generating numerous synthetic views of the keypoints as they would appear under different perspective or scale distortions. This shifts much of the computational burden to the training stage without sacrificing registration performance, and it is therefore sufficiently fast for real-time applications. However, this type of algorithm requires a large amount of memory for building the view sets and randomized trees, and the run time of the training stage is accordingly quite long.

Adhering to the direction of pursuing a good compromise among accuracy, robustness and time cost, we explore a two-stage keypoint registration approach, designated R3KR as the acronym of ‘Region-Restricted Rapid Keypoint Registration’. First, feature regions with high distinctiveness are detected from two images using an effective edge significance measure, and their correspondences are obtained. Next, a lightweight local feature registration algorithm is adopted to obtain the exact locations of the keypoints between the two corresponding regions. Note that keypoint extraction in the second stage is performed on the matched feature regions, which are derived with invariance to affine transformation. The influences of scale, illumination, contrast and rotation are also taken into consideration in the registration process of the second stage to further increase the robustness. Thus, the proposed approach is theoretically affine invariant. Furthermore, benefiting from the two-stage framework from region correspondence to keypoint correspondence, the time cost is substantially reduced. We have applied the proposed algorithm to markerless tracking in an AR environment and obtained desirable results.

The remainder of this paper is organized as follows. Section 2 introduces the implementation details of the proposed algorithm. In Section 3, experimental results are presented, and an application for markerless AR tracking is also introduced. Lastly, conclusions are presented in Section 4.

2. Region restricted rapid keypoint registration

2.1 Region correspondence stage

As mentioned above, this stage derives the corresponding regions for keypoint detection. A straightforward way to realize it is to make use of image segmentation methods. The highly recognized maximally stable extremal region (MSER) detector proposed by Matas et al. [6] employs a watershed-like algorithm to derive the feature regions and yields remarkable performance [7]. Forssén further extended MSER to color space utilizing an effective edge significance measure [8]. Inspired by their work, we obtain the candidate regions using the following procedures.
As is known, most image acquisition devices are photon counters. When a large number of photons hit the surface of a charge-coupled device (CCD), the image noise can be modeled by a discrete Poisson distribution, which can be well approximated by a Gaussian distribution [9]. Given the expected intensity µ, the probability of the measured intensity I can be defined as


P(I \mid \mu) = G(\mu, a\mu), \qquad (1)

where a is the camera gain that converts the photon count to a pixel value (simply set to a = 1). Therefore, the probability that a pixel location P has a larger mean than its neighbor Q can be described as

P(\mu(P) > \mu(Q)) = 1 - \Phi\left(-\frac{\mu(P) - \mu(Q)}{\sqrt{\mu(P) + \mu(Q)}}\right) = \Phi\left(\frac{\mu(P) - \mu(Q)}{\sqrt{\mu(P) + \mu(Q)}}\right), \qquad (2)

where Φ is the cumulative distribution function (CDF) of the standardized normal distribution. Since Φ is monotonically increasing from 0 to 1, the absolute value of its argument can be utilized as a measure of edge significance. Meanwhile, µ(P) and µ(Q) can be replaced by their maximum likelihood estimates I(P) and I(Q). Thus, the measure of edge significance d can be represented as

d = \frac{\left|\mu(P) - \mu(Q)\right|}{\sqrt{\mu(P) + \mu(Q)}} = \frac{\left|I(P) - I(Q)\right|}{\sqrt{I(P) + I(Q)}}. \qquad (3)

Defining two pixels as belonging to the same contiguous region if their edge significance is smaller than a certain threshold d_thr, an image evolution process is obtained by varying d_thr with the increase of the time step t. However, the area of a region grows faster in the beginning and more slowly towards the end. In order to make the image evolution approximately proportional to the time steps, Ref. [8] introduced an inverse conversion of the CDF instead of directly measuring the distance threshold. Thus, it can be constructed as

c(x) = P(d < x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x/\lambda} e^{-y^{2}} \, dy. \qquad (4)
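For illustration only, the following Python sketch evaluates the edge significance of Eq. (3) between horizontally adjacent pixels and maps it through c(x) of Eq. (4); it is not the authors' implementation, and the restriction to horizontal neighbours, the value of λ, and the function names are our own assumptions.

```python
import numpy as np
from math import erf

def edge_significance(img):
    """Eq. (3): d = |I(P) - I(Q)| / sqrt(I(P) + I(Q)) for horizontal neighbours.

    `img` holds raw intensities; a tiny epsilon guards against division by
    zero in completely dark areas.
    """
    I_p = img[:, :-1].astype(np.float64)
    I_q = img[:, 1:].astype(np.float64)
    return np.abs(I_p - I_q) / np.sqrt(I_p + I_q + 1e-12)

def c_of_x(x, lam=1.0):
    """Eq. (4): c(x) = (2 / sqrt(pi)) * integral_0^{x/lam} exp(-y^2) dy = erf(x / lam)."""
    return erf(x / lam)

# Tiny usage example on a synthetic 2x3 patch with one strong step edge.
patch = np.array([[10.0, 10.0, 40.0],
                  [12.0, 11.0, 38.0]])
d = edge_significance(patch)
print(d)                 # larger values where the 10 -> 40 step occurs
print(c_of_x(d.max()))   # mapped into [0, 1) by the erf conversion
```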

Consequently, the distance threshold at time step t can be computed as

d_{thr}(t) = c^{-1}(t/T), \qquad t \in [0, T], \qquad (5)

where T is the total number of time steps and is set to 200. By increasing the time step t, numerous sets of contiguous regions with unique labels can be obtained. During the evolution process, the cardinality of every contiguous region C(t) and its evolution threshold d_thr(t) are recorded. The expansion rate r at time step t is defined as

r_{t} = \frac{C(t + \Delta) - C(t)}{d_{thr}(t + \Delta) - d_{thr}(t)}, \qquad (6)

where ∆ is the increment. A region with a small expansion rate, namely, a region whose cardinality changes little while the distance threshold varies relatively widely, is selected as a feature region for exact keypoint detection. Only the largest of the nested feature regions in an evolution process is retained. Regions whose areas differ by less than 15% are defined as overlapping regions and can also be pruned. We also restrict the number of feature regions by cardinality in order to reduce redundant computational cost. Figure 1(b) shows the feature regions detected from Fig. 1(a). We use approximate ellipses of the regions instead of the actual regions. The approximate ellipse for a detected region can be described as

R(M, C) = \{X : (X - M)^{T} C^{-1} (X - M) \leq 4\}, \qquad (7)

where M and C represent the centroid and inertia matrix of an ellipse respectively [10]. They can be derived from the raw moments µ as


M = \frac{1}{\mu_{00}} \begin{pmatrix} \mu_{01} \\ \mu_{10} \end{pmatrix}, \qquad C = \frac{1}{\mu_{00}} \begin{pmatrix} \mu_{02} & \mu_{11} \\ \mu_{11} & \mu_{20} \end{pmatrix} - M M^{T}. \qquad (8)
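As a rough illustration of Eqs. (7) and (8), the sketch below estimates the centroid and inertia matrix of a binary region mask from its pixel moments and tests whether a point lies inside the approximate ellipse. It is our own hedged example rather than code from the paper; the helper names and the coordinate ordering are assumptions.

```python
import numpy as np

def region_ellipse(mask):
    """Approximate a binary region by an ellipse, in the spirit of Eqs. (7)-(8).

    Returns the centroid M (2-vector) and the inertia matrix C (2x2), i.e. the
    second central moments of the region's pixel coordinates.
    """
    ys, xs = np.nonzero(mask)                       # pixel coordinates of the region
    coords = np.stack([ys, xs], axis=1).astype(np.float64)
    mu00 = float(len(coords))                       # zeroth moment = cardinality
    M = coords.mean(axis=0)                         # first moments divided by mu00
    centred = coords - M
    C = centred.T @ centred / mu00                  # second central moments
    return M, C

def inside_ellipse(X, M, C):
    """Eq. (7): membership test (X - M)^T C^{-1} (X - M) <= 4."""
    diff = np.asarray(X, dtype=np.float64) - M
    return float(diff @ np.linalg.solve(C, diff)) <= 4.0

# Usage: a small elongated blob.
mask = np.zeros((20, 20), dtype=bool)
mask[8:12, 3:17] = True
M, C = region_ellipse(mask)
print(M, inside_ellipse((9, 10), M, C), inside_ellipse((2, 2), M, C))
```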

Fig. 1. Reference image (a) and the detected feature regions for exact keypoint detection (b). The reference image is from the ‘Graffiti’ sequence provided by the Visual Geometry Group [11]. The cardinality of each feature region is restricted from 900 to 36000.

As can be observed from Fig. 1, although the number of non-overlapping feature regions is limited, the retained regions are capable of covering almost the entire image. The remaining elliptical regions are then warped into circular regions of a consistent radius to obtain affine invariance. Next, the standard 128-dimensional SIFT descriptor [1] and the hybrid Spill Tree (SP-Tree) [12] are adopted in the description and matching section.

2.2 Keypoint correspondence stage

After the processes of the region correspondence stage described above, the corresponding feature regions are built and are used in the second stage for fine matching. In order to guarantee a sufficient number of candidate keypoints, modern detectors tend to locate keypoints by simply examining the intensities of certain pixels around tentative locations. Following a methodology similar to the algorithms proposed in [4,13–16], we consider only the intensities along a circle of 16 pixels around each candidate keypoint. We locate a candidate keypoint at P if the intensities of at least n contiguous pixels on the circle are all above (negative) or all below (positive) the intensity of P by more than a certain threshold. This is illustrated in Fig. 2. Varying n in the range from 10 to 15, the best performance in our experiments is achieved when n is set to 12. This may be because a keypoint tends to be located in a uniform area or along an edge when n is smaller than 12; on the contrary, when n is larger than 12, the criterion becomes so strict that some ‘good’ keypoints are discarded. Usually, a featureless candidate can be rejected very quickly without scanning the entire circle.

Fig. 2. Keypoint location.
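The segment test described above can be sketched as follows. This is a simplified illustration of ours, not the authors' code; the Bresenham circle offsets, the default threshold, and the early-exit behaviour are assumptions, and no non-maximum suppression is included.

```python
import numpy as np

# 16-pixel Bresenham circle of radius 3 around a candidate keypoint.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_keypoint(img, y, x, n=12, thresh=20):
    """Return 'positive', 'negative' or None for the pixel at (y, x).

    A keypoint is declared when at least `n` contiguous circle pixels are all
    brighter (negative) or all darker (positive) than the centre by more than
    `thresh`, following the polarity convention in the text.
    """
    centre = int(img[y, x])
    ring = np.array([int(img[y + dy, x + dx]) for dy, dx in CIRCLE])
    darker = ring < centre - thresh
    brighter = ring > centre + thresh
    for mask, label in ((darker, 'positive'), (brighter, 'negative')):
        doubled = np.concatenate([mask, mask])      # handle runs that wrap around
        run = best = 0
        for v in doubled:
            run = run + 1 if v else 0
            best = max(best, min(run, 16))
        if best >= n:
            return label
    return None

# Usage: an isolated bright dot is surrounded by a darker ring.
img = np.zeros((16, 16), dtype=np.uint8)
img[8, 8] = 200
print(is_keypoint(img, 8, 8))   # -> 'positive'
print(is_keypoint(img, 4, 4))   # uniform area -> None
```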

Once a keypoint has been located, the intensities of every three contiguous pixels on the discrete circle R are combined in a weighted sum, resulting in a total of 16 sums. These 16 sums are utilized to construct a compact descriptor D. Let R_i be a pixel on the circle R, and let R_i^cw and R_i^ccw denote its neighboring pixels in the clockwise and counter-clockwise directions. Each element of the descriptor D can be described as

D_{i} = 0.5 \times I_{R_{i}^{cw}} + I_{R_{i}} + 0.5 \times I_{R_{i}^{ccw}}, \qquad i \in [1, 16]. \qquad (9)


This makes the description more stable than using a single pixel intensity as an element of the descriptor. In fact, the intensity I is substituted by the gradient G from the point on the circle to its center P in order to resist illumination changes. Thus, Eq. (9) can be rewritten as

D_{i} = 0.5 \times G_{R_{i}^{cw} \rightarrow P} + G_{R_{i} \rightarrow P} + 0.5 \times G_{R_{i}^{ccw} \rightarrow P}, \qquad i \in [1, 16]. \qquad (10)

The largest sum is chosen as the first element of the descriptor, and the vector is filled with the remaining sums of the circle in a clockwise direction. If more than one sum has the same largest value, all the possible descriptors are stored. This simple process is sufficient for rectifying the detected keypoint with respect to 2D rotation. Moreover, it considerably reduces the computational cost of calculating the orientation of the small image patch and of sorting the descriptor elements using the mean squared deviation (MSD). In addition, the descriptor vector is normalized in order to remove the variance of contrast. The polarity of every descriptor is also recorded so that the keypoints can be categorized as positive or negative. This partition is efficient for searching, since positive features need not be compared with negative ones. Lastly, all the descriptors are stored in a list ranked by the sum of all elements in a descriptor. It is known that the density of keypoints in a region depends on the image content: highly textured regions usually generate more candidate keypoints. In order to make the distribution of keypoints more uniform, the searching range is set to be proportional to the cardinality of the region (the density is limited to below 1 keypoint per 12 pixels). In the matching section of the second stage, the hybrid SP-Tree is also employed for its high efficiency. Finally, RANSAC, an effective way of eliminating spurious matches, should not be omitted.
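A minimal sketch of this descriptor construction, reflecting our own reading of Eqs. (9)-(10) and of the rotation and contrast normalization above, is given below; the sampling order, the tie handling, the polarity convention, and the function name are assumptions rather than the authors' implementation.

```python
import numpy as np

def compact_descriptor(img, y, x, circle):
    """Build a 16-element descriptor in the spirit of Eq. (10) for a keypoint at (y, x).

    `circle` is the ordered list of 16 (dy, dx) ring offsets. Each element is a
    weighted sum of the centre-directed gradients of three contiguous ring pixels;
    the vector is rotated so its largest element comes first and normalized to
    unit length to remove contrast variation.
    """
    centre = float(img[y, x])
    # Gradient from each ring pixel towards the centre P (Eq. (10)).
    grads = np.array([float(img[y + dy, x + dx]) - centre for dy, dx in circle])
    d = np.array([0.5 * grads[(i - 1) % 16] + grads[i] + 0.5 * grads[(i + 1) % 16]
                  for i in range(16)])
    d = np.roll(d, -int(np.argmax(d)))     # largest sum first: 2D-rotation rectification
    # Polarity convention is an assumption: a darker ring gives negative gradients.
    polarity = 'positive' if d.sum() < 0 else 'negative'
    norm = np.linalg.norm(d)
    return (d / norm if norm > 0 else d), polarity
```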

3. Experimental details

We have evaluated the proposed algorithm against two criteria, namely, the number of correspondences and the reconstruction similarity (RS). We advocate evaluating the performance of a registration algorithm using the RS metric, as it reflects the degree of matching error well; more details can be found in Ref. [17]. The evaluations are conducted on the standard test library [11] provided by the Visual Geometry Group (test images are resized to 800 × 600). The experimental results on the ‘Graffiti’ sequence are shown in Fig. 3. As can be observed from Fig. 3, the performance of R3KR does not deteriorate much under small rotation and viewpoint changes compared with SIFT and GLOH, and it even outperforms them when there are large perspective distortions. Moreover, the total registration time of R3KR (0.476 s) is only 20.59% of the computational time of SIFT (2.312 s) and 18.99% of that of GLOH (2.507 s). These data were collected on an ordinary laptop with a 2.4 GHz Intel® Core™2 Duo CPU and 2 GB RAM.

Figure 4 shows an application of indoor markerless AR tracking. It can be seen that R3KR can track the object robustly under various viewing conditions, even when the virtual object is located in a textureless region. The video sequence is acquired by a Sony IPELA CM120 camera, and the frame size is 320 × 240. Real-time performance (about 12.9 fps) is achieved through multi-threading: one thread handles the region correspondence stage on the current frame, while the other addresses the keypoint correspondence stage on the previous frame. It should be noted that, at present, no existing computer-vision-based algorithm can directly perform tracking in a textureless region, because current keypoint detection is intrinsically texture-based corner detection. In order to address this problem, we have proposed the point transferring method [18], and this strategy is integrated into the application. By estimating the projective matrices and transferring the keypoints, the location of the virtual teapot can be determined.
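The two-thread arrangement described above could be sketched as follows. This is only a rough illustration of the pipelining idea; `region_correspondence` and `keypoint_correspondence` are placeholder callables standing in for the two R3KR stages, not the authors' API.

```python
import queue
import threading

def run_pipeline(frames, region_correspondence, keypoint_correspondence):
    """Overlap the two R3KR stages over a frame stream.

    Thread A runs the region correspondence stage on the current frame while
    thread B runs the keypoint correspondence stage on the previous frame's
    region matches, so the two stages execute concurrently.
    """
    region_out = queue.Queue(maxsize=1)
    results = []

    def stage_a():
        for frame in frames:
            region_out.put(region_correspondence(frame))   # stage 1 on frame t
        region_out.put(None)                                # end-of-stream marker

    def stage_b():
        while True:
            regions = region_out.get()                      # stage 2 on frame t-1
            if regions is None:
                break
            results.append(keypoint_correspondence(regions))

    a, b = threading.Thread(target=stage_a), threading.Thread(target=stage_b)
    a.start(); b.start()
    a.join(); b.join()
    return results
```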


Fig. 3. Performance comparisons on the ‘Graffiti’ sequence.

Fig. 4. Application of markerless AR tracking. (a) shows the reference frame, and (b) shows the detected keypoints and the initial location of the virtual object. (c) and (d), (e) and (f), and (g) and (h) show the object tracking under viewpoint change, rotation, and scale variations, respectively.

4. Conclusions

In conclusion, we have presented a feasible keypoint registration approach using a two-stage framework. Our experiments have shown that this approach not only runs relatively fast, but also tolerates perspective distortion and lighting changes and handles partial occlusions. Thus, the proposed approach is applicable to keypoint tracking and on-line image registration.

Acknowledgements

This research is supported by the National High-Tech Research and Development Plan of China (2007AA01Z423) and by the National Basic Research Project of the ‘Eleventh Five-Year Plan’ of China (C10020060355). Portions of the research were conducted at the CIPMAS AR Laboratory at the National University of Singapore.
