In Proceedings of the 4th IEEE International Conference on Computer Vision Systems (ICVS 2006), January 5-7, 2006, New York, NY, USA.

Automatic Aerial Image Registration Without Correspondence

Yingen Xiong and Francis Quek
Virginia Polytechnic Institute and State University
Center for Human Computer Interaction
621 McBryde Hall, MC 0106, Blacksburg, VA 24061, USA
[email protected]

Abstract

This paper presents an approach for registering aerial images taken at different times, viewpoints, or heights. Unlike conventional image registration algorithms, our approach does not need image matching or correspondence. We extract a number of corner features as the basis for registration and create image patches centered on the corner points in both the reference and observed images. In order for corresponding patches to cover the same scene, we use a circle whose radius can be changed as the shape of the image patches; in this way, the patches can handle rotation and scaling occurring at the same time between the reference and observed images. From the orientation differences of patches between the two images, we create an angle histogram with a voting procedure. The rotation angle between the two images is determined by seeking the orientation difference that corresponds to the maximum peak in the histogram. Once we have the rotation angle, we seek back for the two corresponding patches whose orientation difference equals the rotation angle; the ratio of the radii of these two patches is the value of the scaling. The proposed approach can handle large rotation and scaling between the reference and observed images. It has been applied to real aerial images and the results are very satisfying.

1. Introduction

Image registration is required when we match two or more images of the same scene taken at different times, from different sensors, or from different viewpoints. Registration is very important for aerial images/videos and remote sensing. In a registration process we have two or more images: one is the reference image and the others are observed images. Image registration geometrically aligns these images. A comprehensive survey of conventional image registration approaches developed before 1992 is available in [2]. The conventional alignment techniques are liable to fail because of the inherent differences between the two imageries of interest, since many corresponding pixels are often dissimilar. Later, more and more new approaches appeared; semi-automated and automated approaches are of particular interest. A review of more recent approaches was published in 2003 [25].

Image registration methods can be classified into two categories: feature-based and area-based. Feature-based methods [10, 11, 18, 16, 17, 20, 23] use common features, such as region features detected by segmentation methods, line features detected by standard edge detectors, and point features such as line intersections, corners, and centroids of closed-boundary regions, to perform accurate registration. Automatic selection of features is always preferable. Since most of the proposed features do not depend on gray-level characteristics, feature-based methods have been shown to be more suitable for multisensor image registration [19, 15, 5]. Feature-based approaches are efficient, but work well only in cases where the feature information is well preserved in both the reference and observed images.

Area-based methods [3, 14, 4, 1, 12, 13, 22] usually adopt a window of points to determine a matched location using a correlation technique; the most commonly used measures are normalized cross-correlation and the sum of squared differences (SSD). Area-based methods directly match image intensities, without any structural analysis, and are therefore sensitive to intensity changes. In some situations area-based methods work well and are more robust than feature-based methods. However, if the rotation between two images is large, the value of the cross-correlation is greatly affected and the correspondence between the images is difficult to obtain. To handle this problem, methods to estimate the rotation parameter in advance need to be developed. Zheng and Chellappa [24] proposed a method to determine the rotation parameter in which a Lambertian model is used to model an image, and the rotation angle is obtained by taking the difference between the illuminant directions. This approach works well in most cases, but for a scene including many buildings and objects it cannot handle false matches. Methods based on the generalized Hough transform have also been proposed [6, 21, 7] to solve this problem. These methods map a feature space into a parameter space by allowing each feature pair to vote for a subspace of the parameter space; clusters of votes in the parameter space are used to estimate the rotation parameter values. These methods tend to produce a large number of false positives, and the computations are very expensive. Shekhar et al. [19] proposed an approach to solve these problems. Their approach is similar in spirit to generalized Hough transform style methods, but employs a different search strategy to eliminate the problems associated with them: a feature consensus mechanism is used to estimate the values of the transformation parameters, including rotation, translation, and scaling. The generalized Hough transform based methods work well if the line and point features are well preserved in both the reference and observed images. Hsieh et al. [9] proposed an edge-based approach to estimate the rotation parameter before the correspondence process. They use a wavelet transform to detect feature points; each selected feature point is an edge point whose edge response is the maximum within a neighborhood. The orientations of the feature points are estimated using the edge directions. From the orientation differences of the feature points in the reference and observed images, an angle histogram is created to estimate the orientation difference between two partially overlapping images. This approach can handle a large orientation difference between the two images, but it can only tolerate roughly 10% scaling variation.

In this paper, we present a new approach to obtain the rotation and scaling parameters between reference and observed images. We are concerned with aerial images; however, the presented approach is also suitable for other types of images. The approach is based on the following assumptions. First, since the distance between the camera on an aircraft and the target object on the ground is very large, it is reasonable to assume that the images are taken by cameras whose optical axes are parallel. Second, the variations of the intensity characteristics between images are assumed to be small. In our approach, we create a number of image patches whose positions are determined by corner points detected by the Harris corner detection algorithm [8] in the observed and reference images. An angle histogram is computed from the orientation differences of patches between these two images with a voting procedure, and the rotation and scaling parameters are determined from the angle histogram. This approach can handle registration problems in which the rotation and scaling between the reference and observed images are large. We describe the approach in detail in the following sections.

2. Summary of Our Approach

As shown in Figure 1, we detect corner features with the Harris corner detector to obtain corner points in the reference and observed images. Image patches are created using these corner points as positions. We use a circle as the shape of the image patches to deal with rotation, and by changing the size of the image patches we can handle scaling. We create a patch set for the reference and observed images separately. The orientations of the image patches are computed with an eigenvector approach. From the orientation differences of patches between the reference and observed images, we create an angle histogram by a voting procedure; the orientation difference corresponding to the maximum peak of the histogram is the rotation angle between the reference and observed images. To deal with scaling, we change the size of the image patches by changing the radius, so that the corresponding patches in the reference and observed images cover the same scene. Different sizes of image patches are used to create different angle histograms, and the scaling between the two images is determined by the angle histogram with the highest maximum peak. The following sections describe our approach in detail: first the method for computing image patch orientation with eigenvectors, then the approach for creating the angle histogram and obtaining the rotation angle and the value of the scaling, and finally our experimental results.

Figure 1. Automatic Image Registration Approach (flowchart: Harris corner detection on the reference image I_f and the observed image I_t; creation of image patch sets with the corner points as patch positions; computation of the orientation sets; orientation differences and the voting process; selection of the angle histogram with the highest maximum peak; rotation angle and scaling)

3. Automatic Registration Approach

3.1 The Orientation of an Image Patch

For a given patch $p(i,j)$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$, the covariance matrix is defined as

$$\mathrm{COV}_p = E\{(X - m_x)(X - m_x)^T\} \qquad (1)$$

where $X = [i \;\; j]^T$ is the position of a pixel, and $m_x = [m_{x_i} \;\; m_{x_j}]^T$ is the centroid of the image patch $p(i,j)$, its first-order moment:

$$m_{x_i} = \frac{\sum_{i=1}^{n}\sum_{j=1}^{m} i\,p(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)}, \qquad m_{x_j} = \frac{\sum_{i=1}^{n}\sum_{j=1}^{m} j\,p(i,j)}{\sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)} \qquad (2)$$

From equation 1, we have

$$\mathrm{COV}_p = \frac{1}{N^2 - 1}\begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix} \qquad (3)$$

where

$$\begin{cases} C_{11} = \sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)(i - m_{x_i})^2 \\ C_{12} = \sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)(i - m_{x_i})(j - m_{x_j}) \\ C_{21} = \sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)(j - m_{x_j})(i - m_{x_i}) \\ C_{22} = \sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)(j - m_{x_j})^2 \end{cases} \qquad (4)$$

The eigenvalues can be found by solving

$$|\mathrm{COV}_p - \lambda I| = 0 \qquad (5)$$

Equation 5 gives two eigenvalues. Suppose $\lambda_1$ is the larger eigenvalue and $\lambda_2$ the smaller. The normalized eigenvectors $V_1$ and $V_2$ that correspond to the eigenvalues $\lambda_1$ and $\lambda_2$ are of course orthogonal, and the eigenvalues satisfy the relations

$$V_1^T\,\mathrm{COV}_p\,V_1 = \lambda_1, \qquad V_2^T\,\mathrm{COV}_p\,V_2 = \lambda_2 \qquad (6)$$

The direction of eigenvector $V_1$ is defined as the orientation of the image patch $p(i,j)$:

$$\tan 2\varphi_1 = \frac{2C_{12}}{C_{11} - C_{22}} = \frac{2\sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)(i - m_{x_i})(j - m_{x_j})}{\sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)(i - m_{x_i})^2 - \sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)(j - m_{x_j})^2} \qquad (7)$$

$$\varphi_1 = \arctan\frac{\lambda_1 - \sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)(i - m_{x_i})^2}{\sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)(i - m_{x_i})(j - m_{x_j})}, \qquad \varphi_2 = \arctan\frac{\sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)(i - m_{x_i})(j - m_{x_j})}{\lambda_2 - \sum_{i=1}^{n}\sum_{j=1}^{m} p(i,j)(j - m_{x_j})^2} \qquad (8)$$

where $\varphi_1$ and $\varphi_2$ are the directions of eigenvectors $V_1$ and $V_2$ respectively. Figure 2 shows the orientation of an image patch. By applying this approach, we can compute orientations for all image patches in the reference and observed images.

Figure 2. Image Patch Orientation
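As an illustration, the orientation computation can be written in a few lines of NumPy. This is a minimal sketch of equations (1)-(8) as stated above, not the paper's own code; the normalization factor of equation (3) is omitted since it does not affect the eigenvector directions, and the function and variable names are our own.

```python
import numpy as np

def patch_orientation(patch):
    """Orientation of an image patch p(i, j): the direction of the
    eigenvector of the intensity-weighted coordinate covariance
    matrix (equations 2-8) that has the largest eigenvalue."""
    p = patch.astype(float)
    n, m = p.shape
    i, j = np.mgrid[1:n + 1, 1:m + 1]               # pixel coordinates
    w = p.sum()
    mi, mj = (i * p).sum() / w, (j * p).sum() / w   # centroid, equation (2)
    c11 = (p * (i - mi) ** 2).sum()                 # moments, equation (4)
    c12 = (p * (i - mi) * (j - mj)).sum()
    c22 = (p * (j - mj) ** 2).sum()
    cov = np.array([[c11, c12], [c12, c22]])
    vals, vecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    v1 = vecs[:, -1]                    # eigenvector of the largest eigenvalue
    return np.degrees(np.arctan2(v1[1], v1[0]))     # patch orientation
```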

3.2 Angle Histogram for Image Rotation and Scaling

3.2.1 The Scale between Observed and Reference Images is One

For the observed image, we can create a patch set $P_t = \{p_t^j, j = 1, 2, \ldots, n_t\}$ and obtain an orientation set $\varphi_t = \{\varphi_t^j, j = 1, 2, \ldots, n_t\}$. Similarly, for the reference image, we can create a patch set $P_f = \{p_f^i, i = 1, 2, \ldots, n_f\}$ and an orientation set $\varphi_f = \{\varphi_f^i, i = 1, 2, \ldots, n_f\}$. Suppose that the rotation angle between the observed and reference images is $\varphi$ and both images cover the same scene. For an image patch $p_t^j$ in the observed image, we compute orientation differences with all patches $P_f = \{p_f^i, i = 1, 2, \ldots, n_f\}$ in the reference image:

$$\Delta\varphi_l = |\varphi_t^j - \varphi_f^i|, \quad i = 1, 2, \ldots, n_f \qquad (9)$$

Since both images cover the same scene, there is a patch $p_f^m$ in the reference image corresponding to the patch $p_t^j$, so the orientation difference between these two patches is equal to the rotation angle between the two images, i.e.

$$\Delta\varphi_m = |\varphi_t^j - \varphi_f^m| = |\varphi| \qquad (10)$$

For other patches in the reference image, the orientation differences will take different values. If we do the same computation for all patches $p_t^j, j = 1, 2, \ldots, n_t$ in the observed image, we obtain a set of orientation differences $\Delta\varphi = \{\Delta\varphi_l, l = 1, 2, \ldots, n_t \times n_f\}$ and find $n_t$ corresponding patches in the reference image. For these $n_t$ pairs of corresponding patches, the orientation difference equals the rotation angle; the other $n_t \times n_f - n_t$ orientation differences will differ from one another. If we create a histogram of the orientation differences, the bin containing the orientation difference between corresponding patches will have the highest count. Because of computation error and the size of the histogram bins, the count will not be exactly equal to $n_t$.

On this basis, we can obtain the rotation angle between the observed and reference images by a voting procedure. First, we compute the orientation differences of all image patches between the observed and reference images, obtaining $\Delta\varphi_l$, $l = 1, 2, \ldots, n_t \times n_f$. Second, we create a histogram of these $\Delta\varphi_l$. Finally, we choose the $\Delta\varphi$ corresponding to the maximum peak of the histogram as the rotation angle between the observed and reference images, as shown in Figure 3.

Figure 3. Angle Histogram
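The voting procedure amounts to a histogram over all pairwise orientation differences. Below is a minimal sketch assuming patch orientations are already available; the bin width and the toy data are illustrative choices, and the sketch ignores, for brevity, that eigenvector orientations are only defined modulo 180 degrees.

```python
import numpy as np

def rotation_from_orientations(phi_f, phi_t, bin_deg=1.0):
    """Vote over all pairwise orientation differences (equation 9)
    and return the bin center of the histogram's maximum peak as
    the rotation estimate, plus the peak's vote count."""
    diffs = (phi_t[:, None] - phi_f[None, :]).ravel()   # n_t x n_f votes
    # Bin edges chosen so bins are centered on multiples of bin_deg.
    edges = np.arange(-180.0 - bin_deg / 2, 180.0 + bin_deg, bin_deg)
    counts, _ = np.histogram(diffs, bins=edges)
    k = int(np.argmax(counts))
    return 0.5 * (edges[k] + edges[k + 1]), int(counts[k])

# Toy check: 60 patch orientations rotated by -5 degrees plus noise.
rng = np.random.default_rng(0)
phi_f = rng.uniform(-90.0, 90.0, size=60)
phi_t = phi_f - 5.0 + rng.normal(0.0, 0.2, size=60)
angle, votes = rotation_from_orientations(phi_f, phi_t)
print(angle, votes)   # peak near -5 degrees with roughly 60 votes
```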

3.2.2 The Scale between Observed and Reference Images is not One

In this situation, we can obtain the value of the scaling through a series of voting processes. By changing the size of the image patches and computing the angle histograms, we obtain a series of angle histograms and choose the one with the highest maximum peak. Suppose the image patch sizes in the observed image are $A_{t_i}$, $i = 1, 2, \ldots, n_1$, and the sizes in the reference image are $A_{f_j}$, $j = 1, 2, \ldots, n_2$. For each pair of image patch sizes $A_{t_i}$ and $A_{f_j}$, we obtain an angle histogram $H^k$ and the corresponding orientation difference $\Delta\varphi^k$, $k = 1, 2, \ldots, n_1 \times n_2$. We choose the one with the highest maximum peak $H_h$:

$$H_h = \max_k \{H^k, k = 1, 2, \ldots, n_1 \times n_2\} \qquad (11)$$

Let $A_t$ and $A_f$ denote the patch sizes in the observed and reference images corresponding to the histogram $H_h$ with the highest maximum peak. The value of the scaling between the observed and reference images can then be computed by

$$s = A_t / A_f \qquad (12)$$

At the same time, the orientation difference $\Delta\varphi$ corresponding to the histogram $H_h$ is the rotation angle $\varphi$ between the observed and reference images.
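The scale search can then be expressed as a sweep over radius pairs, reusing rotation_from_orientations from the sketch above. Here extract_orientations is a hypothetical helper, not defined in the paper, that cuts a circular patch of the given radius around each detected corner and returns the patch orientations (e.g., via patch_orientation from Section 3.1).

```python
from itertools import product

def estimate_rotation_and_scale(img_f, corners_f, img_t, corners_t,
                                radii=range(35, 61, 5)):
    """Sweep (reference, observed) patch-radius pairs, build an angle
    histogram for each pair, and keep the pair whose histogram has
    the highest maximum peak (equations 11 and 12)."""
    best_votes, best_angle, best_scale = -1, None, None
    for r_f, r_t in product(radii, radii):
        # extract_orientations is a hypothetical helper (see lead-in).
        phi_f = extract_orientations(img_f, corners_f, r_f)
        phi_t = extract_orientations(img_t, corners_t, r_t)
        angle, votes = rotation_from_orientations(phi_f, phi_t)
        if votes > best_votes:                 # equation (11)
            best_votes, best_angle = votes, angle
            best_scale = r_t / r_f             # equation (12): s = R_t / R_f
    return best_angle, best_scale
```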

4. Experiments and Result Analysis

4.1 Image Patch Shapes and Positions

In this approach, the choice of image patches is very important. Although we do not need to know which patch in the observed image corresponds to which patch in the reference image, we do need the corresponding patches to cover the same scene, so the positions and shapes of the image patches matter. In order for the corresponding patches to cover the same scene, we need to find features in the images to determine the patch positions. For aerial images, corners are good features for this purpose, and we apply the Harris corner detection algorithm [8] to extract corner points. As shown in Figure 4 (a) and (b), although the observed image is rotated relative to the reference image, the corner features, shown as plus signs, do not change. If we choose a proper shape for the image patches, they can cover the same scene. As mentioned above, the patch shape is crucial for this registration approach: because the observed image is rotated relative to the reference image, a rectangular shape cannot be used, so we choose a circle as the shape of the image patches. As shown in Figure 5, even though there is rotation between the observed and reference images, the image patches cover the same scene if the patches are at the right positions and of proper sizes.
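One possible realization of the corner-plus-circular-patch construction is sketched below using OpenCV's Harris-based corner detector. The parameter values are our own illustrative choices, not the settings reported in Section 4.2.1; the masked patches can be fed to patch_orientation from Section 3.1.

```python
import cv2
import numpy as np

def corner_patches(gray, radius, max_corners=200):
    """Detect Harris corners and cut a circular patch of the given
    radius around each corner; corners too close to the border are
    skipped so every patch is a full disc."""
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=max_corners, qualityLevel=0.01,
        minDistance=10, useHarrisDetector=True, k=0.04)
    r = radius
    # Disc mask: the circular support is what makes the patch
    # contents insensitive to image rotation.
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disc = (xx ** 2 + yy ** 2 <= r ** 2).astype(gray.dtype)
    patches = []
    for c in corners.reshape(-1, 2):
        x, y = int(round(c[0])), int(round(c[1]))
        if r <= x < gray.shape[1] - r and r <= y < gray.shape[0] - r:
            patches.append(gray[y - r:y + r + 1, x - r:x + r + 1] * disc)
    return patches
```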

Figure 4. Image Patch Positions: (a) Observed Image, (b) Reference Image

Figure 5. Image Patch Shapes: (a) Observed Image Patch, (b) Reference Image Patch

4.2 Image Registration and Result Analysis

We use several different cases to test our approach: the observed image has only rotation relative to the reference image; the observed image has only scaling; the observed image has both rotation and scaling; and the images are taken at different times and have both rotation and scaling. Note that the detected corners are shown as plus signs in the images.

4.2.1 Rotation Only

Figure 6. Results in the Case of Rotation Only: (a) Reference Image, (b) Observed Image, (c) Results (Rotate Angle = -5.0107°, Scale = 1)

Figure 6 (a) shows an aerial image. We rotate it by 5° clockwise (the rotation angle is negative when its direction is clockwise); the resulting image is shown in Figure 6 (b). So there is only rotation (-5°) between the observed and reference images shown in Figure 6 (b) and (a). First we detect corner points in both the reference and observed images. The parameters we use in corner detection are σ = 2, the standard deviation of the smoothing Gaussian; T = 500, the threshold; and R = 3, the radius of the region considered in non-maximal suppression. The corner detection results are shown in Figure 6 (a) and (b). From the results we can see that the corner feature points are unchanged relative to the image even though the observed image is rotated by -5°; this is why we use corner points as the positions of the image patches. Second, we define image patches in the reference and observed images using these corner points as the centers of the patches. As mentioned above, we use a circle as the shape of the image patches, and because we know there is only rotation between the reference and observed images, we use the same patch size (the same radius) for both images. Third, we compute orientations for all image patches in both images with equation 7, obtaining one set of patch orientation angles for the reference image and another for the observed image. Finally, we perform the voting process: for each patch in the observed image we compute orientation differences with all patches in the reference image using equation 9, obtaining a set of orientation differences. We use these orientation differences to vote and create the angle histogram shown in Figure 6 (c), and we choose the orientation difference corresponding to the maximum peak of the histogram as the rotation angle between the observed and reference images. As shown in Figure 6 (c), the result is -5.0107°; the relative error is 0.214%, which is a very good result.

Before we compute orientations for all image patches and vote, we need to decide the size of the image patches. It depends on many factors, such as image detail, texture, and features. If the patch size is too small, it may not cover enough features to yield a stable eigen direction; if it is too large, it may cover too much variation and distortion of the image, making it difficult for corresponding patches to cover the same scene. In our approach, we try different sizes: for each patch size we create an angle histogram, obtaining a set of histograms, and we choose the one with the highest maximum peak. In this case, we vary the radius of the patches from 19 to 60 pixels and finally choose 38 pixels as the radius value, which gives the highest maximum peak in the angle histogram.

4.2.2 Scale Only

As shown in Figure 7, we scale the reference image (a) by 1.5 times to obtain the observed image (b), so in this case there is only a scale change between the observed and reference images and no rotation. The corner detection results are also shown in Figure 7 (a) and (b), and we use these corner points as the positions of the image patches. In order for the corresponding patches in the reference and observed images to cover the same scene, we need to use different patch sizes for the two images. Because the scale between the images is unknown, we do not know the exact patch size to use, so we give a size range for each image: a radius range $[R_{fa}, R_{fb}]$ for the reference image patches and a radius range $[R_{ta}, R_{tb}]$ for the observed image patches. For a given radius pair $(R_f, R_t)$, with $R_f \in [R_{fa}, R_{fb}]$ and $R_t \in [R_{ta}, R_{tb}]$, we compute the orientations of all image patches in the reference and observed images and obtain an angle histogram. Over all radius pairs, we obtain a set of angle histograms and the set of their maximum peaks. Selecting the highest one tells us the patch radius $R_{fh}$ used in the reference image and the patch radius $R_{th}$ used in the observed image; the value of the scaling between the two images is then $R_{th}/R_{fh}$. In this case, we set the radius range to [35, 60] for both images. We obtain $R_{fh} = 36$ pixels and $R_{th} = 53$ pixels, corresponding to the angle histogram with the highest maximum peak shown in Figure 7 (c). The value of the scaling is $R_{th}/R_{fh} = 1.4722$; the relative error is 1.85%.

4.2.3 Rotation and Scale

Figure 7. The Detected Results in the Case of Scale Only: (a) Reference Image, (b) Observed Image, (c) Detected Results (Rotate Angle = -0.0080779°, Scale = 1.4722)

In aerial image registration there are usually both rotation and scaling between the reference and observed images, and we need to obtain both parameters during the registration process. Here we combine the above two cases to test our approach. We rotate the reference image shown in Figure 8 (a) by 5° clockwise (a rotation angle of -5°) and scale it by 1.5 times to obtain the observed image shown in Figure 8 (b). As in the above two cases, we detect corner points as the positions of the image patches for both the reference and observed images, shown in Figure 8 (a) and (b). Since the scale is not one, we use different patch sizes for the reference and observed images; in this case we set the patch radius range to [35, 60] for both images. For each pair of patch radii on the reference and observed images, we compute orientations for all image patches and create an angle histogram. Figure 8 (c) shows the histogram with the highest maximum peak, from which we obtain the registration results: the rotation angle is -4.8906° (relative error 2.18%) and the value of the scaling is 1.5143 (relative error 0.95%).

Figure 8. Detected Results in the Case of Rotation and Scale: (a) Reference Image, (b) Observed Image, (c) Detected Results (Rotate Angle = -4.8906°, Scale = 1.5143)

4.2.4 Images Taken at Different Times

Township aerial image registration. Figure 9 (a) and (b) shows township images taken at different times at almost the same height. Although some details have changed between the two images, the main features are almost the same. We set image (a) as the reference image and image (b) as the observed image, and register them to obtain the rotation angle and the value of the scaling between them. The corner detection results are shown in Figure 9 (a) and (b). We set the radius range of the patches to [15, 30] for both the reference and observed images. Through the voting process, we obtain the results: the rotation angle is 5.9478° and the value of the scaling is 1.0833.

Figure 9. Detected Results for Township Images Taken at Different Times: (a) Image Taken Before, (b) Image Taken Later, (c) Detected Results (Rotate Angle = 5.9478°, Scale = 1.0833)

Chip image registration. Figure 10 shows two images of a chip taken at different times: image (a) was taken in 1999 and image (b) in 2001. We set image (a) as the reference image and image (b) as the observed image, and register these two images to obtain the rotation angle and the value of the scaling. As above, we detect corner points as the positions of the image patches for both images, shown in Figure 10 (a) and (b). We set the patch radius range to [15, 30] for both images, compute the orientations of the patches for both images, and obtain a set of angle histograms. We choose the one with the highest maximum peak, shown in Figure 10 (c); this angle histogram gives the registration results: the rotation angle is 0.14856° and the value of the scaling is 0.94118.

Figure 10. Detected Results for Chip Images Taken at Different Times: (a) Image Taken in 1999, (b) Image Taken in 2001, (c) Detected Results (Rotate Angle = 0.14856°, Scale = 0.94118)


5. Conclusion


Image registration is an important technique for a great variety of applications. In this paper, we proposed an approach to estimate the rotation and scaling parameters between two images; with these parameters, we can register the two images automatically. Unlike conventional methods, the proposed approach does not need image matching or correspondence. We create a number of image patches using corner points as their centers, and the orientation of each image patch is computed using an eigenvector method. From the orientation differences of image patches between the reference and observed images, we create an angle histogram with a voting procedure, and the rotation parameter is determined by seeking the maximum peak of the angle histogram. In order to determine the value of the scaling, we change the size of the image patches so that the corresponding patches in the reference and observed images cover the same scene. After the rotation angle is determined, we can seek back for the corresponding patches whose orientation difference is the same as the rotation angle; the value of the scaling is the ratio of the radii of these patches between the reference and observed images. The proposed approach obtains the rotation and scaling parameters automatically without a correspondence process, and it can handle large orientation differences and large scaling between the reference and observed images.

6. Acknowledgments

This research has been supported by the MTL Wright-Patterson AFB program, Grant No. 03-577-13, Detecting and Extracting Image Similarities, Differences, and Target Patterns.

References

[1] R. Althof, M. Wind, and J. Dobbins. A rapid and automatic image registration algorithm with subpixel accuracy. IEEE Transactions on Medical Imaging, 16:308-316, 1997.
[2] L. Brown. A survey of image registration techniques. ACM Computing Surveys, 24(4):325-376, 1992.
[3] R. Cannata, M. Shah, S. Blask, and J. V. Workum. Autonomous video registration using sensor model parameter adjustments. In Proc. 29th Applied Imagery Pattern Recognition Workshop, pages 215-222, Washington, D.C., October 2000.
[4] L. Fonseca and B. Manjunath. Registration techniques for multisensor remotely sensed imagery. Photogrammetric Engineering and Remote Sensing, 62:1049-1056, 1996.
[5] J. Garcia, J. Besada, F. Jimenez, and J. Casar. A block processing method for on-line multisensor registration with its application to airport surface data fusion. In The Record of the IEEE 2000 International Radar Conference, pages 643-648, May 2000.
[6] A. Goshtasby and G. Stockman. Point pattern matching using convex hull edges. IEEE Trans. Systems, Man, and Cybernetics, SMC-15:631-637, 1985.
[7] W. Grimson and D. Huttenlocher. On the sensitivity of the Hough transform for object recognition. IEEE Trans. Pattern Analysis and Machine Intelligence, 12:255-274, 1990.
[8] C. J. Harris and M. Stephens. A combined corner and edge detector. In Proc. 4th Alvey Vision Conf., pages 147-151, Manchester, 1988.
[9] J.-W. Hsieh, H.-Y. M. Liao, K.-C. Fan, M.-T. Ko, and Y.-P. Hung. Image registration using a new edge-based approach. Computer Vision and Image Understanding, 67(2):112-130, 1997.
[10] J. Flusser and T. Suk. A moment-based approach to registration of images with affine geometric distortion. IEEE Transactions on Geoscience and Remote Sensing, 32:382-387, 1994.
[11] J. Matas, S. Obdrzalek, and O. Chum. Local affine frames for wide-baseline stereo. In 16th International Conference on Pattern Recognition (ICPR 2002), volume 4, pages 363-366, 2002.
[12] S. Kaneko, I. Murase, and S. Igarashi. Robust image registration by increment sign correlation. Pattern Recognition, 35:2223-2234, 2002.
[13] S. Kaneko, Y. Satoh, and S. Igarashi. Using selective correlation coefficient for robust image registration. Pattern Recognition, 36:1165-1173, 2003.
[14] R. Kumar, H. Sawhney, J. Asmuth, A. Pope, and S. Hsu. Registration of video to georeferenced imagery. In Fourteenth International Conference on Pattern Recognition, volume 2, pages 1393-1400, 1998.
[15] H. Li, B. S. Manjunath, and S. K. Mitra. A contour-based approach to multisensor image registration. IEEE Trans. Image Processing, 4(3):320-334, 1995.
[16] B. Manjunath, C. Shekhar, and R. Chellappa. A new approach to image feature detection with applications. Pattern Recognition, 29:627-640, 1996.
[17] S. Meshoul and M. Batouche. A fully automatic method for feature-based image registration. In Proceedings of the 2002 IEEE International Conference on Systems, Man and Cybernetics, volume 4, page 57, October 2002.
[18] S. Moss and E. Hancock. Multiple line-template matching with the EM algorithm. Pattern Recognition Letters, 18:1283-1292, 1997.
[19] C. Shekhar, V. Govindu, and R. Chellappa. Multisensor image registration by feature consensus. Pattern Recognition, 32(1):39-52, 1999.
[20] Y. Shi and F. Qi. Feature-based deformable registration of brain images. In Proceedings of the 17th IEEE Symposium on Computer-Based Medical Systems, pages 553-557, June 2004.
[21] G. Stockman, S. Kopstein, and S. Bennet. Matching images to models for registration and object detection via clustering. IEEE Trans. Pattern Analysis and Machine Intelligence, 4:229-241, 1982.
[22] T. Chanwimaluang and G. Fan. Hybrid retinal image registration. IEEE Transactions on Information Technology in Biomedicine, 2005 (early access).
[23] S. Thunuguntla and B. Gunturk. Feature-based image registration in log-polar domain. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), volume 2, pages 853-856, March 2005.
[24] Q. Zheng and R. Chellappa. A computational vision approach to image registration. IEEE Transactions on Image Processing, 2(3):311-326, 1993.
[25] B. Zitova and J. Flusser. Image registration methods: a survey. Image and Vision Computing, 21:977-1000, 2003.