Figure 2: Upper left shows a test image (the blue channel). The figures on the right show 2D histograms of color space: top is blue vs green, bottom is red vs green.
Correcting Chromatic Aberrations Using Image Warping

Terrance E. Boult
Center for Research in Intelligent Systems, Department of Computer Science,
Columbia University, NYC, NY 10027

and

George Wolberg
Department of Computer Science, City College, CUNY, NYC, NY

Abstract

The problem of chromatic aberration arises because each wavelength of light is refracted differently by the elements of a lens. Unfortunately, this means that the image will be blurred and distorted. In color imaging these distortions cause measurable differences between the images. Recent research has proposed an approach for dealing with these aberrations by actively controlling the optics of the imaging system. This paper addresses the same problem, but instead of adapting the optics, we adapt the geometry of the (already obtained) images; we do chromatic aberration correction by image warping. We briefly discuss the image restoration/reconstruction techniques used, since they are non-standard. This is followed by a discussion of the techniques used to define the chromatic-aberration-correcting warp. The technique is demonstrated and analyzed on two test cases and is directly compared to the active optics approach.

1 Introduction

The first stage of an imaging system is the lens, which refracts the incoming light to focus it on the image plane. It has long been known that the refraction of light depends upon the wavelength: a ray refracted at the lens surface becomes a small spectral fan. According to geometric optics, for a simple lens, if a point object is placed in front of the lens, there would be a plane at some focal distance where the image of that point would be in focus; see Fig. 1. Unfortunately, because refraction is wavelength dependent, this focal distance is also wavelength

dependent. The difference, due to wavelength, between the ideally focused image and the actual image is called chromatic aberration. Chromatic aberration is generally broken up into two categories: axial aberrations and lateral aberrations [Slama et al., 80]. In axial aberrations, the focal plane for one wavelength, say red, will be displaced along the optic axis from the focal plane for another wavelength, say blue. In lateral aberrations, the image of a feature point in one color will be displaced laterally within the image plane with respect to the same point in another color. There are two parts to this lateral displacement. In general the largest component is a difference in the magnification, which causes a generally radial translation of image features. Secondly, there may be differences in the optic axis for the different wavelengths. The fact that some parts of a color image are more blurred than others, or that there are wavelength-dependent distortions, may not seem too important at first. If our algorithms were truly robust this might be true, but increased accuracy is always desirable, especially if it does not require expensive new equipment. For example, consider a technique which requires clustering in color space. The effects of chromatic aberration can be very important, especially if the clustering makes assumptions about the shape/distribution of the cluster. Examine the color space depicted in figure 2 and imagine trying to find a "T", which is a process required in the color segmentation/highlight detection algorithm in [Klinker, 88]. Even if a color vision technique does not use the actual values of the RGB triples, but rather just uses

[Figure 1 diagram: a point object on the optic axis is imaged by a thin lens onto the image plane; separate focus rays are drawn for blue, green, and red. Annotations: lateral chromatic aberration causes shifting and magnification; axial chromatic aberration causes blurring, with the size of the "blur circle" determined by the distance between the focus rays where they intersect the image plane; the R, G, and B arrows are shown in their planes of best focus, and the imaged versions would be about the same size, just blurred.]
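The wavelength dependence sketched in Figure 1 follows directly from the thin-lens (lensmaker's) equation: since the refractive index n of glass is higher for blue light than for red, the focal length is shorter for blue. A minimal numeric sketch; the index and radius values below are illustrative figures for a common crown glass, not data from the paper:

```python
# Thin-lens focal length from the lensmaker's equation:
#   1/f = (n - 1) * (1/R1 - 1/R2)
# Illustrative dispersion values (roughly BK7 crown glass):
#   n ~ 1.522 at 486 nm (blue), n ~ 1.514 at 656 nm (red).

def focal_length(n, r1, r2):
    """Focal length (same units as r1, r2) of a thin lens with
    refractive index n and surface radii r1, r2."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

r1, r2 = 50.0, -50.0            # symmetric biconvex lens, radii in mm
f_blue = focal_length(1.522, r1, r2)
f_red = focal_length(1.514, r1, r2)

print(f"f_blue = {f_blue:.2f} mm, f_red = {f_red:.2f} mm")
print(f"axial shift = {f_red - f_blue:.2f} mm")  # red focuses longer
```

Even this small fraction-of-a-millimeter axial shift is enough to put one channel measurably out of focus when another is sharp, which is the axial aberration the figure depicts.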

Figure 1: The geometric-optics interpretation of chromatic aberration caused by a thin lens. Longer wavelengths focus farther, and are magnified more, than shorter wavelengths.

edges detected separately in each color band, the misregistration ... 1000 pixels. On the left of Fig. 2, we see the 2D histograms for the red components, this time restricted to a window in the lower left of the image, [15 65] x [15 65]. This represents a region which is, approximately, maximally distant from the optic axis and hence where one would expect to find maximal artifacts. The plots are similar in nature except that they use a log scale with 100 items being black.

We did not have radiometric calibration information; thus we adopted the following approach. Because the only information is near the edges, we restricted our attention to this region. (The edge mask was determined as an expansion of the region where the gradient magnitude was > 4 for the CCTV example and > 2 for the Photometrics examples, obtained by the KBVision sequence FastGauss(2), GradIm, NormIm, ThrshIm followed by a MorphOps with "d8 5e8".) The rows in tables 1 and 3 marked "outside mask" computed their values outside this mask region to show the size of the error.

The remaining measures we will report are

Figure 4: 2D histograms showing dispersion in color space using different windows. This shows the results on the box from the lower left of the image, [15 65] x [15 65]. In this case the active optics (left) does not do as well, probably because of geometric lens distortions affecting the image. (The open nature of the plot is indicative of a magnification error.) The image warping approach, lower right, does reasonably well. It too has more problems in the corners, possibly because of inaccuracies in the warping, plus focus effects. These observations are also supported by the quantitative results in table 1.

Algorithm       Region                Gray-line  BW-RGB  BW-R   BW-G   BW-B
Uncorrected     outside mask           237.234   0.096   0.073  0.069  0.076
Uncorrected     [15 465] x [15 497]    568.381   0.363   0.300  0.261  0.260
Active Optics   [15 465] x [15 497]    378.378   0.353   0.284  0.257  0.260
Image Warping   [15 465] x [15 497]    411.030   0.364   0.301  0.261  0.260
Uncorrected     [15 65]  x [15 65]     616.824   1.049   0.861  0.761  0.756
Active Optics   [15 65]  x [15 65]     557.978   1.039   0.838  0.765  0.756
Image Warping   [15 65]  x [15 65]     449.579   1.053   0.869  0.760  0.756
Uncorrected     [50 400] x [270 300]   778.941   0.880   0.692  0.636  0.673
Active Optics   [50 400] x [270 300]   637.172   0.875   0.681  0.637  0.673
Image Warping   [50 400] x [270 300]   703.478   0.880   0.693  0.635  0.673
Uncorrected     [235 260] x [50 430]   554.214   0.831   0.692  0.594  0.592
Active Optics   [235 260] x [50 430]   363.741   0.800   0.641  0.585  0.592
Image Warping   [235 260] x [50 430]   472.019   0.833   0.696  0.595  0.592

Table 1: Errors for the CCTV examples. As can be seen, the active optics approach produces quantitatively better values for all examples except the one in the lower left part of the image. Note the differences in the blur-related measures on the vertical vs. horizontal windows in the images.

measures outside this region of interest. Given that we do not have radiometric calibration information, we use a simple heuristic to compute approximate information. We know that the calibration target was to be black and white. We computed a set of reference values for every 10 x 10 window in the input. These reference values are obtained by considering only pixels outside the mask described above, but within a window of 60 x 80 pixels for the CCTV (25 x 60 for the Photometrics) centered

around the current point. Within this window we average all those above 130 to get a "white" level and all those below 100 to get a "black" level. This was done separately for each color band. Using these reference values, we define five error measures. The first, which we call Gray-line error, is the average distance from an RGB triple to the line defined by the reference values. This error measure relates to the color shift of a pixel. The remaining measures were meant to be more sensitive to blur

Figure 5: 2D histograms showing dispersion in color space using only edges of one type. The upper plots show the performance of the two algorithms (active optics on the left) when applied to a vertical window with extents [50, 400] x [270, 300]. This window contains only horizontal edges. In this case the active optics does extremely well (it is best on or near the axis). As can be seen in the upper right, the image-warping technique has a sigmoid-shaped bias. This is caused by the difference in focus between the red and blue channels. The bottom row uses the horizontal box [235, 260] x [15, 430], which contains only vertical edges. Again, active optics is right on the money. The image-warping technique does not show the differential blur as in the vertical case, but this time shows some residual magnification error (mostly near the outer edges of the image). We are not sure about the exact cause of this directionally selective behavior, but have found a similar difference in the blur of uncorrected images.

within the image. Note that the Gray-line error could be made small by overly blurring the image, because everything would approach gray! The second quantitative error measure, which we call BW-RGB error, is defined as the mean pointwise distance from each RGB triple to the nearer of the reference triples. Since the underlying image was supposed to be black and white, this measure should be small. If the images are locally blurred this measure will grow. If there

is excessive blurring, the heuristic calibration processes will cause this measure to become too small. The remaining measures were meant to be sensitive to blur separately in each wavelength. The third measure, which we call BW-R error, is the distance between the R value of a pixel and the closer of the reference values for R. The measures BW-G error and BW-B error are defined similarly.

Figure 6: 2D histograms showing dispersion in color space using the big window and the red channel. Here we see the 4 different image warping results using different image reconstruction techniques. The upper left is the quadratic-box using cubic-convolution-based edge values with a value of A = -1. This results in a correction which is barely different from using linear interpolation to get the edges. The upper right shows the results using just linear interpolation for image reconstruction. While it still does well, it is not as tight a cluster as the quadratic-box restoration approach. In the lower right we see the results of applying the integrating resampler using cubic convolution with A = -1. While it may be a superior image reconstruction filter, it has significant problems in this application. We also tried cubic convolution as it was originally intended (i.e. doing point sampling reconstruction), and the results were even worse. On the lower left we show the result of another restoration filter. This one is also a quadratic spline, but the point spread function used to define it was a 4-piece cubic approximation to a Gaussian.

Under ideal imaging, each of the error measures would be zero, and for non-ideal imaging smaller error measures are better. Unfortunately, interpretation of these measures is complicated by the fact that they do not have intuitive units of measure, so it's not clear how important a difference of 10 units is in the Gray-line error. Additionally, because the images have noise and also because our approximated calibration data is not perfect, the error measures have an offset so that it is unlikely they would

ever attain the value zero. When we report the quantitative results, we will also report the error measures in the area outside the aforementioned edge mask. In these regions the image should have little chromatic aberration, and the residual error gives some indication of the base level of each error measure.
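The error measures above can be stated compactly in code. This is a sketch under the stated definitions, assuming the black and white reference triples have already been estimated per-window by the heuristic described earlier; the function names are our own:

```python
import numpy as np

def gray_line_error(rgb, black_ref, white_ref):
    """Mean distance from RGB triples to the line through the
    reference black and white triples (sensitive to color shift)."""
    rgb = np.asarray(rgb, float)
    b = np.asarray(black_ref, float)
    w = np.asarray(white_ref, float)
    d = w - b
    d = d / np.linalg.norm(d)               # unit direction of the gray line
    v = rgb - b                             # vectors from black ref to pixels
    along = (v @ d)[:, None] * d            # projection onto the line
    return float(np.mean(np.linalg.norm(v - along, axis=1)))

def bw_rgb_error(rgb, black_ref, white_ref):
    """Mean pointwise distance from each RGB triple to the nearer
    of the two reference triples (sensitive to blur)."""
    rgb = np.asarray(rgb, float)
    d_black = np.linalg.norm(rgb - np.asarray(black_ref, float), axis=1)
    d_white = np.linalg.norm(rgb - np.asarray(white_ref, float), axis=1)
    return float(np.mean(np.minimum(d_black, d_white)))

def bw_channel_error(channel, black_ref, white_ref):
    """Per-band version: distance from each value to the closer of the
    reference values for that band (the BW-R, BW-G, BW-B measures)."""
    c = np.asarray(channel, float)
    return float(np.mean(np.minimum(abs(c - black_ref), abs(c - white_ref))))
```

As the text notes, gray pixels lie exactly on the reference line, so heavy blurring drives the Gray-line error toward zero while inflating the BW-* measures; the two kinds of measure are complementary.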

3.2 Analysis of results

The analysis is mostly the data itself, presented in figures 3-8 and tables 1-3. The cap-

Figure 7: Here we see some examples using a Photometrics camera and a Fujinon lens. The figure shows 2D histograms of color space, with blue vs green on the left, and red vs blue on the right. The lens was supposed to have been corrected for chromatic aberration. Obviously it was not completely corrected. Note that these plots are on a different scale (black = 100) than the CCTV examples (black = 1000).

Figure 8: Here we see the corrected versions of the Photometrics example. Green is on top, red on the bottom. On the left is the active optics approach. On the right is the image warping approach. Qualitatively, we did comparably well; for a quantitative comparison see table 3. While the active optics approach reduced the RMS error for the red channel, it increased the size of the envelope in color space.

Algorithm                                                    Gray-line  BW-RGB  BW-R   BW-G   BW-B
Uncorrected                                                   568.381   0.363   0.300  0.261  0.260
Active Optics                                                 378.378   0.353   0.284  0.257  0.260
Quadratic-box restoration, edges from linear interpolation    411.030   0.364   0.301  0.261  0.260
Quadratic-box restoration, edges from cubic conv., A = -1     412.021   0.363   0.301  0.261  0.260
Quadratic-gauss restoration, edges from cubic conv., A = -1   411.744   0.364   0.301  0.261  0.260
Bi-linear interpolation                                       453.921   0.366   0.303  0.264  0.260
Cubic convolution, A = 0                                      581.295   0.367   0.305  0.266  0.260
Cubic convolution, A = -1                                     585.868   0.363   0.300  0.260  0.260

Table 2: Errors for different reconstruction algorithms applied to the CCTV example. There is a slight sharpening when using A = -1, but because the warp is rather small, the sharpening is not very significant. Note that the difference between the new reconstruction methods and linear interpolation is about the same as the difference between active optics and the new methods. Finally, cubic convolution seems worse than the uncorrected image, although the qualitative results looked like some improvement. We are still investigating this behavior.

Algorithm       Region                Gray-line  BW-RGB  BW-R   BW-G   BW-B
Uncorrected     outside mask           306.499   0.154   0.113  0.119  0.119
Uncorrected     [15 323] x [15 323]    694.093   0.391   0.302  0.288  0.301
Active Optics   [15 323] x [15 323]    456.365   0.392   0.301  0.291  0.301
Image Warping   [15 323] x [15 323]    562.715   0.399   0.304  0.305  0.301
Uncorrected     [15 65]  x [15 65]     805.603   0.969   0.731  0.670  0.791
Active Optics   [15 65]  x [15 65]     546.352   0.968   0.709  0.694  0.791
Image Warping   [15 65]  x [15 65]     567.845   1.027   0.774  0.775  0.791

Table 3: Errors for the Photometrics example.

tions in the figures are meant to be almost self-contained, and present most of the commentary. The 2D histograms are meant to provide a qualitative measure of performance, and clearly show the largest errors. The tables provide a more quantitative comparison, with an unweighted sum-of-distances type mixing of large and small errors. The references to the active optics approach refer to [Willson and Shafer, 91b], who were kind enough to supply us with their data. References to quadratic-box restoration refer to the method described in section 2.1, [Boult and Wolberg, 91] and [Wolberg and Boult, 91], with edge values determined by linear interpolation. Details on cubic convolution, which is used in figure 6 and table 2, can be found in the two previous references as well as [Rifman and McKinnon, 74] and [Park and Schowengerdt, 83]. We note that the figures use cubic convolution in the integrating resampling approach. We also tested cubic convolution using point sampling, and its performance was slightly worse.
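For reference, the parametric cubic-convolution kernel discussed above (the parameter A follows [Park and Schowengerdt, 83]; A = -1 sharpens more than the common A = -0.5, and A = 0 shrinks the support to one sample on each side) can be sketched as follows. The 1D point-sampling interpolator is our own illustration, not the paper's integrating resampler:

```python
import math

def cubic_conv_kernel(s, a=-1.0):
    """Parametric cubic-convolution interpolation kernel.
    a = -1 is the value used in the paper's figures."""
    s = abs(s)
    if s <= 1.0:
        return (a + 2.0) * s**3 - (a + 3.0) * s**2 + 1.0
    if s < 2.0:
        return a * s**3 - 5.0 * a * s**2 + 8.0 * a * s - 4.0 * a
    return 0.0

def interp1d_cubic(samples, x, a=-1.0):
    """Point-sampled cubic-convolution reconstruction of a 1D signal
    at fractional position x (border samples are clamped)."""
    i = math.floor(x)
    total = 0.0
    for k in range(i - 1, i + 3):           # 4-sample support
        kk = min(max(k, 0), len(samples) - 1)
        total += samples[kk] * cubic_conv_kernel(x - k, a)
    return total
```

The kernel interpolates exactly (it is 1 at s = 0 and 0 at integer offsets) and reproduces linear ramps for any A, so differences between A values only appear around edges, which matches the small sharpening effect reported in Table 2.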

3.2.1 CCTV camera based examples

These images were collected at CMU, and used in [Willson and Shafer, 91b]. They used a General Imaging camera connected to a Matrox frame grabber. The lens was a Cosmicar motorized zoom lens (12.5-75mm), with a minimum focus distance of 1.2m. For imaging they used a Corion IR blocker plus filters for R, G and B.* The 1/2" checkerboard was imaged at a distance of 1.5m. The original blue channel and initial color histograms were shown in Fig. 2.

3.3 Improvement to the image warping technique

If the original images had been taken at a point of maximal focus for yellow (or "white"),

* They used Wratten filters: #25 + 0.9ND for red, #58 + 0.6ND for green, and #47B for blue.

then the results of the image warping technique would probably have been even better. A significant part of our error is due to the differential blur between wavelengths, which is not corrected by warping. In the CCTV case this difference should be maximal when focused on an extremal wavelength (e.g. blue), as was the case here. Note that focusing on yellow or an unfiltered image should reduce the error in the uncorrected images as well, but should not affect the results of the active optics approach. The image warping technique would also likely benefit from a denser set of feature points, especially near the edges where the lens distortions are changing most quickly. Future work will examine the calibration techniques to correct both for chromatic aberration and radial/tangential lens distortion. Further, we will be looking into determining a functional form, parameterized by zoom, focus and aperture settings, for the correction functions. As discussed below, one of the main advantages of the active optics approach is the ability to correct for axial aberrations, i.e. wavelength-dependent blur. An interesting approach would be to use active focus for each channel, then use image warping on the results to correct for lateral chromatic aberration. This should provide a good tradeoff between fidelity and cost of implementation since it would require only active focus control. In fact, the focus differences might be implemented directly in an RGB sensor, e.g. by beam splitting and imaging each channel separately with a slightly different focal length.
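The core idea of correcting lateral aberration by resampling each color band through its own warp can be illustrated with a deliberately simplified model in which red and blue differ from green only by a radial magnification about the optic axis. The real method uses a calibrated cubic-spline mesh and the restoration filters discussed earlier; the bilinear resampler and the scale factors below are stand-ins of our own:

```python
import numpy as np

def radial_rescale(channel, scale, center=None):
    """Resample one channel so coordinates are magnified by `scale`
    about `center`, using bilinear interpolation. This stands in for
    the paper's calibrated warp; a real correction would use a
    measured, spatially varying displacement field."""
    h, w = channel.shape
    cy, cx = center if center is not None else ((h - 1) / 2.0, (w - 1) / 2.0)
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sy = cy + (ys - cy) * scale            # pull from magnified positions
    sx = cx + (xs - cx) * scale
    y0 = np.clip(np.floor(sy).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(sx).astype(int), 0, w - 2)
    fy = np.clip(sy - y0, 0.0, 1.0)
    fx = np.clip(sx - x0, 0.0, 1.0)
    top = channel[y0, x0] * (1 - fx) + channel[y0, x0 + 1] * fx
    bot = channel[y0 + 1, x0] * (1 - fx) + channel[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def correct_lateral_ca(rgb, scale_r=1.002, scale_b=0.998):
    """Warp R and B onto G's geometry (scale factors are hypothetical)."""
    out = rgb.astype(float).copy()
    out[..., 0] = radial_rescale(out[..., 0], scale_r)
    out[..., 2] = radial_rescale(out[..., 2], scale_b)
    return out
```

Because the green channel is left untouched, only two resampling passes are needed per frame, and the same idea extends directly to the mesh-based warp: only the mapping from output to source coordinates changes.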

4 Critical Comparison

We now critically compare the two techniques, in a relative fashion. We show a + when the image warping technique has the advantage, a - when the active optics approach has the advantage, and a ± when the advantage might depend on the application.

+ Warping can be applied to images taken with an "RGB" camera where each frame is collected simultaneously. Thus it has the potential for use in color sequences.

+ Warping does not require specialized equipment. While the use of motorized zoom/focus lenses is growing, the ability to precisely translate the camera between images is less common.

+ Warping can handle significant chromatically varying geometric distortions even if no focus/zoom could correct for them (such as higher-order lens distortions).

+ The warping approach holds the potential to correct for other geometric and radiometric aberrations at the same time it corrects for chromatic aberrations. This, however, would need more calibration information.

+ The warping approach can be successfully applied to lenses that have undergone "chromatic aberration correction". Because of the complex nature of the chromatic aberrations on such lenses, image warping may, depending on your error criterion, do better than active optics.

± Because image warping, as described here, is local in nature, errors in the localization of calibration features will have a local effect. Such errors, however, will not be mitigated, except in spatial extent, by the number of features. If we used a global model for the distortion, then we might offset feature localization errors by overconstraining the model.

- For simple (uncorrected) lenses, the active optics approach yields better overall results since it can refocus the channels.* We cannot quantitatively say how much active optics gains, since the image warping results should improve if the images are taken focused with yellow or unfiltered light.

- The active optics approach does not need the same amount of calibration information for operation on a number of focus planes. They would only need to compute the change in zoom, and shifts, for each different depth. The current image warping would need a full cubic-spline mesh for each depth, though a more global model might be developed.

* We were not able to determine the importance of this last stage of the CMU active optics approach.

5 Conclusions and future work

This paper demonstrated the idea of image warping for the correction of chromatic aberration using images from two different camera /

lenses. The method compared reasonably well, both qualitatively and quantitatively, to the CMU active optics approach [Willson and Shafer, 91b]. The proposed warping methods used recently developed image reconstruction/restoration methods [Boult and Wolberg, 91], which were shown to outperform other techniques.

Acknowledgments

This work was supported in part by DARPA Contract #N00039-84-C-0165 and in part by the NSF PYI award #IRI-90-57951, with additional support from Siemens and AT&T. Thanks to Reg Wilson, Steve Shafer and the folks at the CMU Calibrated Imaging Lab for the images, the data, the early drafts of the tech report, and for generally useful discussions about image acquisition.

References

[Boult and Wolberg, 91] T. Boult and G. Wolberg. Local image reconstruction and sub-pixel restoration algorithms. Technical report, Center for Research in Intelligent Systems, Department of Computer Science, Columbia University, 1991.

[Fant, 86] K.M. Fant. A nonaliasing, real-time spatial transform technique. IEEE Computer Graphics and Applications, 6(1):71-80, January 1986. See also "Letters to the Editor" in Vol. 6 No. 3, pp. 66-67, Mar. 1986 and Vol. 6 No. 7, pp. 3-8, July 1986.

[Green, 89] W.B. Green. Digital Image Processing: A Systems Approach. Van Nostrand Reinhold Co., NY, 1989.

[Kingslake, 78] R. Kingslake. Lens Design Fundamentals. Academic Press, NYC, NY, 1978.

[Klinker, 88] G.J. Klinker. A Physical Approach to Color Image Understanding. PhD thesis, Carnegie Mellon University, Pittsburgh, PA, May 1988.

[Krotkov, 87] E.P. Krotkov. Exploratory visual sensing for determining spatial layout with an agile stereo camera system. PhD thesis, University of Pennsylvania, Philadelphia, PA, April 1987.

[Laikin, 91] M. Laikin. Lens Design. Marcel Dekker, Inc., NY, 1991.

[Park and Schowengerdt, 83] S.K. Park and R.A. Schowengerdt. Image reconstruction by parametric cubic convolution. Computer Vision, Graphics, and Image Processing, 23:258-272, 1983.

[Rifman and McKinnon, 74] S.S. Rifman and D.M. McKinnon. Evaluation of digital correction techniques for ERTS images. Technical Report 20634-6003-TU-00, TRW Systems, Redondo Beach, Calif., July 1974.

[Slama et al., 80] C.C. Slama, C. Theurer, and S.W. Henriksen. Manual of Photogrammetry. American Society of Photogrammetry, Falls Church, VA, Fourth Edition, 1980.

[Willson and Shafer, 91a] Reg Willson and Steven A. Shafer. Active lens control for high precision computer imaging. In Proceedings of the IEEE Conference on Robotics and Automation, volume 3, pages 2063-2070, Sacramento, CA, 1991.

[Willson and Shafer, 91b] Reg Willson and Steven A. Shafer. Dynamic lens compensation for active color imaging and constant magnification focusing. Technical report, The Robotics Institute, Carnegie Mellon University, 1991.

[Wolberg and Boult, 89] G. Wolberg and Terrance E. Boult. Separable image warping with spatial lookup tables. Computer Graphics, 23(3):369-378, July 1989. (SIGGRAPH '89 Proceedings.)

[Wolberg and Boult, 91] G. Wolberg and Terrance E. Boult. Imaging consistent reconstruction/restoration and the integrating resampler. Technical report, Center for Research in Intelligent Systems, Department of Computer Science, Columbia University, 1991.