Color Influence on Accuracy of 3D Scanners Based on Structured Light

Sophie Voisin^a,b, David L. Page^a, Sebti Foufou^b, Frédéric Truchetet^c and Mongi A. Abidi^a

^a IRIS Lab, Electrical & Computer Engineering Dept., The University of Tennessee, 1508 Middle Drive, Knoxville, TN, USA;
^b Laboratoire Le2i UMR CNRS 5158, Université de Bourgogne, Faculté Mirande, Dijon, France;
^c Laboratoire Le2i UMR CNRS 5158, IUT Le Creusot, 12 rue de la Fonderie, Le Creusot, France

ABSTRACT

The characterization of commercial 3D scanners makes it possible to acquire precise and useful data. The accuracy of range and, more recently, of color for 3D scanners is usually studied separately, but when the 3D scanner is based on structured light with a color coding pattern, the influence of color on range accuracy should be investigated. The commercial product that we have tested has the particularity that it can acquire data under ambient light, instead of the controlled environment required by most available scanners. Therefore, based on related work in the literature and on experiments we performed under a variety of standard illuminants, we designed a setup to control illuminant interference. The setup consists of acquiring the well-known Macbeth ColorChecker under a controlled environment as well as under ambient daylight. The results show range variations with respect to color. We performed several statistical studies to show how the range results evolve with respect to the RGB and HSV channels. In addition, a systematic noise error has been identified. This noise depends on the object color: a subset of colors shows strong noise errors while other colors have minimal or even no systematic error under the same illuminant.

Keywords: 3D Scanner, Structured Light, Range Characterization, Color

1. INTRODUCTION

Recent work on 3D scanners has improved their accuracy. However, differences still exist depending on the scanning technique. In this paper, we consider scanners based on Coded Structured Light (CSL) because they provide fast data collection and even allow real-time applications such as security (face or object recognition), reverse engineering (rapid prototyping) and quality control (inspection). Although manufacturers provide specifications, the first step in evaluating a system is usually to validate its performance using characterization methods. In general, range characterization and color characterization are studied separately. Range characterization has been done with specific objects whose neutral colors avoid reflectance problems. Separately, color characterization has been done with the well-known Macbeth ColorChecker. Publications dealing with range accuracy1–8 and how to evaluate it fall into two distinct categories: evaluation of the optical transfer function1–3 and conservation of geometrical features.4–8 Publications dealing with color accuracy9–12 and how cameras reproduce color information are also divided into two categories: colorimetric-based9 camera characterization and spectral-based10–12 characterization. We have based our research on Clark and Robson,8 who showed color influence on range accuracy for a laser line scanner. The question we address is what happens when the 3D scanner is based on a color coding pattern and not a laser. We have measured different features such as planarity, relative height and sampling, and studied the results with regard to different color spaces such as RGB and HSV.

Further author information: (Send correspondence to S.V.)
S.V.: E-mail: [email protected], Telephone: 1 865 974 5457
D.L.P.: E-mail: [email protected], Telephone: 1 865 974 8520
S.F.: E-mail: [email protected], Telephone: +33 (0)3 80 39 38 05
F.T.: E-mail: [email protected], Telephone: +33 (0)3 85 73 10 92
M.A.A.: E-mail: [email protected], Telephone: 1 865 974 5454

Our setup consists of taking one "shot" of the Macbeth ColorChecker.13 As preliminary work we took several acquisitions under different standard illuminants provided by a light booth, and we observed that the system we characterize is sensitive to the illuminant. Therefore we decided to study the results under selected illuminants as well as under ambient daylight. This paper is organized as follows: the experimental system and its properties are presented in Section 2. Section 3 presents the system setup and some preliminary investigations. Section 4 details the experimental results. Conclusions and future work are discussed in Section 5.

2. SCANNER PRESENTATION

Figure 1. These nine patterns compose the projected sequence of the system. Two patterns are projected with a spatio-temporal modulation to provide six images (a)-(f); then the three colors are projected: red (g), green (h) and blue (i). Because the system does not provide these images, we used an extra camera to take these pictures.

The system we have tested is a commercial product, described in detail in the publications of Geng.14, 15 It is based on active stereovision to reconstruct a 3D triangular mesh and is a CSL system with spatio-temporal modulation. More precisely, the system first projects a set of patterns composed of vertical stripes that are spatio-temporally modulated three times, Figures 1(a)-(i). At the same time, three CCD cameras acquire the pattern distorted by the surface relief of the captured object. The computing principle is then based on active triangulation16, 17 with respect to the coded pattern, as illustrated in Figure 2 and by the equation system (1), where (x, y, z) are the Cartesian coordinates of the computed point O, B is the baseline (the distance between the optical center of the sensor and the center of the projector light source), f is the focal length of the sensor, (u, v) are the pixel coordinates in the sensor image, cot(·) is the cotangent function, and θ is the projection angle corresponding to the computed point O(x, y, z). Both B and θ are computed with a standard calibration.

$$
\begin{cases}
x = \dfrac{B}{f\cot\theta - u}\, u \\
y = \dfrac{B}{f\cot\theta - u}\, v \\
z = \dfrac{B}{f\cot\theta - u}\, f
\end{cases}
\qquad (1)
$$

In addition to this triangulation method, the system uses the three different views provided by the CCD cameras to reconstruct the 3D information. This approach makes the computation more robust to noise, but it must deal with occlusion problems, which makes the placement of the two additional cameras more difficult for complex objects. Moreover, the system has several specifications that we have to take into account: it can acquire data under an ambient illuminant and not only in a dark environment, as is the case for most 3D CSL scanners; it has an option to adapt the intensity of the projector manually or automatically; and it does not need a user calibration step before utilization. The latter specification provides an easy and friendly operation mode, compared with systems that can require several hours of preparation and risk calibration mistakes. However, it does not allow us to know the intrinsic and extrinsic parameters, nor to change the physical configuration of the system.

Figure 2. Geometric representation of multiple stripes: θi is computed by solving a one-to-one correspondence problem between the stripe λi and the projection angle θi. αk,l is geometrically calculated using the coordinates of each pixel (k, l) in the sensor image. Then, using the triangulation principle, each visible point O of the object can be calculated.
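To make the triangulation concrete, the following minimal sketch (our own illustration, not the vendor's implementation) evaluates Equation 1 for one decoded stripe; the variable names mirror the symbols above and all sample values are hypothetical.

```python
import numpy as np

def triangulate_point(u, v, B, f, theta):
    """Recover the 3D point O(x, y, z) of Equation 1.

    u, v  : pixel coordinates in the sensor image
    B     : baseline between the sensor optical center and the projector
    f     : focal length of the sensor (same units as u and v)
    theta : projection angle decoded from the stripe pattern (radians)
    """
    denom = f / np.tan(theta) - u      # f * cot(theta) - u
    if abs(denom) < 1e-9:
        raise ValueError("degenerate geometry: projection ray nearly parallel")
    scale = B / denom                  # common factor B / (f*cot(theta) - u)
    return scale * u, scale * v, scale * f

# Purely hypothetical values, for illustration only.
x, y, z = triangulate_point(u=120.0, v=85.0, B=200.0, f=1500.0,
                            theta=np.deg2rad(75.0))
```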

3. EXPERIMENTAL SETUP AND PRELIMINARY INVESTIGATION

In this section, we briefly present the experimental setup we used as well as some preliminary investigations: we determined which features and illuminants to use and implemented specific segmentation tools.

3.1. Setup

We performed different experiments using the commercial CSL scanner previously described. A constant setup was used during the experiments: it consisted of placing a well-known Macbeth ColorChecker chart, Figure 3, in a light booth. More precisely, we placed the chart perpendicular to the acquisition system, which we treat as a rectangular black box, at 100 cm from the front of the scanner, following the manufacturer's recommendation, as seen in Figure 4. We utilized the six different illuminants that the light booth provides: "A", "Daylight", "Cool White", "UV", "U30" and "Horizon".

Figure 3. (a) Image of the Macbeth ColorChecker chart and (b) the color names.

3.2. Preliminary investigation

To evaluate the range accuracy of the obtained 3D points, we needed to know the deviation of the points from their expected positions. Because the exact reference plane was unknown for our experimental setup, we had to estimate it using geometric tools. Therefore we computed for each color patch what we call the "flatness", corresponding to the maximum deviation of the points from the plane that best fits the data. We assumed that the smaller the flatness, the closer the computed plane is to the reference one.

Figure 4. The recommended setup for the scanner. The suggested depth of field is between 85 cm and 115 cm in front of the system.

Figure 5. These three views of pseudo range images, (a) front view, (b) front view sideways and (c) back view sideways, represent the deviation of each point of the whole chart with respect to the reference plane (grid).

From this first analysis we obtained the reference plane, which we then used to compute what we call the "thickness" of each color patch, corresponding to the maximum deviation of the points from the reference plane. To perform our measurements and obtain the reference plane, we took acquisitions of the Macbeth ColorChecker under the six different illuminants of the light booth. We did not use all of the illuminants: "A" and "Horizon" were excluded because they influence the result too much. In addition to this estimation of the reference plane, we investigated the influence of the illuminant on range accuracy in order to minimize its interference with the color study, since color and illuminant are correlated (the color response of the camera changes with respect to the illuminant). At the end of this preliminary work we had obtained a reference plane and an acceptable illuminant to study the color influence on range accuracy. However, the way we segmented the color patches was not satisfactory: the segmentation was performed manually and over a reduced area, to avoid human-perception errors and possible false overlay from the acquisition system, so the patch dimensions were around 30 mm × 30 mm instead of the 40 mm × 40 mm of the real Macbeth ColorChecker. Therefore we implemented a segmentation tool to automatically separate the different patches. It is a surface-based segmentation that uses color as its feature. We implemented a region-growing algorithm with two criteria, color and neighborhood membership to a part (for robustness with respect to noise), together with a merge algorithm to avoid over-segmentation.
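As an illustration of the flatness and thickness measurements described above, here is a minimal sketch (our own convention, not the exact tool used in the study) that fits the plane by SVD and measures point-to-plane deviations:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point cloud.

    Returns the centroid and the unit normal, taken as the right
    singular vector associated with the smallest singular value.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

def flatness(points):
    """Peak-to-peak deviation of the points from their own best-fit plane."""
    centroid, normal = fit_plane(points)
    d = (points - centroid) @ normal        # signed point-to-plane distances
    return d.max() - d.min()

def thickness(points, ref_centroid, ref_normal):
    """Deviation of a patch from the reference plane.

    The paper reports the maximum deviation; we expose the signed
    minimum, signed maximum and their spread so either reading of
    "maximum deviation" can be taken from the result.
    """
    d = (points - ref_centroid) @ ref_normal
    return d.min(), d.max(), d.max() - d.min()
```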

4. RESULTS

We used the results of the acquisition under the "Daylight" illuminant, which is close to the light that the manufacturer recommends: "acceptable lighting can be achieved in a typical office or retail environment." Figure 5 (respectively Figure 6) represents the result of this acquisition with a pseudo gray scale for each point of the Macbeth ColorChecker (respectively of each patch). The result is represented as a range image with respect to the maximum deviation for the chart (respectively for each patch). The first observation was that the system does not react the same way to all colors (patches). We found two categories of patches: one composed of patches that seem to exhibit a systematic (sinusoidal) noise, and one composed of the remainder.

Figure 6. These pseudo range images represent the deviation of each point for each patch, with respect to the reference plane (grid): (a) dark skin, (b) light skin, (c) blue sky, (d) foliage, (e) blue flower, (f) bluish green, (g) orange, (h) purplish blue, (i) moderate red, (j) purple, (k) yellow green, (l) orange yellow, (m) blue, (n) green, (o) red, (p) yellow, (q) magenta, (r) cyan, (s) white 9.5, (t) neutral 8, (u) neutral 6.3, (v) neutral 5, (w) neutral 3.5, (x) black 2. For visibility we have adapted the scale for each patch, with one rendering for color display and the other for grey-scale printing.

We have put the bluish green (Figure 6(f)), orange yellow (Figure 6(l)), yellow (Figure 6(p)), white 9.5 (Figure 6(s)) and neutral 8 (Figure 6(t)) patches in the first category, and we have analyzed these patches with respect to the RGB color space. These five original patches have values above 100 units in each RGB channel, Table 1. To generalize this observation, we looked for similar patches: four of the nineteen remaining patches also have values above 100 in each RGB channel, namely the light skin (Figures 6(b) and 7(a)), the blue flower (Figures 6(e) and 7(b)), the magenta (Figures 6(q) and 7(c)) and the neutral 6.3 patch (Figures 6(u) and 7(d)). After changing the way we display the data, Figure 7, we observed that these patches also exhibit the sinusoidal pattern, even if it is less obvious. We added the yellow green patch (Figures 6(k) and 7(e)) because it shows this variation and its blue channel is just under the threshold. We have also investigated the thickness with respect to the RGB color space. Except for the white 9.5 patch, the patches have a minimum variation from -1.410 mm (backward) to 0.134 mm (forward) and a maximum variation from 0.324 mm to 1.258 mm with respect to the reference plane. We observed, Figure 8, that the thickness describes a rough parabola with respect to each RGB channel. This means that range accuracy would be optimized if the object color is composed of RGB values between 64 and 160 units (0.25 and 0.625 in normalized values). At this point, studies with respect to the HSV color space have not yielded further observations.
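To restate the two empirical criteria compactly, here is a hypothetical screening helper of our own (not part of the original study), using the 0-255 channel scale of Table 1:

```python
def shows_sinusoidal_noise(rgb, threshold=100.0):
    """Patches whose R, G and B values all exceed ~100 units tended to
    exhibit the systematic sinusoidal noise."""
    return all(c > threshold for c in rgb)

def in_optimal_band(rgb, lo=64.0, hi=160.0):
    """Thickness was smallest for colors whose channels lie roughly
    between 64 and 160 units (0.25-0.625 normalized)."""
    return all(lo <= c <= hi for c in rgb)

# Values from Table 1.
print(shows_sinusoidal_noise((131.049, 183.313, 163.426)))  # bluish green: True
print(in_optimal_band((108.325, 116.139, 109.029)))         # neutral 6.3: True
```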

Figure 7. These graphs represent the variation of certain patches with respect to the reference plane: (a) light skin, (b) blue flower, (c) magenta, (d) neutral 6.3, (e) yellow green. There are two different displays per patch for visibility reasons.

Table 1. Thickness - RGB channels (channel values on a 0-255 scale; thickness in mm; rows sorted by red channel value).

Color name     | Red channel | Green channel | Blue channel | Thickness
Black 2        |      13.148 |        14.181 |       13.056 |     2.115
Neutral 3.5    |      33.427 |        35.558 |       32.880 |     1.219
Blue           |      39.078 |        44.813 |       65.315 |     1.106
Foliage        |      49.815 |        54.446 |       40.166 |     1.258
Green          |      59.982 |        83.331 |       56.034 |     0.948
Neutral 5      |      62.283 |        66.269 |       63.096 |     0.862
Cyan           |      66.831 |       100.369 |      111.734 |     1.029
Dark skin      |      69.421 |        56.260 |       51.104 |     1.267
Purplish blue  |      78.037 |        79.782 |      107.106 |     0.863
Purple         |      87.210 |        60.459 |       69.469 |     1.129
Blue sky       |      93.876 |       102.691 |      114.910 |     0.892
Neutral 6.3    |     108.325 |       116.139 |      109.029 |     0.737
Bluish green   |     131.049 |       183.313 |      163.426 |     1.471
Yellow green*  |     131.083 |       149.764 |       91.256 |     1.076
Blue flower    |     136.763 |       133.305 |      151.665 |     1.143
Red            |     142.591 |        66.644 |       60.992 |     0.984
Moderate red   |     145.887 |        79.019 |       80.141 |     0.912
Orange         |     169.913 |       106.068 |       81.489 |     1.081
Neutral 8      |     170.630 |       180.057 |      168.898 |     1.330
Magenta        |     187.200 |       108.091 |      129.326 |     1.129
Light skin     |     187.380 |       142.288 |      129.610 |     1.039
Orange yellow  |     203.594 |       154.834 |      100.667 |     2.013
Yellow         |     223.311 |       183.608 |      106.446 |     1.501
White 9.5      |     241.781 |       242.537 |      239.297 |    23.585


Figure 8. Thickness with respect to the RGB color space for each patch of the Macbeth ColorChecker (a) and zoomed in (b).

The evolution of the thickness with respect to the hue channel, Figure 9, and the saturation channel, Figure 10, does not show a correlation that we might use for correction. The evolution of the thickness with respect to the value channel, Figure 11, logically confirms the conclusions already drawn from the RGB color space, due to the relation between the two color spaces.
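For reference, the RGB-to-HSV mapping behind Figures 9-11 can be sketched with Python's standard library (a minimal sketch under our assumptions: RGB inputs on the 0-255 scale of Table 1, hue rescaled to degrees):

```python
import colorsys

def to_hsv(r, g, b):
    """Map 0-255 RGB values to (hue in degrees, saturation, value).

    Note that V = max(R, G, B) / 255, which is why the thickness-versus-
    value plot echoes the observation made in the RGB color space.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

print(to_hsv(131.049, 183.313, 163.426))  # bluish green patch from Table 1
```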


Figure 9. Thickness with respect to the Hue values from the HSV color space for each patch of the Macbeth ColorChecker (a) and zoomed in (b).

In addition, we have observed that certain patches, such as blue sky (Figure 6(c)), foliage (Figure 6(d)), moderate red (Figure 6(i)) and purple (Figure 6(j)), deviate only in front of the reference plane, but our studies were inconclusive about the source of this phenomenon in both the RGB and the HSV color spaces.


Figure 10. Thickness with respect to the Saturation values from the HSV color space for each patch of the Macbeth ColorChecker (a) and zoomed in (b).


Figure 11. Thickness with respect to the Value values from the HSV color space for each patch of the Macbeth ColorChecker (a) and zoomed in (b).

5. CONCLUSION AND FUTURE WORK

In summary, we have shown that with our proposed setup we can identify the influence of color on 3D scanner accuracy. We ran this experiment with a commercial 3D scanner based on structured light. We observed that, for our particular coding pattern, color induces a systematic noise whose magnitude depends on the color of the captured object. This work characterizes the effect of color on range and should serve as a useful resource for selecting object colors prior to scanning. We also studied the range accuracy with respect to the RGB and HSV color spaces in order to quantify good color compositions for the captured object. Future work will investigate the systematic noise in an effort to correct it and improve the quality of the data, and will increase the number of color samples beyond the 24 values per RGB channel considered here, for possible improvement with respect to the object color.

REFERENCES

1. M. Goesele, C. Fuchs, and H.-P. Seidel, "Accuracy of 3D Scanners by Measurement of the Slanted Edge Modulation Transfer Function," in 3-D Digital Imaging and Modeling, 3DIM 2003, Proceedings, Fourth International Conference on, pp. 37–44, (Banff, Alberta, Canada), 2003.
2. S. E. Reichenbach, S. K. Park, and R. Narayanswamy, "Characterizing Digital Image Acquisition Devices," Optical Engineering (SPIE) 30, pp. 170–177, February 1991.
3. S. Doré and Y. Goussard, "Experimental Determination of CT Point Spread Function Anisotropy and Shift-Variance," in Engineering in Medicine and Biology Society, Proceedings of the 19th Annual International Conference of the IEEE (IEEE/EMBS), (2), pp. 788–791, (Chicago, Illinois, USA), 1997.
4. J.-A. Beraldin, S. El-Hakim, and F. Blais, "Performance Evaluation of Three Active Vision Systems Built at the National Research Council of Canada," in Proceedings of the Conference on Optical 3D Measurement Techniques, pp. 352–361, 1995.
5. S. El-Hakim, J.-A. Beraldin, and F. Blais, "A Comparative Evaluation of the Performance of Passive and Active 3D Vision Systems," in St. Petersburg Great Lakes Conference on Digital Photogrammetry and Remote Sensing (SPIE), 2646, pp. 14–25, (St. Petersburg, Russia), 1995.
6. J.-A. Beraldin and M. Gaiani, "Evaluating the Performance of Close Range 3D Active Vision Systems for Industrial Design Applications," in Proceedings of Electronic Imaging Science and Technology – Videometrics VIII (IS&T/SPIE), (5665), pp. 67–77, (San José, California, USA), 2005.
7. G. Sansoni, M. Carocci, and R. Rodella, "Calibration and Performance Evaluation of a 3D Imaging Sensor Based on the Projection of Structured Light," IEEE Transactions on Instrumentation and Measurement 49(3), pp. 628–636, 2000.
8. J. Clark and S. Robson, "Accuracy of Measurements Made with a Cyrax 2500 Laser Scanner against Surfaces of Known Colour," in Geo-Imagery Bridging Continents, XXth ISPRS Congress, pp. 1031–1036, (Istanbul, Turkey), 2004.
9. T. Johnson, "Methods for Characterizing Colour Scanners and Digital Cameras," Displays 16(4), pp. 183–192, 1996.
10. G. D. Finlayson, S. Hordley, and P. M. Hubel, "Recovering Device Sensitivities with Quadratic Programming," in IS&T/SID Sixth Color Imaging Conference: Color Science, Systems and Applications, 6, pp. 90–95, (Scottsdale, Arizona, USA), 1998.
11. J. Y. Hardeberg, H. Brettel, and F. Schmitt, "Spectral Characterisation of Electronic Cameras," in International Symposium on Electronic Image Capture and Publishing EICP.98, (3409), pp. 100–109, (Zürich, Switzerland), 1998.
12. L. MacDonald and W. Ji, "Colour Characterisation of a High-Resolution Digital Camera," in The First European Conference on Color in Graphics, Imaging and Vision (CGIV), (1), pp. 433–437, (Poitiers, France), 2002.
13. C. McCamy, H. Marcus, and J. Davidson, "A Color-Rendition Chart," Journal of Applied Photographic Engineering 2(3), pp. 95–99, 1976.
14. J. Geng, P. Zhuang, P. May, S. Yi, and D. Tunnell, "3D FaceCam™: A Fast and Accurate 3D Facial Imaging Device for Biometrics Applications," in Proceedings SPIE – Biometric Technology for Human Identification, 5404, pp. 316–327, (Bellingham, WA), 2004.
15. Z. J. Geng, "Rainbow 3-Dimensional Camera: New Concept of High-Speed 3-Dimensional Vision System," Optical Engineering 35, pp. 376–383, February 1996.
16. R. I. Hartley and P. Sturm, "Triangulation," Computer Vision and Image Understanding 68, pp. 146–157, November 1997.
17. J. Forest i Callado, New Methods for Triangulation-Based Shape Acquisition Using Laser Scanners, PhD thesis, Universitat de Girona, Girona, May 2004.