Three-dimensional machine vision utilising optical coherence tomography with a direct read-out CMOS camera

Patrick Eganαβa, Fereydoun Lakestaniβ, Maurice P. Whelanβb and Michael J. Connellyα

α Optical Communications Research Group, University of Limerick, Ireland;
β Photonics Sector, Institute for Health and Consumer Protection, European Commission Joint Research Centre, Ispra, Italy

ABSTRACT

Presented is a comprehensive characterisation of a complementary metal-oxide semiconductor (CMOS) and digital signal processor (DSP) camera, and its implementation as an imaging tool in full-field optical coherence tomography (OCT). The camera operates as a stand-alone imaging device, with the CMOS sensor, analogue-to-digital converter, DSP, digital input/output and random access memory all integrated into one device, autonomous machine vision being its intended application. The 1024 × 1024 pixels of the CMOS sensor function as a two-dimensional photodiode array, being randomly addressable in space and time and producing a continuous logarithmic voltage proportional to light intensity. Combined with its 120 dB logarithmic response range and fast frame rates on small regions of interest, these characteristics allow the camera to be used as a fast full-field detector in carrier-based optical metrology. Utilising the camera in an OCT setup, three-dimensional imaging of a typical industrial sample is demonstrated with lateral and axial resolutions of 14 µm and 22 µm, respectively. By electronically sampling a 64 × 30 pixel two-dimensional region of interest on the sensor at 235 frames per second as the sample was scanned in depth, a volumetric measurement of 875 µm × 410 µm × 150 µm was achieved without electromechanical lateral scanning. The approach presented here offers an inexpensive and versatile alternative to traditional OCT systems and provides the basis for a functional machine vision system suitable for industrial applications.

Keywords: Optical coherence tomography, full-field, electronic scanning, CMOS camera, machine vision.

1. INTRODUCTION

With micrometre resolution and cross-sectional imaging capabilities, optical coherence tomography (OCT)1 has become a prominent biomedical imaging technique, particularly suited to ophthalmic applications. However, current OCT research has concentrated on achieving ultra-high resolution for various biomedical applications while largely ignoring industrial and machine vision applications, where the typical tens-of-micrometre resolution of conventional OCT is sufficient. Methods using an expensive light source2 or adaptive optics3 have provided ultra-high resolution optical coherence tomography (UHR OCT) at considerable cost and complexity and are indicative of current OCT development. Furthermore, as higher resolution OCT is pursued, its three-dimensional imaging capability, called full-field OCT, has been largely ignored. An exploitation of lower-specification OCT for simple and functional three-dimensional machine vision has yet to be demonstrated.

The principle of OCT is low coherence interferometry. The optical setup typically consists of a Michelson interferometer with a low coherence light source, light being split into and recombined from reference and sample arms. The path length of one arm is translated longitudinally in time. A property of low coherence interferometry is that interference, i.e. the series of dark and bright fringes, is only achieved when the position of the translating arm is inside the coherence gate of the light source. This creates an optical carrier amplitude modulated envelope as path length is varied, where the peak of the envelope corresponds to path length matching. It is this coherence gating feature that allows OCT to resolve sample microstructure in depth. Therefore translating one arm of the interferometer has two functions: depth scanning and a Doppler-shifted optical carrier are both accomplished by path length variation.

Further author information, send correspondence to:
a [email protected]; http://www.ece.ul.ie/research/ocrg/ocrg.htm
b [email protected]; phone +39 0332 78 6063; http://ihcp.jrc.cec.eu.int

Figure 1. Active pixel sensor architecture. The logarithmic resistance of the photodiode load resistor gives a logarithmic response to light intensity.

Figure 2. Functional blocks of the CMOS-DSP camera. Pixel output voltage is digitised by an internal 8-bit ADC.

To achieve three-dimensional imaging utilising OCT requires two-dimensional lateral scanning as the sample under test is scanned in depth. Using various techniques with CCD cameras, parallel OCT has been reported,4, 5, 6, 7 however the fixed frame rate of the CCD and the expense of the setup make these methods impractical for machine vision. Full-field OCT was demonstrated in a relatively simple optical setup using a 58 × 58 pixel CMOS smart detector that performed signal demodulation analogically.8 The drawbacks of this approach are the cost and complexity of designing and fabricating a custom sensor, the inflexibility of hardware-based demodulation, and a reduced sensor fill-factor because demodulation circuitry surrounds each pixel. An analysis of CMOS and CCD image sensors in speckle interferometry was presented9 in which random pixel access and a large dynamic response were highlighted as two remarkable characteristics of CMOS technology. Furthermore, a unique application of a direct read-out CMOS camera in machine vision, which exploited both these features, was reported.10 Two-dimensional imaging of a continuous spot-welding process was implemented with 2 µm lateral resolution. The large dynamic range of the CMOS sensor prevented the saturation and blooming that a CCD sensor would suffer when imaging welding flashes, and the direct read-out facilitated region of interest (ROI) imaging that reduced redundant image content, allowing real-time image processing. Operating as a stand-alone device in a high electrical noise environment, the CMOS camera proved more immune to noise than a CCD device due to its on-board digitisation. The development of the CMOS-DSP camera11 offered an autonomous and inexpensive machine vision system, integrating a CMOS image sensor12 with a digital signal processor and communication interfaces.
When applied to optical metrology in a technique using heterodyne interferometry,13 the camera achieved extremely fast frame rates on small ROIs and the technique demonstrated sensitivity comparable to a commercial laser Doppler vibrometer. In recent machine vision developments, the determination of tool wear has been implemented in a CCD vision system,14 by white light interferometry with a CCD camera,15 and in combination with computational metrology.16 Full-field OCT utilising a CMOS-DSP camera provides a plausible alternative to these techniques, with the added benefits of low cost and functionality, fast three-dimensional acquisition, integrated image/signal processing capabilities, and region of interest imaging. Presented here is a characterisation of the CMOS-DSP camera and its implementation as an imaging device for low-specification full-field optical coherence tomography. Demonstrating non-invasive three-dimensional imaging of a rough aluminium sample, the technique offers a simple, versatile and inexpensive solution to fast industrial measurement applications.

Figure 3. Optical setup of CMOS-DSP characterisation experiments. Digitised camera value was recorded as pixel output.

Figure 4. Full-field OCT optical setup. Components include: super luminescent diode (SLD), convex lens (L1), 50/50 beam splitter, polariser (POL), camera objective (CO), CMOS-DSP camera (CAM). The reference (REF) and sample (SMP) are of the same rough aluminium surface.

2. METHODOLOGY

2.1. Characterisation of the CMOS-DSP camera

2.1.1. Logarithmic response and camera noise

The diode connected MOSFET of the CMOS pixel (Fig. 1) gives a logarithmic output voltage in response to light intensity, i.e. the output voltage of the pixel, VD, decreases logarithmically as the input photocurrent, IP, increases. The digital signal processor of the camera functions as the central processing unit, controlling which pixels are accessed, the analogue-to-digital converter (ADC) parameters, and the pixel processing (Fig. 2). The logarithmic response of the camera was verified by recording the digitised pixel value as light intensity was varied over a 120 dB range, using a 750 nm laser diode in a simple optical setup (Fig. 3). To qualitatively measure the noise of the CMOS-DSP camera, a DC light intensity was incident on the camera and the digitised pixel output was recorded, using the same optical setup. In repeated measurements a random pixel was sampled at 99.571 kHz for 8096 samples and the data was downloaded from the RAM of the camera to the external PC for analysis. When modelling the 8-bit digitised output pixel value as a function of input light intensity, two internal camera parameters, gain and offset, were taken into account. These parameters are an electronic multiplication and addition, respectively, that operate on the analogue voltage output of the CMOS pixel before digitisation. The pixel output voltage response to light intensity is defined as

    V = m · log10(I / I0),    (1)

where I is the light intensity on the pixel and m and I0 are constants. Using experimental data and curve fitting, a model was developed relating input light intensity to output pixel value, defined as

    PIX = (G/2) · log10(a · I / b) + c · O,    (2)

where a, b, and c are constants, I is the light intensity on the pixel, and G and O are the values of the camera parameters gain and offset, respectively.
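As a concrete check, the pixel model of (2) can be inverted to recover intensity from a digitised pixel value; in the sketch below the constants a, b, c and the gain/offset settings are illustrative placeholders, not the paper's fitted values.

```python
import math

# Sketch of the pixel model of Eq. (2) and its inversion.
# a, b, c, G and O are illustrative placeholders, not the fitted values.
a, b, c = 1.0, 1e-5, 1.0
G, O = 2, 16  # camera 'gain' and 'offset' settings

def pixel_value(intensity):
    """Eq. (2): digitised pixel value from light intensity (W/m^2)."""
    return (G / 2) * math.log10(a * intensity / b) + c * O

def intensity_from_pixel(pix):
    """Invert Eq. (2) to recover the light intensity on the pixel."""
    return (b / a) * 10 ** ((pix - c * O) / (G / 2))

# Round trip: intensity -> pixel value -> intensity
I = 0.841
assert abs(intensity_from_pixel(pixel_value(I)) - I) / I < 1e-12
```

This inversion is what makes the pixel-response-time correction discussed next possible: knowing the intensity on a pixel, its response time can be estimated.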
The benefit of (2) is that given an 8-bit output pixel value it is possible to determine the light intensity acting on the pixel, and hence the pixel response time. This would be important in carrier-based detection at low intensities, when the pixel response might not be fast enough. In interferometry, and more specifically interferometry with partially coherent light, the intensity produced by the two interfering arms can be expressed in terms of the source intensity, IS, as

    I = k1 · IS + k2 · IS + 2 · √((k1 · IS) · (k2 · IS)) · ℜ[γ(τ)],    (3)

Figure 5. Rough aluminium sample plan, elevation and profile. Note 64 × 30 pixel ROI and surface roughness much greater than λSLD (820 nm).

Figure 6. Logarithmic response of the CMOS-DSP camera with model response (experimental data and PIX curve fits for gain settings G = 0, 1, 2, 3, over light intensities from 1e−5 to 1e+2 W·m−2). The camera is sensitive to light over a 120 dB range.

where k1 + k2 ≈ 1 represents the interferometer beam splitting ratio, and γ(τ) is called the complex degree of coherence, i.e. the interference envelope and carrier dependent on the reference arm scan or time delay τ, and whose recovery is of interest in OCT. Substituting equation (3) into equation (1) results in

    V = m · log10( (IS / I0) · [ (k1 + k2) + 2 · √(k1 · k2) · ℜ[γ(τ)] ] ).    (4)

Factorising, and using the logarithm property log(a · b) = log(a) + log(b), the output pixel voltage becomes

    V = m · log10(IS / I0) + m · log10{ (k1 + k2) + 2 · √(k1 · k2) · ℜ[γ(τ)] }.    (5)

Only the second term in (5) is time dependent, and it is clear that the AC term, γ(τ), is independent of the source intensity, IS, due to the logarithmic response property of the CMOS sensor. Hence, as a consequence of the logarithmic response of the CMOS-DSP camera, increasing the source intensity acts simply as a DC offset. Furthermore, to get a higher AC signal requires a higher ratio of the AC component with respect to the DC component, i.e. higher visibility. Therefore increasing the source intensity will in fact decrease the effective signal-to-noise ratio (SNR) of the OCT signal.

2.1.2. Pixel response time

Residual capacitance and MOSFET logarithmic resistance result in a CMOS pixel response time that is light intensity dependent. Behaving like a classical RC electronic circuit, the pixel response time was analysed in the time and frequency domains, assuming small signals; the former displaying exponential rise and decay with a time constant, and the latter exhibiting a low-pass filter characteristic with a −3 dB cut-off frequency. Using the simple optical setup (Fig. 3) with the 750 nm modulatable laser diode, the temporal and frequency responses were analysed. In the case of the time analysis, the laser diode was modulated with a low frequency (1 Hz) square waveform. Due to the parasitic capacitances and charge storing in the photodiode, the digitised pixel response was exponential.
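The source-intensity independence of the AC term in (5) can be verified numerically; the sketch below uses toy values for m, I0, the splitting ratio and γ(τ), not measured parameters.

```python
import numpy as np

# With a logarithmic sensor (Eq. 5), scaling the source intensity IS only
# shifts the output voltage by a DC offset; the AC (interference) part is
# unchanged. All constants here are toy values for illustration.
m, I0 = 1.0, 1e-6
k1 = k2 = 0.5
tau = np.linspace(-1, 1, 1001)
# Toy complex degree of coherence: Gaussian envelope times optical carrier.
gamma = np.exp(-(tau / 0.3) ** 2) * np.cos(40 * np.pi * tau)

def pixel_voltage(IS):
    I = k1 * IS + k2 * IS + 2 * np.sqrt(k1 * IS * k2 * IS) * gamma  # Eq. (3)
    return m * np.log10(I / I0)                                      # Eq. (1)

v1 = pixel_voltage(1.0)
v2 = pixel_voltage(100.0)          # 100x brighter source
assert np.allclose(v1 - v1.mean(), v2 - v2.mean(), atol=1e-9)  # same AC part
assert float((v2 - v1).std()) < 1e-9                           # pure DC shift
```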
Despite non-linear behaviour, the responses were approximated by normalised exponential rise and decay, defined as

    hR(t) = 1 − e^(−βR · t),    (6)

    hD(t) = e^(−βD · t),    (7)

where βR and βD are the exponential rise and decay constants, respectively. To determine the frequency response, the laser diode was modulated with a sinusoidal waveform whose frequency was varied (1–10000 Hz).
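One simple way to estimate the constants in (6) and (7) from sampled pixel data is to linearise and least-squares fit; the sketch below recovers a decay constant of the order reported in the time analysis, using synthetic noise-free data and the paper's 99.571 kHz sampling rate.

```python
import numpy as np

# Estimate the decay constant beta_D of Eq. (7) by linearising:
# ln h(t) = -beta_D * t, then a straight-line least-squares fit.
dt = 1.0 / 99.571e3            # pixel sampling interval (99.571 kHz)
beta_true = 0.002943e6          # 1/s (0.002943 us^-1, a fitted decay constant)
t = np.arange(512) * dt
h = np.exp(-beta_true * t)      # synthetic normalised decay samples

beta_est = -np.polyfit(t, np.log(h), 1)[0]
assert abs(beta_est - beta_true) / beta_true < 1e-6
```

With real, noisy samples the same fit applies after restricting to the portion of the decay above the noise floor.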

Figure 7. Time analysis of the logarithmic CMOS pixel with curve fitting (I = 0.841, 0.0841 and 0.00841 W·m−2; fitted rise constants βR = 0.07397, 0.01386 and 0.003066 µs−1; fitted decay constants βD = 0.01978, 0.002943 and 0.0005565 µs−1; ∆T = 10.5 × 10−6 s). Pixel decay dominates the time response.

Figure 8. Frequency analysis of the logarithmic CMOS pixel, normalised pixel RMS with −3 dB cut-off (fitted time constants τ = 17.11 × 10−6 s, 171 × 10−6 s and 2.068 × 10−3 s for the same three intensities). Cut-off frequency increases as light intensity increases.

The decrease in the digitised output voltage of the pixel with increasing frequency was modelled using the low-pass filter characteristic

    H(f) = 1 / √(1 + (2 · π · f · τ)²),    (8)

where τ is the time constant of the system, determining the −3 dB cut-off frequency, and f is the frequency of the input, i.e. the laser diode modulation. In both these analyses the pixel sampling rate was 99.571 kHz, and in the time analysis the pixel was sampled 1024 times.
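From (8), the time constant fixes the −3 dB cut-off at fc = 1/(2·π·τ); a quick numerical check, taking one of the fitted time constants (17.11 µs) as an example:

```python
import math

# First-order low-pass response of Eq. (8): |H(fc)| = 1/sqrt(2) at the
# -3 dB cut-off fc = 1/(2*pi*tau). tau is one fitted time constant.
tau = 17.11e-6  # seconds

def H(f):
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f * tau) ** 2)

f_c = 1.0 / (2.0 * math.pi * tau)            # ~9.3 kHz for this tau
assert abs(H(f_c) - 1.0 / math.sqrt(2.0)) < 1e-12
assert abs(20.0 * math.log10(H(f_c)) + 3.0103) < 0.01   # ~ -3 dB
```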

2.2. Full-field optical coherence tomography of a rough aluminium sample

2.2.1. Optical setup and sample specification

In a relatively simple optical setup (Fig. 4) integrating a low coherence light source with a Michelson interferometer, full-field optical coherence tomography of a rough aluminium surface was performed. The light source∗ was a super-luminescent diode supplying 3.6 mW of power with a central wavelength of 830 nm and a full-width-at-half-maximum bandwidth of 14 nm. The convex lens (L1) collimated the light to a beam diameter of 5 mm. The camera objective (CO) allowed optical zoom and focusing of the sample image onto the CMOS sensor. The three notable aspects of the setup are the off-centred camera, the polariser, and the rough reference surface. These measures were taken to maximise the SNR of the CMOS sensor, that is, to achieve a higher visibility or higher AC/DC ratio of the light intensity components, as discussed in the previous section. The camera was off-centred to prevent DC light reflections from the beam splitter. For each pixel there is a given orientation of the polariser that maximises the visibility, and hence the presence of the fixed polariser improves the SNR for some pixels. The rough reference surface, i.e. the same as the sample surface, facilitated a maximum visibility. Depth scanning was facilitated by axially translating the sample using an electromechanical mini-shaker† driven by a LabVIEW generated triangle waveform at a speed of 120 µm/s. The LabVIEW interface also triggered camera sampling using one digital input port of the camera‡. A customised program was uploaded to the camera, accessing and processing a 64 × 1 pixel region of interest and sampling 4096 times at a rate of 1900 frames per second (four samples per period of the optical carrier) during one depth scan of the sample. The result was a two-dimensional data-set corresponding to a B-scan (X-Z cross-section) of the sample.
A 64 × 30 pixel two-dimensional region of interest was sampled 256 times at a rate of 235 frames per second during one depth scan of the sample. This corresponded to an under-sampling at a rate of 0.125 times the Nyquist frequency of

∗ OSLD-82-HP1, Opto-Link Corporation Ltd.
† Type 4810, Brüel & Kjær.
‡ iMVS-155, AKAtech SA.

Figure 9. Time and frequency analyses comparison of the experimental frequency and time estimates against the theoretical (iMVS-155 datasheet) pixel response time over light intensity. Pixel response time decreases as light intensity increases.

Figure 10. Time analysis of the CMOS-DSP camera noise (twelve repeated measurements, s1–s12, sampled at fs = 95.751 kHz; mean RMS s̄ = 1.1572 ADC levels, σs = 0.098135). The noise is independent of light intensity over a 22 dB range.

the optical carrier. For both the B-scan and the full-field scan all data was stored in the internal memory of the camera and was downloaded to the external computer for presentation. The sample (Fig. 5) was a rough aluminium surface of two concentric circular steps, machined to a depth of ∼100 µm with a rough surface (≫ λSLD) finish. The camera was focused on the lower left quadrant of the 5 mm radius step and a ROI of 64 × 30 pixels was chosen on the CMOS sensor to overlap the step transition. The sample was chosen to mimic the very low reflectivity of a typical non-specular surface, such as a semiconductor or a rough metallic surface common in industrial environments.

2.2.2. Signal post-processing

Knowing that an OCT signal has properties analogous to an amplitude modulated signal, i.e. a low frequency intensity envelope modulated by a high frequency optical carrier, a suitable post-processing method was used. The experimental signal was first band-pass filtered and then the corresponding analytic signal was calculated. The obtained envelope was low-pass filtered to further increase the SNR. The band-pass filter used was flat-top, and its rising and falling edges were built using the formula

    W(f) = f − (2/3) · (sin(2 · π · f) / π) − sin(4 · π · f) / (8 · π).    (9)

The impulse response of this filter has a roll-off of 1/t⁶ and thus vanishes rapidly. These filtering techniques enabled easier detection of the envelope, but to determine where the point of path length matching occurred, and hence reconstruct a surface visualisation, peak detection was applied. To illustrate the real-time signal processing functionality of the camera's DSP, a simple envelope detection algorithm was implemented. The experimental signal was band-pass filtered using a 78-tap FIR filter, and the absolute value of the filtered signal was displayed, hence the dark spots in the live camera images.
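The envelope-recovery chain described above can be sketched as follows; the analytic signal is built directly with the FFT (equivalent to a Hilbert-transform approach), and the test signal is a toy Gaussian envelope on a carrier, not experimental data.

```python
import numpy as np

# Sketch of envelope recovery for an OCT A-scan: the envelope is the
# magnitude of the analytic signal. Toy signal: Gaussian coherence
# envelope on an optical carrier at a quarter of the sampling rate.
n = 4096
z = np.arange(n)
envelope = np.exp(-((z - n / 2) / 200.0) ** 2)
carrier = np.cos(2 * np.pi * 0.25 * z)
signal = envelope * carrier

def analytic(x):
    """Analytic signal via FFT: zero the negative-frequency half."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:len(x) // 2] = 2.0
    h[len(x) // 2] = 1.0
    return np.fft.ifft(X * h)

recovered = np.abs(analytic(signal))
peak = int(np.argmax(recovered))      # path-length-matched depth sample
assert abs(peak - n // 2) <= 2
assert float(np.max(np.abs(recovered - envelope))) < 0.05
```

In the paper's processing chain, band-pass filtering precedes this step and the recovered envelope is additionally low-pass filtered before peak detection.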

3. RESULTS AND DISCUSSION

The digitised output voltage of the CMOS-DSP camera was found to have a logarithmic response to light intensity (Fig. 6), with a range controlled by the internal camera parameter 'gain' having a maximum of 120 dB. The camera model, expressed in (2) and graphed for each value of gain, correlates with the experimental data. The time domain analysis of the pixel time response is shown (Fig. 7) with Matlab curve fitting using equations (6) and (7). It is evident that the pixel responds faster as light steps from low to high intensity, i.e.

Figure 11. Spectral analysis of the CMOS-DSP camera noise, obtained by applying a Fourier transform to the data of Figure 10 (power spectral density over 0–10 kHz).

Figure 12. Unprocessed 64 × 1 pixel B-scan of the aluminium surface (depth samples ∆z ≈ 1.25 µm; lateral pixels ∆x = 8.0 µm). Fixed pattern noise is present, but the step definition is unobservable.

the CMOS pixel responds faster with increasing light intensity. In the frequency analysis (Fig. 8), shown with Matlab curve fitting using equation (8) it is clear that the pixel has a low-pass filter characteristic with a −3 dB cut-off frequency. It is evident that as light intensity increases so does the cut-off frequency of the pixel, i.e. the CMOS pixel can respond to higher frequencies with increasing light intensity. The time and frequency analyses of the CMOS pixel demonstrate the same fact, i.e. the pixel responds faster at high intensities and the experimental analyses correlated with theoretical data (Fig. 9). This pixel property places a restriction on fast imaging with the CMOS sensor, as it is always the minimum intensity of the signal that determines what the maximum frame rate can be. Hence, although it is possible to address small ROIs very fast, the light intensity and pixel response time must be considered to achieve good image quality. The noise of the camera is significant (Fig. 10), with an experimentally measured value of 1.1572 RMS levels of the 8-bit ADC with gain set to zero. This corresponds to a value of 0.95 dB on the 120 dB range of the sensor. The noise is independent of light intensity, verified over a 22 dB range of the sensor dynamic range. The noise spectral plots (Fig. 11) reveal a uniform spectrum with some low frequency component relating to the pixel settling to the DC intensity value. With an interferometric signal, as in equation (3), it is typical to increase the source intensity and hence increase the interferogram, i.e. the signal of interest. However, this is not the case when detecting with a logarithmic sensor, as shown in equation (5). Increasing the source intensity increases only the DC intensity of the interference signal, and coupled with the logarithmic response decreases the SNR. The frequency of the optical carrier is determined by the velocity of the depth scanning and fast imaging demands fast scanning. 
Due to the pixel speed of response being dependent on light intensity, a level of DC intensity is required to permit fast imaging. A fundamental trade-off exists between signal-to-noise ratio, imaging speed and source intensity when using a logarithmic sensor response in OCT, and it can be expressed as

    SNR · vs · PS = k (constant),    (10)
where vs is the velocity of the scanning arm and PS is the source power. In the OCT measurements the lateral resolution, determined by the pixel size and camera zoom, was 14 µm, and the axial resolution, related to the coherence length of the source, was 22 µm. The unprocessed data-set (Fig. 12) shows little definition, but fixed pattern noise, i.e. a pixel value offset variation fixed in time and easily corrected by offset subtraction, is evident. The post-processed data-set (Fig. 13) features an SNR increase of approximately a factor of 10. The real-time processed and displayed camera images (Fig. 14) correspond to the location of an interference envelope, i.e. a reflective surface, as the sample was scanned in depth. The dark spots are related to the algorithm used, and better surface location/definition would have been achieved by peak detection.
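The offset-subtraction correction for fixed pattern noise mentioned above amounts to removing each pixel's temporal mean over the depth scan; a minimal sketch with synthetic per-pixel offsets:

```python
import numpy as np

# Fixed pattern noise is a per-pixel offset that is constant in time, so
# subtracting each pixel's mean over the depth scan removes it. Shapes
# follow the 64-pixel B-scan (4096 depth samples x 64 lateral pixels).
rng = np.random.default_rng(0)
offsets = rng.integers(-5, 6, size=64).astype(float)       # fixed offsets
frames = rng.normal(0.0, 1.0, size=(4096, 64)) + offsets   # depth x pixels

corrected = frames - frames.mean(axis=0)   # per-pixel offset subtraction
assert np.allclose(corrected.mean(axis=0), 0.0, atol=1e-9)
```

Because the AC interference term is zero-mean, this correction leaves the OCT signal itself untouched.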

Figure 13. Post-processed 64 × 1 pixel B-scan of the aluminium surface (depth samples ∆z ≈ 1.25 µm; lateral pixels ∆x = 8.0 µm). The SNR improvement over Figure 12 is approximately a factor of 10.

Figure 14. Real-time camera images of the aluminium step surface (X-pixels, depth in µm). Better definition would be achieved by peak detection.

Peak detection of the post-processed interference envelope allowed the three-dimensional surface visualisation (Fig. 15). The full-field OCT image of the aluminium surface relates to a volumetric measurement of 875 µm × 410 µm × 150 µm. The post-processed surface visualisation is presented (Fig. 16) as a C-scan (X-Y en face) where the circular step shape is evident.
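The peak-detection step that turns the post-processed volume into a surface map reduces to an argmax along depth for every ROI pixel; the sketch below uses a synthetic two-level step with dimensions mirroring the 64 × 30 ROI, not the measured data.

```python
import numpy as np

# Surface reconstruction by peak detection: for each ROI pixel, the depth
# index of the envelope maximum gives the surface height. A synthetic
# two-level "step" stands in for the aluminium sample.
ny, nx, nz = 30, 64, 256
depth_of_surface = np.where(np.arange(nx) < nx // 2, 60, 180)  # step profile

z = np.arange(nz)[:, None, None]                      # (nz, 1, 1)
envelopes = np.exp(-((z - depth_of_surface[None, None, :]) / 8.0) ** 2)
envelopes = np.broadcast_to(envelopes, (nz, ny, nx))  # (nz, ny, nx) volume

height_map = envelopes.argmax(axis=0)   # (ny, nx) surface topology
assert height_map.shape == (ny, nx)
assert np.all(height_map[:, : nx // 2] == 60)
assert np.all(height_map[:, nx // 2 :] == 180)
```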

4. CONCLUSIONS AND FUTURE WORK

A detailed description of a CMOS-DSP camera has been reported. The pixels of the camera were found to have a logarithmic response to light intensity over 120 dB. The response time of the pixel was established as being dependent on the light intensity, with a longer response time at lower intensities. Extensive modelling of the camera response to light intensity and pixel time response has been presented and all models correlate with experimental data. A qualitative analysis of the camera noise highlighted that the noise level corresponds to 0.95 dB of the 120 dB sensor dynamic response. The implications and trade-offs of the logarithmic response and pixel response times have been discussed. Simple post-processing techniques illustrated an SNR improvement by a factor of 10; furthermore the camera demonstrated real-time signal processing capabilities through filtering and under-sampling. Full-field OCT of a rough aluminium surface was demonstrated without electromechanical lateral scanning, using electronic pixel scanning through region of interest sampling. The relatively simple and inexpensive optical setup offers a versatile and functional autonomous machine vision technique, suitable for non-invasive three-dimensional industrial applications. Although the direct read-out feature of the CMOS-DSP camera has proven to be a novel and viable alternative to electromechanical lateral scanning, the restrictions imposed by the logarithmic response and pixel response time limit the imaging capabilities of the camera to surface topology. Future work will investigate alternatives to the analogue electromechanical axial scanning, to offer truly digital and functional three-dimensional machine vision.

ACKNOWLEDGMENTS This project is supported by the University of Limerick Foundation and Enterprise Ireland International Collaboration grant. The work was undertaken within the framework of a Collaboration Agreement (No. 18697-2001-11 SOSC ISP IE) between the University of Limerick, Ireland, and the Institute for Health and Consumer Protection, EC DG-JRC, Italy. The authors thank Alessio Munari for his assistance in the profilometer measurements of the aluminium step sample.

Figure 15. Three-dimensional surface visualisation of the 64 × 30 pixel ROI (X- and Y-pixels, 1 pix = 8 µm; depth, 1 sample ≈ 1.25 µm). The volumetric camera data-set was post-processed and peak detection determined the surface topology.

Figure 16. Reconstructed en face tomograph (X-Y) of the rough aluminium surface. The 5 mm circular step profile is easily distinguishable.

REFERENCES
1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, pp. 1178–1181, Nov. 1991.
2. A. Unterhuber, B. Povazay, K. Bizheva, B. Hermann, H. Sattmann, A. Stingl, T. Le, M. Seefeld, R. Menzel, M. Preusser, H. Budka, C. Schubert, H. Reitsamer, P. K. Ahnelt, J. E. Morgan, A. Cowey, and W. Drexler, “Advances in broad bandwidth light sources for ultrahigh resolution optical coherence tomography,” Physics in Medicine and Biology 49, pp. 1235–1246, 2004.
3. B. Hermann, E. J. Fernández, A. Unterhuber, H. Sattmann, A. F. Fercher, W. Drexler, P. M. Prieto, and P. Artal, “Adaptive-optics ultrahigh-resolution optical coherence tomography,” Optics Letters 29:18, pp. 2142–2144, 2004.
4. C. Dunsby, Y. Gu, and P. M. W. French, “Single-shot phase-stepped wide-field coherence gated imaging,” Optics Express 11, pp. 105–115, Jan. 2003.
5. M. Roy, P. Svahn, L. Cherel, and C. J. R. Sheppard, “Geometric phase-shifting for low-coherence interference microscopy,” Optics and Lasers in Engineering 37, pp. 631–641, 2002.
6. M. Akiba, K. P. Chan, and N. Tanno, “Full-field optical coherence tomography by two-dimensional heterodyne detection with a pair of CCD cameras,” Optics Letters 28:10, pp. 816–818, May 2003.
7. A. Dubois, G. Moneron, K. Grieve, and A. C. Boccara, “Three-dimensional cellular-level imaging using full-field optical coherence tomography,” Physics in Medicine and Biology 49, pp. 1227–1234, Mar. 2004.
8. S. Bourquin, P. Seitz, and R. P. Salathé, “Optical coherence tomography based on a two-dimensional smart detector array,” Optics Letters 26:8, pp. 512–514, Apr. 2001.
9. H. Helmers and M. Schellenberg, “CMOS vs. CCD sensors in speckle interferometry,” Optics and Laser Technology 35, pp. 587–595, Nov. 2003.
10. J. Little, “Random access CMOS imaging technology applied to an industrial machine vision problem,” in IEE Seminar on On-line Monitoring Techniques for the Off-Shore Industry, pp. 9/1–9/2, 1999.
11. S. Fischer, N. Schibli, and F. Moscheni, “Design and development of the smart machine vision sensor (SMVS),” in Proc. SPIE Advanced Focal Plane Arrays and Electronic Cameras II, pp. 186–192, (Zurich, Switzerland), May 1998.
12. S. Kavadias, B. Dierickx, D. Scheffer, A. Alaerts, D. Uwaerts, and J. Bogaerts, “A logarithmic response CMOS image sensor with on-chip calibration,” IEEE Journal of Solid-State Circuits 35, pp. 1146–1152, Aug. 2000.
13. M. V. Aguanno, F. Lakestani, M. P. Whelan, and M. J. Connelly, “Single pixel carrier based approach for full-field laser interferometry using a CMOS-DSP camera,” in Proc. SPIE Detectors and Associated Signal Processing, pp. 304–312, (St. Etienne, France), Oct. 2003.
14. J. Jurkovic, M. Korosec, and J. Kopac, “New approach in tool wear measuring technique using CCD vision system,” International Journal of Machine Tools & Manufacture, 2005.
15. A. Devillez, S. Lesko, and W. Mozer, “Cutting tool crater wear measurement with white light interferometry,” Wear 256, pp. 56–65, 2004.
16. T. G. Dawson and T. R. Kurfess, “Quantification of tool wear using white light interferometry and three-dimensional computational metrology,” International Journal of Machine Tools & Manufacture 45, pp. 591–596, 2005.