Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically-loaded beam

Hervé Lahamy1*, Derek Lichti1, Mamdouh El-Badry2, Xiaojuan Qi1, Ivan Detchev1, Jeremy Steward1 and Mohammad Moravvej2

1 Department of Geomatics Engineering, University of Calgary, 2500 University Drive N.W., Calgary, Alberta, T2N 1N4, Canada
2 Department of Civil Engineering, University of Calgary, 2500 University Drive N.W., Calgary, Alberta, T2N 1N4, Canada

ABSTRACT

Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper evaluates the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading under laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of reinforcing concrete beams with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam were estimated for the entire top surface of the beam and at witness plates attached to it. The results were assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimetre accuracy once the image distortions have been modeled and removed.

Keywords: Time-of-flight cameras, Calibration, Scattering, Deflection measurement, Accuracy assessment.

1. INTRODUCTION

The wide variety of applications using time-of-flight cameras includes mobile robotic search and rescue [1, 2], gesture recognition [3, 4], and monitoring of construction workers' posture and movements for health-related purposes in their workplace [5]. Nitsche et al. developed a new method for high-resolution topographic measurements in small and medium scale field sites using range cameras [6]. While several papers report on the calibration of time-of-flight cameras, highlighting the evaluation and removal of the systematic errors affecting the captured data [7, 8], others focus on their ability to measure a given distance accurately [9]. However, very few papers evaluate the ability of these sensors to accurately image a moving surface. Concrete beams are integral components in the construction of civil infrastructure such as bridges. Years of traffic overloading and insufficient maintenance have left infrastructure systems in a poor state of repair. Many options for their reinforcement exist, such as adding fiber-reinforced polymer composites and steel plates to the structural elements. The efficacy of such methods is evaluated through fatigue load testing, in which cyclic loads are applied to individual beams under laboratory conditions. This requires the measurement of deflection in response to the applied loading conditions. Many optical sensors such as digital cameras and laser scanners have been used for the measurement of beam deflection. Janowski et al. have demonstrated the efficiency of terrestrial laser scanners for diagnosing deterioration in reinforced concrete beams [10]. In [11, 12], the authors make use of multiple synchronized digital cameras and projectors for measuring the deflection of a concrete beam under both static and dynamic loading conditions. Several experiments implemented in [13] demonstrate the capability of the SR4000 range camera to sense periodic surface movement with half-millimetre accuracy. The Microsoft Kinect 1.0 sensor has also demonstrated its capability for accurately sensing dynamic motion [14]. However, in the past this deflection was measured at several witness plates attached to the concrete beam. This essentially point-wise measurement mode was adopted simply to overcome the problem of occlusion of 50% of the beam surface by a spreader beam needed to distribute the applied load. Even though the top surface of the concrete beam was of primary interest, the deflections were only estimated at the aluminum witness plate locations. In the current experimental setup, no spreader beam was used and the actuator only covers about 25 cm of the 3 m long beam. This allows nearly the entire top surface of the beam to be imaged. Deflections were thus measured for the beam top surface as well as at the witness plates (as a check) from time-of-flight camera imagery captured by two cameras: a Microsoft Kinect 2.0 and a Mesa Imaging SR4000. Accuracy was assessed by comparing the results with those of displacement transducers set up beneath selected witness plates. Maas and Hampel have reported that accurate concrete beam deflection measurements can be performed with laser displacement sensors with high geometric precision, accuracy and reliability [15]. The remainder of the paper is organized as follows: Section 2 presents the different steps of the methodology. Section 3 describes the experiment setup as well as the different datasets collected. Section 4 presents the estimated deflections and their evaluation. The final section provides the conclusions of the study.

* Corresponding author.

Videometrics, Range Imaging, and Applications XIII, edited by Fabio Remondino, Mark R. Shortis, Proc. of SPIE Vol. 9528, 95280V · © 2015 SPIE · CCC code: 0277-786X/15/$18 · doi: 10.1117/12.2184838

Proc. of SPIE Vol. 9528 95280V-1

2. METHODOLOGY

Time-of-flight cameras allow acquisition of 3D point clouds at video frame rates. In contrast to stereo cameras, where 3D information is obtained from overlapping images, time-of-flight cameras produce a 3D point cloud for every frame acquired from a single sensor. Both range and amplitude images are captured simultaneously, and the range information is used to generate the 3D coordinates for every pixel. The SR4000 has a resolution of 176 × 144 pixels and produces images at a rate of up to 54 frames per second. The Kinect 2.0 generates images with a resolution of 512 × 424 pixels at a rate of 30 frames per second; it also includes a built-in color camera and a microphone. Lange and Seitz [16] provide an exhaustive explanation of time-of-flight camera operational principles. The methodology used to reconstruct the moving surface from time-of-flight camera data includes the following steps: correction of the distortions in the acquired images; segmentation of the region of interest; and estimation of the deflections by image differencing. The image differencing operation is performed at each pixel location by subtracting the zero-load image data from a dataset acquired with the beam subjected to periodic loading. In principle, this operation also removes the systematic effects due to internal scattering and imaging distortions, as they are common to all frames and the scene changes very little over time (e.g. ~6 mm peak-to-peak beam displacement at mid-span). A time-of-flight camera also suffers from significant range distortions due to the scattering artifact caused by secondary reflections occurring between the lens and the image sensor. This biases the depth measurement, thus limiting the use of range imaging cameras for high-precision close-range photogrammetric applications. More details on the scattering effect can be found in [17]. However, the effect, as displayed in Figure 1, can be removed by image differencing.
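The differencing step above can be sketched as follows. This is a minimal illustration with synthetic, SR4000-sized arrays (the function and variable names are ours, not the authors'); it shows why a range bias shared by all frames, such as internal scattering, cancels in the difference.

```python
import numpy as np

def deflection_by_differencing(loaded_range, zero_load_range):
    """Pixel-wise differencing of a range image captured under cyclic
    loading against the zero-load range image. Systematic effects common
    to both frames (e.g. scattering bias) cancel in the subtraction.
    Illustrative helper, not the paper's implementation."""
    # Positive values mean the pixel moved toward the camera.
    return zero_load_range - loaded_range

# Synthetic 144 x 176 frames: a 3 mm sag superimposed on a shared 20 mm bias.
bias = 0.020 * np.ones((144, 176))              # systematic range error (m)
zero_load = np.full((144, 176), 1.700) + bias   # beam top ~1.7 m from camera
loaded = zero_load + 0.003                      # surface sags 3 mm away from camera
deflection = deflection_by_differencing(loaded, zero_load)
```

Because the bias appears in both frames, `deflection` contains only the 3 mm motion (with a negative sign, since the surface moved away from the camera).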
Though lens distortions can also be mitigated by the differencing operation, the SR4000 cameras were calibrated to determine their interior orientation parameters. A detailed report on the calibration methodology can be found in [7]. The 3D coordinates of the points were re-computed using the estimated parameters. The effectiveness of the calibration was evaluated by imaging a flat wall and determining the deviations from the best-fit plane. Figure 2 shows a significant reduction (of several centimetres) in the data distortions after lens distortion correction.
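The flat-wall check can be scored as deviations from a least-squares best-fit plane. The sketch below (our own illustrative code, under the assumption of a plane parameterized as z = ax + by + c) fits the plane with linear least squares and returns the residuals used to judge calibration quality.

```python
import numpy as np

def plane_residuals(points):
    """Deviations of an n x 3 point cloud from its least-squares best-fit
    plane z = a*x + b*y + c. Illustrative only; not the authors' code."""
    x, y, z = points.T
    design = np.column_stack([x, y, np.ones_like(x)])   # [x, y, 1] per point
    coeffs, *_ = np.linalg.lstsq(design, z, rcond=None) # solve for (a, b, c)
    return z - design @ coeffs                          # signed deviations

# Synthetic wall scan: a tilted plane 2 m away plus 1 mm Gaussian noise.
rng = np.random.default_rng(42)
xy = rng.uniform(-1.0, 1.0, size=(2000, 2))
z = 0.05 * xy[:, 0] - 0.02 * xy[:, 1] + 2.0 + rng.normal(0.0, 0.001, 2000)
residuals = plane_residuals(np.column_stack([xy, z]))
```

For a well-calibrated camera the residual spread should approach the sensor's random noise level, as it does here for the synthetic 1 mm noise.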


[Figure 1 plot: range error (mm) vs. position along the beam (m) for SR4000 #1 at zero load.]

Figure 1. Scattering error on the top surface of an unloaded concrete beam as imaged by the Mesa Imaging SR4000

[Figure 2 plot: deviations from the best-fit plane (m) before (+) and after (o) calibration; vertical axis spans approximately -0.04 m to +0.06 m.]

Figure 2. Distortions before and after the SR4000 #2 calibration and lens distortion correction.

After correcting the time-of-flight camera data for imaging distortions, the region of interest, i.e. the beam top surface, was extracted from all acquired images. The 3D point cloud of the top surface of the beam was segmented semi-automatically using a bounding box obtained by manually extracting the surface from a selected frame and computing the minimum and maximum coordinate values of the resulting point cloud. The witness plates were also segmented with bounding boxes.
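The semi-automatic segmentation described above amounts to cropping every frame with the min/max bounds of a manually extracted seed point set. A minimal sketch (the function name, array layout and the small padding are our assumptions, not the paper's code):

```python
import numpy as np

def crop_to_bounding_box(cloud, seed, pad=0.005):
    """Keep points of an n x 3 cloud inside the axis-aligned bounding box
    of a manually extracted seed point set, expanded by a small padding.
    Illustrative sketch of the semi-automatic segmentation step."""
    lo = seed.min(axis=0) - pad
    hi = seed.max(axis=0) + pad
    inside = np.all((cloud >= lo) & (cloud <= hi), axis=1)
    return cloud[inside]

# Toy frame: a 3 m beam top surface plus two clutter points.
beam = np.column_stack([np.linspace(-1.5, 1.5, 100),
                        np.zeros(100),
                        np.zeros(100)])
clutter = np.array([[0.0, 0.0, -0.5],   # below the surface
                    [2.0, 1.0, 0.0]])   # beyond the beam end
frame = np.vstack([beam, clutter])
top_surface = crop_to_bounding_box(frame, seed=beam)
```

The same bounding box, computed once from the seed frame, is then applied to every frame of the time series.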


Deflections were estimated by pixel-wise differencing of the point cloud at every frame of the time series captured during the cyclic loading against the point cloud captured at zero load. Since the differencing operation amplifies noise, a filtering operation was performed whereby local statistics of the vertical displacements were computed and used to exclude outlying points. A third-order polynomial derived from beam theory [18] was fit to the resulting vertical component of the displacement point cloud in order to estimate the deflection for each frame. The periodic displacement at any point on the loaded beam can then be automatically reconstructed from a time series of depth measurements. The displacement, h, is modeled as a single-frequency sinusoid whose amplitude and loading frequency are estimated by least-squares adjustment.

h(t) = C cos(2πf0t) + D sin(2πf0t) + E                    (1)

where f0 is the loading frequency; C and D are the amplitude coefficients; and E is the mean value of the time series. The amplitude A of the motion is derived from C and D as follows.

A = √(C² + D²)                    (2)

The reconstruction procedure is described in detail in [11]. The deflections were also estimated at the centroids of the witness plates attached to the concrete beam using the segmentation procedure described in [11] and Equation (1). To evaluate the capability of the time-of-flight cameras to sense the moving beam surface, the deflections obtained from points selected on the beam top surface and from the centroids of the witness plates were both compared with the measurements from the laser transducers. The latter sensors can measure displacements with micrometre precision, but only along a single direction. The vertical distances measured with the laser transducers were likewise modeled using Equation (1), and the estimated motion amplitude and loading frequency were used as benchmarks to assess the parameters obtained from the beam top surface and from the witness plates.
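Equations (1) and (2) can be estimated with a simple least-squares scheme: for a fixed f0 the model is linear in (C, D, E), so a linear solve over a grid of candidate frequencies, keeping the best residual, recovers both f0 and the amplitude. This is a sketch under that grid-search assumption, not the authors' adjustment.

```python
import numpy as np

def fit_motion(t, h, f_grid):
    """Fit h(t) = C*cos(2*pi*f0*t) + D*sin(2*pi*f0*t) + E (Equation 1).
    For each candidate f0 the model is linear in (C, D, E); the f0 with
    the smallest sum of squared residuals is kept. Illustrative sketch."""
    best = (np.inf, None, None)
    for f0 in f_grid:
        design = np.column_stack([np.cos(2 * np.pi * f0 * t),
                                  np.sin(2 * np.pi * f0 * t),
                                  np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(design, h, rcond=None)
        sse = float(np.sum((h - design @ coef) ** 2))
        if sse < best[0]:
            best = (sse, f0, coef)
    _, f0, (C, D, E) = best
    return f0, np.hypot(C, D)   # loading frequency and amplitude A (Equation 2)

# 10 s Kinect-like series at 30 fps: 1.03 Hz motion, 2.5 mm amplitude.
t = np.arange(300) / 30.0
h = 2.5 * np.cos(2 * np.pi * 1.03 * t) + 1.0
f0, A = fit_motion(t, h, np.arange(0.95, 1.10, 0.0005))
```

In practice a finer search or a nonlinear refinement around the best grid point would sharpen the frequency estimate further.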

3. EXPERIMENT DESCRIPTION

The structure studied in this work was a 3 m long concrete beam with a 150 mm × 300 mm rectangular cross-section. It was reinforced internally with steel bars and stirrups and externally with a steel fibre reinforced polymer sheet bonded to the beam soffit over the entire span. Thirteen white-washed thin aluminum witness plates were glued to the side of the beam at an interval of 250 mm along its length and numbered 1 through 13. A hydraulic actuator was used to apply a periodic load at the beam's mid-span. Loads were applied at two frequencies: approximately 1 Hz and 3 Hz. The periodic load was applied following a static loading procedure. Two SR4000s and one Microsoft Kinect 2.0 sensor were mounted on a rigid scaffold assembly approximately 1.7 m above the top surface of the concrete beam. A photogrammetric system of eight digital cameras and two projectors was also installed; however, it is not analyzed in the current paper. Five laser displacement sensors were placed under the centroids of five witness plates along the length of the beam. These active triangulation systems acquired data at 300 Hz at the same time as the other sensors. An additional transducer was set up along the axis of the beam to measure its longitudinal displacement. To check the flatness and horizontality of the top surface prior to the static and dynamic loadings, the HDS6100 terrestrial laser scanner was used to scan the beam top surface. Several circular black-and-white targets were affixed to the floor to allow registration of the different datasets. Figure 3 shows the experiment setup area. Figure 4 schematically highlights the relative position of the time-of-flight cameras with respect to the beam top surface, the witness plates and the laser transducers, as well as the approximate fields of view of the time-of-flight cameras. The experiment lasted a total of seven days, culminating in the failure of the beam due to fatigue.
Several datasets were collected during the load testing procedure. The Kinect 2.0 and the SR4000s were used to collect time series of 10 to 40s


in length on each day. The SR4000 cameras captured data at 23 frames per second and the Kinect 2.0 acquired data at a frame rate of 30 frames per second. A 10 s time series of data captured with the Kinect 2.0 corresponds to 300 images, and a 40 s time series of data with an SR4000 corresponds to approximately 730 images. Data from these sensors were not captured simultaneously, in order to avoid any interference.


Figure 3. Experiment setup

[Figure 4 schematic: fields of view of SR4000 #1, SR4000 #2 and the Kinect over the beam top surface, the witness plates (#1 to #13) and the laser transducers (#1 to #5).]

Figure 4. Relative position of the time-of-flight cameras with respect to the beam top surface, the witness plates and the laser transducers

4. EXPERIMENT RESULTS AND DISCUSSION

From the multiple time series acquired during the cyclic loading, eight datasets were used to reconstruct the beam motion: the first four were captured with the Kinect 2.0 and the other four with one of the SR4000 cameras (#2). For the Kinect 2.0, time series #1 and #3 correspond to a 3 Hz loading frequency, while the other two (time series #2 and #4) were captured when the loading frequency was 1 Hz. For the SR4000 #2, time series #5 and #7 correspond to a 3 Hz loading frequency, and time series #6 and #8 were captured with a loading frequency of 1 Hz. Table 1 shows the reconstructed beam displacement amplitudes and loading frequencies from the entire top surface range camera data (but computed at the witness plate locations), the witness plate range camera data and the laser transducer data for time series #1 and #3. Table 2 shows the corresponding results for time series #2 and #4.

Table 1. Reconstructed displacement amplitudes and loading frequencies for the 3 Hz Kinect 2.0 time series.

|           | Source           | Time series | Witness Plate #7 | Witness Plate #9 | Witness Plate #11 |
|-----------|------------------|-------------|------------------|------------------|-------------------|
| Amplitude | Top Surface      | 1           | 2.832 mm         | 2.333 mm         | 1.195 mm          |
|           | Witness Plate    | 1           | 1.797 mm         | 2.239 mm         | 0.891 mm          |
|           | Laser Transducer | 1           | 2.795 mm         | 2.383 mm         | 1.317 mm          |
|           | Top Surface      | 3           | 2.569 mm         | 2.365 mm         | 1.230 mm          |
|           | Witness Plate    | 3           | 1.960 mm         | 2.253 mm         | 0.867 mm          |
|           | Laser Transducer | 3           | 2.899 mm         | 2.432 mm         | 1.371 mm          |
| Frequency | Top Surface      | 1           | 3.084 Hz         | 3.086 Hz         | 3.084 Hz          |
|           | Witness Plate    | 1           | 3.085 Hz         | 3.083 Hz         | 3.084 Hz          |
|           | Laser Transducer | 1           | 3.086 Hz         | 3.086 Hz         | 3.082 Hz          |
|           | Top Surface      | 3           | 3.111 Hz         | 3.109 Hz         | 3.110 Hz          |
|           | Witness Plate    | 3           | 3.109 Hz         | 3.110 Hz         | 3.119 Hz          |
|           | Laser Transducer | 3           | 3.082 Hz         | 3.082 Hz         | 3.082 Hz          |

Table 2. Reconstructed displacement amplitudes and loading frequencies for the 1 Hz Kinect 2.0 time series.

|           | Source           | Time series | Witness Plate #7 | Witness Plate #9 | Witness Plate #11 |
|-----------|------------------|-------------|------------------|------------------|-------------------|
| Amplitude | Top Surface      | 2           | 2.489 mm         | 2.323 mm         | 1.246 mm          |
|           | Witness Plate    | 2           | 2.009 mm         | 2.247 mm         | 0.959 mm          |
|           | Laser Transducer | 2           | 2.673 mm         | 2.317 mm         | 1.231 mm          |
|           | Top Surface      | 4           | 2.495 mm         | 2.255 mm         | 1.160 mm          |
|           | Witness Plate    | 4           | 1.906 mm         | 2.060 mm         | 0.795 mm          |
|           | Laser Transducer | 4           | 2.850 mm         | 2.379 mm         | 0.529 mm          |
| Frequency | Top Surface      | 2           | 1.029 Hz         | 1.030 Hz         | 1.032 Hz          |
|           | Witness Plate    | 2           | 1.028 Hz         | 1.028 Hz         | 1.034 Hz          |
|           | Laser Transducer | 2           | 1.027 Hz         | 1.027 Hz         | 1.027 Hz          |
|           | Top Surface      | 4           | 1.069 Hz         | 1.068 Hz         | 1.064 Hz          |
|           | Witness Plate    | 4           | 1.067 Hz         | 1.066 Hz         | 1.056 Hz          |
|           | Laser Transducer | 4           | 1.027 Hz         | 1.027 Hz         | 1.027 Hz          |


Tables 3 and 4 summarize the accuracy of the estimated displacement amplitudes and frequencies from the four Kinect 2.0 time series datasets. Though the witness plate method achieves the 0.3 mm to 0.5 mm accuracy expected from previous studies [11], the method of deriving displacement from the entire top surface is more accurate, up to 0.1 mm. The loading frequencies were reconstructed at the expected accuracy for time series #1 and #2, but the frequencies for #3 and #4 are less accurate than expected.

Table 3. Accuracy of the displacement amplitude estimated using the time series captured with the Kinect 2.0

|                | Time series #1 | Time series #2 | Time series #3 | Time series #4 |
|----------------|----------------|----------------|----------------|----------------|
| Top Surface    | 0.0 mm         | 0.1 mm         | 0.2 mm         | 0.1 mm         |
| Witness Plates | 0.5 mm         | 0.3 mm         | 0.5 mm         | 0.3 mm         |

Table 4. Accuracy of the loading frequency estimated using the time series captured with the Kinect 2.0

|                | Time series #1 | Time series #2 | Time series #3 | Time series #4 |
|----------------|----------------|----------------|----------------|----------------|
| Top Surface    | 0.000 Hz       | 0.003 Hz       | 0.028 Hz       | 0.040 Hz       |
| Witness Plates | 0.000 Hz       | 0.003 Hz       | 0.031 Hz       | 0.036 Hz       |

Figure 5 shows examples of the raw measurements and the reconstructed deflections estimated from the top surface and the corresponding witness plate using images captured by the Microsoft Kinect 2.0, together with those from the laser transducer, at the 1 Hz loading frequency. The raw and reconstructed curves agree closely, indicating the effectiveness of the reconstruction methodology. In addition, the three reconstructed curves have approximately the same amplitude and the same period, meaning that they all describe the same motion. The beam motion generated by the 1 Hz loading frequency can be accurately reconstructed from any point on the beam top surface.

[Figure 5 plots: deflection (mm) vs. time t (s) over 0-5 s from (a) the top surface at the witness plate #9 centroid position, (b) the witness plate #9 centroid, and (c) laser transducer #4; raw and reconstructed data overlaid.]

Figure 5. Raw measurements and reconstructed displacements at the 1 Hz loading frequency using the Kinect sensor

Tables 5 and 6 respectively show the reconstructed amplitudes and loading frequencies estimated from the beam top surface, the corresponding witness plates and the corresponding laser transducers for the time series captured with the SR4000 #2.


Table 5. Reconstructed displacement amplitudes and loading frequencies for the 3 Hz SR4000 #2 time series.

|           | Source           | Time series | Witness Plate #7 | Witness Plate #5 | Witness Plate #3 |
|-----------|------------------|-------------|------------------|------------------|------------------|
| Amplitude | Top Surface      | 5           | 2.651 mm         | 2.186 mm         | 1.199 mm         |
|           | Witness Plate    | 5           | 1.690 mm         | 1.588 mm         | 0.851 mm         |
|           | Laser Transducer | 5           | 2.624 mm         | 2.249 mm         | 1.343 mm         |
|           | Top Surface      | 7           | 2.380 mm         | 2.084 mm         | 1.240 mm         |
|           | Witness Plate    | 7           | 1.904 mm         | 1.757 mm         | 0.914 mm         |
|           | Laser Transducer | 7           | 2.899 mm         | 2.378 mm         | 1.374 mm         |
| Frequency | Top Surface      | 5           | 3.078 Hz         | 3.081 Hz         | 3.076 Hz         |
|           | Witness Plate    | 5           | 3.080 Hz         | 3.083 Hz         | 3.079 Hz         |
|           | Laser Transducer | 5           | 3.082 Hz         | 3.083 Hz         | 3.082 Hz         |
|           | Top Surface      | 7           | 3.079 Hz         | 3.078 Hz         | 3.079 Hz         |
|           | Witness Plate    | 7           | 3.076 Hz         | 3.072 Hz         | 3.073 Hz         |
|           | Laser Transducer | 7           | 3.082 Hz         | 3.082 Hz         | 3.082 Hz         |

Table 6. Reconstructed displacement amplitudes and loading frequencies for the 1 Hz SR4000 #2 time series.

|           | Source           | Time series | Witness Plate #7 | Witness Plate #5 | Witness Plate #3 |
|-----------|------------------|-------------|------------------|------------------|------------------|
| Amplitude | Top Surface      | 6           | 2.419 mm         | 2.167 mm         | 1.293 mm         |
|           | Witness Plate    | 6           | 2.108 mm         | 1.617 mm         | 0.906 mm         |
|           | Laser Transducer | 6           | 2.850 mm         | 2.354 mm         | 1.351 mm         |
|           | Top Surface      | 8           | 2.121 mm         | 1.592 mm         | 0.998 mm         |
|           | Witness Plate    | 8           | 0.896 mm         | 1.290 mm         | 0.988 mm         |
|           | Laser Transducer | 8           | 2.673 mm         | 2.321 mm         | 1.299 mm         |
| Frequency | Top Surface      | 6           | 1.028 Hz         | 1.027 Hz         | 1.029 Hz         |
|           | Witness Plate    | 6           | 1.037 Hz         | 1.038 Hz         | 1.028 Hz         |
|           | Laser Transducer | 6           | 1.027 Hz         | 1.027 Hz         | 1.027 Hz         |
|           | Top Surface      | 8           | 1.026 Hz         | 1.023 Hz         | 1.024 Hz         |
|           | Witness Plate    | 8           | 1.039 Hz         | 1.014 Hz         | 1.025 Hz         |
|           | Laser Transducer | 8           | 1.027 Hz         | 1.027 Hz         | 1.027 Hz         |


Tables 7 and 8 show the accuracy of the estimated motion amplitudes and loading frequencies using the four time series datasets captured with the SR4000 #2. Though slightly less accurate than the Kinect 2.0 results, similar trends are visible: the top surface method is more accurate than the witness plate centroid method. The loading frequencies were accurately reconstructed, with an overall accuracy of 0.002 Hz for the top surface method and 0.004 Hz for the witness plate method.

Table 7. Accuracy of the displacement amplitude estimated using the time series captured with the SR4000 #2

|                | Time series #5 | Time series #6 | Time series #7 | Time series #8 |
|----------------|----------------|----------------|----------------|----------------|
| Top Surface    | 0.1 mm         | 0.2 mm         | 0.3 mm         | 0.5 mm         |
| Witness Plates | 0.7 mm         | 0.6 mm         | 0.7 mm         | 1.0 mm         |

Table 8. Accuracy of the loading frequency estimated using the time series captured with the SR4000 #2

|                | Time series #5 | Time series #6 | Time series #7 | Time series #8 |
|----------------|----------------|----------------|----------------|----------------|
| Top Surface    | 0.004 Hz       | 0.001 Hz       | 0.003 Hz       | 0.003 Hz       |
| Witness Plates | 0.001 Hz       | 0.007 Hz       | 0.008 Hz       | 0.001 Hz       |

Figure 6 shows examples of the raw measurements and the reconstructed displacements estimated from the top surface method and the corresponding witness plate using images captured by the SR4000 #2, together with those from the laser transducer, at the 1 Hz loading frequency. While the reconstructed signals from the top surface and the witness plate methods share the same phase, the reconstructed signal from the laser transducer does not; this simply means that the sensors were not synchronized. It is clear, though, that the reconstructed curves describe the same beam motion.

[Figure 6 plots: deflection (mm) vs. time t (s) over 0-5 s from (a) the top surface at the witness plate #7 centroid position, (b) the witness plate #7 centroid, and (c) laser transducer #3; raw and reconstructed data overlaid.]

Figure 6. Raw measurements and reconstructed displacements at the 1 Hz loading frequency using the SR4000 #2

From Table 3 it can be concluded that the Microsoft Kinect 2.0 was able to accurately reconstruct the beam displacement using both the top surface data and the witness plate data. However, from Table 7, using the witness plates to reconstruct the beam motion from the SR4000 #2 images was less successful in terms of accuracy for both loading frequencies considered. This is also noticeable when comparing Figures 5 and 6. It appears that most of the witness plate centroids underestimate the amplitude. Averaging the vertical measurements of the witness plates provides insufficient accuracy compared to the polynomial fitting used for the top surface method. A hypothesis to justify this could be the difference between the number of points describing a witness plate (approximately 300) and the number of points describing the top surface (approximately 9500). In addition, previous studies [11] using the SR4000 have achieved an overall accuracy better than 0.5 mm for the reconstruction from the witness plates when selecting an appropriate sampling frequency and modulation frequency. In this study, an analysis of the absolute reconstruction accuracy with respect to the sampling frequency and the modulation frequency of the SR4000 was not performed, which may explain the lower accuracy obtained. Figure 7 shows the third-order polynomials fit to the segmented point clouds of the beam top surface for six frames of time series #1. The curves describe the deflection of the surface over time; the 0.17 s spanned by these frames is enough to capture the full range of the motion (peak to trough, i.e. half a cycle at the ~3 Hz loading frequency), so the range of deflections represents twice the displacement amplitude. The maximum displacement of 2.820 mm observed at the left side of the figure, which corresponds approximately to the location of witness plate #7, matches expectation (cf. Table 1: estimated amplitude of 2.832 mm).
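The per-frame profile fitting shown in Figure 7 can be sketched with a plain polynomial fit. This is our minimal illustration on synthetic data (names and the toy displacement field are assumptions); the local-statistics outlier filter described in Section 2 is omitted for brevity.

```python
import numpy as np

def deflection_profile(x, dz, order=3):
    """Fit the beam-theory third-order polynomial to the vertical
    displacement component along the beam, one fit per frame.
    Illustrative sketch; outlier filtering is omitted."""
    return np.poly1d(np.polyfit(x, dz, order))

# Toy displacement field: a cubic sag plus ~0.5 mm range noise,
# sampled at roughly the ~9500-point density of the top surface.
rng = np.random.default_rng(7)
x = np.linspace(-0.4, 1.0, 9500)                      # position along the beam (m)
true = -0.002 + 0.001 * x - 0.0005 * x**2 + 0.0002 * x**3
dz = true + rng.normal(0.0, 0.0005, x.size)           # noisy vertical displacements
profile = deflection_profile(x, dz)
```

With thousands of points per frame, the fitted polynomial suppresses the per-pixel range noise, which is consistent with the top surface method outperforming the ~300-point witness plate averages.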

[Figure 7 plot: fitted deflection profiles (m, ×10⁻³) vs. position along the beam (m) for t = 0.300, 0.333, 0.367, 0.400, 0.433 and 0.467 s.]

Figure 7. Dynamic deflections determined from the modeled beam top surface at the 3 Hz loading frequency using the Kinect 2.0 sensor

5. CONCLUSION

The objective of this work was to evaluate the capability of the Microsoft Kinect 2.0 time-of-flight sensor and the Mesa Imaging SR4000 time-of-flight camera for accurately imaging a beam subjected to cyclic loading under laboratory conditions. Whereas in previous studies the deflections were estimated point-wise, here the deflections were reconstructed for the whole beam top surface but evaluated at selected points. In addition, the deflections were also measured from several witness plates attached to the concrete beam and from laser displacement sensors for accuracy assessment. From the eight time series datasets processed, it can be concluded that for both cameras the beam motion could be reconstructed more accurately using the entire beam top surface data than from the witness plate centroids. Indeed, from the first four time series captured with the Kinect 2.0, the overall accuracy of the top surface reconstruction method was 0.1 mm, while the corresponding accuracy from the witness plates was 0.4 mm. For the SR4000, considering the four time series datasets processed, the accuracies of the top surface reconstruction method and of the witness plate method are respectively 0.3 mm and 0.8 mm.


REFERENCES

[1] Bostelman, R., Hong, T., Madhavan, R. and Weiss, B., "3D range imaging for urban search and rescue robotics research," IEEE International Conference on System Safety, Security and Rescue Robotics, Gaithersburg, MD, USA, 164-169 (2005).
[2] Bostelman, R. and Albus, J., "A multipurpose robotic wheelchair and rehabilitation device for the home," Proceedings of the IEEE International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 3348-3353 (2007).
[3] Li, Z. and Jarvis, R., "Real time hand gesture recognition using a range camera," Australian Conference on Robotics and Automation (ACRA), Sydney, Australia, 1-7 (2009).
[4] Lahamy, H. and Lichti, D., "Towards real-time and rotation-invariant American sign language alphabet recognition using a range camera," Sensors 12(11), 14416-14441 (2012).
[5] Gonsalves, R. and Teizer, J., "Human motion analysis using 3D range imaging technology," 26th International Symposium on Automation and Robotics in Construction, Georgia, USA, 76-85 (2009).
[6] Nitsche, M., Turowski, J. M., Badoux, A., Rickenmann, D., Kohoutek, T. K., Pauli, M. and Kirchner, J. W., "Range imaging: a new method for high-resolution topographic measurements in small- and medium-scale field sites," Earth Surface Processes and Landforms 38(8), 810-825 (2013).
[7] Lichti, D. D., Kim, C. J. and Jamtsho, S., "An integrated bundle adjustment approach to range camera geometric self-calibration," ISPRS Journal of Photogrammetry and Remote Sensing 65(4), 360-368 (2010).
[8] Steward, J., Lichti, D. D., Chow, J., Ferber, R. and Osis, S., "Performance assessment and calibration of the Kinect 2.0 time-of-flight range camera for use in motion capture applications," FIG Working Week 2015, Sofia, Bulgaria, 1-14 (2015).
[9] Chiabrando, F., Piatti, D. and Rinaudo, F., "SR-4000 TOF camera: further experimental tests and first applications to metric surveys," International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVIII, Part 5, Commission V Symposium, Newcastle upon Tyne, UK (2010).
[10] Janowski, A., Nagrodzka-Godycka, K., Szulwic, J. and Ziółkowski, P., "Modes of failure analysis in reinforced concrete beam using laser scanning and synchro-photogrammetry," Second International Conference on Advances in Civil, Structural and Environmental Engineering, Zurich, Switzerland, ISBN 978-1-63248-030-9, 16-20 (2014).
[11] Detchev, I., Habib, A., He, F. and El-Badry, M., "Deformation monitoring with off-the-shelf digital cameras for civil engineering fatigue testing," ISPRS Commission V Mid-term Symposium, Riva del Garda, Italy (2014).
[12] Kwak, E., Detchev, I., Habib, A., El-Badry, M. and Hughes, C., "Precise photogrammetric reconstruction using model-based image fitting for 3D beam deformation monitoring," Journal of Surveying Engineering 139(3), 143-155 (2013).
[13] Qi, X., Lichti, D. D., El-Badry, M., On Chan, T., El-Halawany, S., Lahamy, H. and Steward, J., "Structural dynamic deflection measurement with range cameras," The Photogrammetric Record 29(145), 89-107 (2014).
[14] Qi, X., Lichti, D. D., El-Badry, M., Chow, J. and Ang, K., "Vertical dynamic deflection measurement in concrete beams with the Microsoft Kinect," Sensors 14, 3293-3307 (2014).
[15] Maas, H. and Hampel, U., "Photogrammetric techniques in civil engineering material testing and structure monitoring," Photogrammetric Engineering and Remote Sensing 72(1), 39-45 (2006).
[16] Lange, R. and Seitz, P., "Solid-state time-of-flight range camera," IEEE Transactions on Quantum Electronics 37(3), 390-397 (2001).
[17] Jamtsho, S. and Lichti, D. D., "Modelling scattering distortion in 3D range camera," International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVIII, Part 5, Commission V Symposium, Newcastle upon Tyne, UK (2010).
[18] Gordon, S. J. and Lichti, D. D., "Modeling terrestrial laser scanner data for precise structural deformation measurement," ASCE Journal of Surveying Engineering 133(2), 72-80 (2007).
