


SEGMENTATION-BASED SPATIALLY ADAPTIVE MOTION BLUR REMOVAL AND ITS APPLICATION TO SURVEILLANCE SYSTEMS

Sang Kyu Kang*, Ji Hong Min**, and Joon Ki Paik**

*Department of Electrical and Computer Engineering, The University of Tennessee, Knoxville, Tennessee
**Department of Image Engineering, Graduate School of Advanced Imaging Science, Multimedia, and Film, Chung-Ang University, 221 Huksuk-Dong, Dongjak-Ku, Seoul 156-756, Korea
E-mail: [email protected]*, [email protected]**

ABSTRACT

Various image restoration methods, such as regularized iteration and POCS, have been studied for removing space-variant motion blur. The computational complexity of these methods, however, is so high that they can hardly be implemented in real time. In this paper, we present a method that reduces the computational complexity by selecting the region to be restored. The primary application area of the proposed method is a surveillance system, which requires accurate object extraction, identification, and tracking functions. To remove motion blur, we propose a new spatially adaptive regularized iterative image restoration algorithm. Experimental results show that the proposed algorithm can efficiently remove space-variant motion blur with significantly reduced computational overhead.

1. INTRODUCTION

Motion blur occurs in many image formation systems due to the limited performance of both optical and electronic systems. The shutter speed, for example, is the main factor that determines the amount of motion blur. If the camera's shutter speed is slow relative to the object velocity, the captured image is degraded by motion blur. In a surveillance system, this can lead to a fatal failure in detecting important information, such as the license plate of a fast-moving car. For this reason, a high-performance, high-cost camera is generally used for detecting fast-moving objects. To reduce the cost of the surveillance system, we can instead apply a digital image restoration algorithm so that such objects are accurately detected with a low-cost camera. In order to remove motion blur, various image restoration algorithms have been proposed [1, 2, 3]. The most closely related research can be found in [4], which adopts regularized iteration to restore the degraded image and considers a single segment that covers the uniformly moving region. The degraded image in a low-frequency region, however, is already similar to the restored image, because the motion blur can be modeled as a one-dimensional uniform blur. For this reason, we restore only the image regions that contain high-frequency components. In this paper, we propose a selection rule that reduces the computational complexity while preserving acceptable restoration results, together with a segmentation-based spatially adaptive image restoration method. The method is based on a space-variant motion blur model that is divided into a background model and an object model [5, 6]. We also apply this model to a surveillance system.

This research was supported in part by the University Research Program in Robotics under the United States Department of Energy, DOE DE-FG02-86NE37968, and in part by the Brain Korea 21 project under the Korean Ministry of Education.

2. IMAGE DEGRADATION MODEL FOR MOVING OBJECTS

2.1. Motion blur model

The image degradation model is formulated in vector-matrix form as

y = Hx + v,  (1)

where y represents the observed image, x the original image degraded by the matrix H, and v additive noise. The vectors x, y, and v are of size NM x 1 and the degradation matrix H is of size NM x NM, where N and M denote the vertical and horizontal sizes of the image. Generally, v is modeled as zero-mean white Gaussian noise, and H is modeled as a block-circulant matrix if the degradation process occurs in a space-invariant manner.
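The degradation model in (1) can be illustrated with the following Python sketch (not part of the original paper): a horizontal uniform motion blur is applied row by row, and zero-mean white Gaussian noise is added. The blur length and noise level are arbitrary illustrative values.

```python
import numpy as np

def degrade(x, blur_len=7, noise_std=2.0, rng=None):
    """Simulate y = Hx + v for a horizontal uniform motion blur.

    x          : 2-D grayscale image (float array)
    blur_len   : extent of the 1-D uniform blur in pixels (illustrative value)
    noise_std  : standard deviation of the zero-mean white Gaussian noise v
    """
    rng = np.random.default_rng() if rng is None else rng
    psf = np.ones(blur_len) / blur_len                      # 1-D uniform PSF
    # Applying H amounts to convolving every image row with the PSF.
    y = np.apply_along_axis(lambda row: np.convolve(row, psf, mode="same"), 1, x)
    return y + rng.normal(0.0, noise_std, size=x.shape)     # add the noise v
```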





In the case of space-variant motion blur, H is divided into H_o and H_b, which represent the degradation of the object and the background, respectively [5, 6]. Figure 1 shows a simple graphical representation of one-dimensional motion blur.

Fig. 1. The example of 1-D motion blur, showing the background and the moving object at t = 0, 1, and 2 ((a)-(c)), regions 1-3, and the image plane.

In Figure 1, the image plane represents the plane of the imaging sensor array. The blurred region in the object plane in Figure 1 can be regarded as the object at the earlier time instant blurred by motion in the right direction, and the same result is obtained when the object at the later instant is blurred by the symmetric motion. When we formulate the image degradation model for motion blur, we therefore assume symmetric motion blur without loss of generality. The two-dimensional image degradation model can be formulated as

y_i = e_i^T H_o^(i) x_o^(i) + e_i^T H_b^(i) x_b^(i) + v_i,  for i = 1, ..., NM,  (2)

where y_i represents the i-th pixel of the degraded image and e_i the i-th unit vector of size NM x 1, with N and M the vertical and horizontal sizes of the image. In (2), H_o^(i) and H_b^(i) respectively represent the NM x NM degradation matrices for the object and the background corresponding to the i-th pixel, and x_o^(i) and x_b^(i) represent the NM x 1 vectors for the object and the background containing the i-th pixel, respectively. On the right-hand side of (2), the first term, e_i^T H_o^(i) x_o^(i), represents the convolution between the object and the point spread function of the corresponding motion blur, and the second term, e_i^T H_b^(i) x_b^(i), represents the background with the appropriate ratio in the boundary region. The details of this degradation model, illustrated with a single row of an image, have been addressed in previous work [5, 6]. It should be noted that the sum of the object ratio and the background ratio is always unity.

2.2. Selection of regions to be restored

The conventional restoration method finds the homogeneous moving region and applies the restoration algorithm to that entire region. This is a general approach, but it incurs unnecessary computational overhead if the moving object consists mainly of low-frequency components. This is because a low-frequency region, such as a textureless area or the interior of a simple object, remains almost unchanged after 1-D motion blur. The image captured by a surveillance system installed over a road, for example, is blurred mainly around the boundary of the moving object and its interior edges. If we exclude the unchanged regions from the restoration process, we can reduce the computational complexity. The criterion of the selection rule is the standard deviation of a small block taken along the direction of motion of the object. In Figure 1, the area to the left of region 1 and the interior of region 2 have the same gray values after the motion blur. Such a region has a small standard deviation, because the standard deviation measures the spread of the values in each block about their mean. The standard deviation can be formulated as

sigma_mn = [ (1/(PQ)) SUM_{p=1}^{P} SUM_{q=1}^{Q} ( x_mn(p, q) - mu_mn )^2 ]^(1/2),  (3)

where mu_mn represents the mean value of the (m, n)-th block, x_mn(p, q) the (p, q)-th pixel in the (m, n)-th block, and P x Q the size of the block. The block shape is determined by the direction of the motion vector: if an object moves mainly in the horizontal direction, the horizontal length of the block is chosen longer than the vertical length. We apply the restoration method to a block only if its standard deviation is larger than a threshold T, that is,

sigma_mn > T.  (4)

The threshold T can be determined by inspecting the intensity values of the extracted moving-object region.
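The selection rule in (3)-(4) can be sketched in Python as follows (this code is not part of the original paper; the block size and helper names are illustrative assumptions, while the threshold 7.0 matches the value used in the experiments of Section 5):

```python
import numpy as np

def select_blocks(x, mask, motion_axis=0, threshold=7.0, block=(7, 3)):
    """Return a boolean map of the blocks selected for restoration.

    x           : grayscale image
    mask        : boolean map of the extracted moving object
    motion_axis : 0 for vertical motion, 1 for horizontal motion
    threshold   : T in (4); 7.0 is the value used in Section 5
    block       : (long, short) block size, elongated along the motion
                  direction (the actual size in the paper is not recoverable)
    """
    bh, bw = block if motion_axis == 0 else block[::-1]
    selected = np.zeros(x.shape, dtype=bool)
    for r in range(0, x.shape[0] - bh + 1, bh):
        for c in range(0, x.shape[1] - bw + 1, bw):
            if not mask[r:r + bh, c:c + bw].any():
                continue                                 # not part of the moving object
            if x[r:r + bh, c:c + bw].std() > threshold:  # standard deviation of (3)
                selected[r:r + bh, c:c + bw] = True      # selection rule of (4)
    return selected
```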

3. SEGMENTATION-BASED ADAPTIVE ITERATIVE IMAGE RESTORATION

In order to remove the Space-Variant (SV) motion blur based on the proposed degradation model, we use a modified version of the regularized iterative image restoration method proposed in [1]. The Space-Invariant (SI) restoration method can be expressed as

x^(k+1) = x^(k) + beta [ H^T y - (H^T H + lambda C^T C) x^(k) ],  (5)

where x^(k) denotes the restored image at the k-th iteration and beta the step size. This method, however, cannot be used for the restoration of an SV blurred image. In order to remove the SV motion blur, (5) should be modified, for each pixel i included in the segment denoted by R, as

x_i^(k+1) = x_i^(k) + beta e_i^T [ H_R^T y - (H_R^T H_R + lambda C^T C) x^(k) ],  for i in R,  (6)

where h_R represents the Point Spread Function (PSF) of the image segment containing pixel i, and e_i the i-th unit vector of size NM x 1, with N the vertical size of the image; the moving region is divided into a number of different segments, and R denotes the segment containing pixel i. In (6), the degradation matrix H_R is determined by the local PSF of segment R, h_R, as

(H_R x)_i = SUM_m h_R(m) x_(i-m),  (7)

where C is a smoothness constraint and lambda represents the regularization parameter, which controls the trade-off between fidelity to the observed image and smoothness of the restored image. The restoration algorithm given in (6) is shown in Figure 2. Switches 2 and 3 in Figure 2 are used for selecting the degradation process corresponding to each object.

Fig. 2. The space-variant regularized iterative image restoration algorithm.
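For illustration only (not part of the paper), the SI iteration in (5) can be sketched in Python as below; the smoothness constraint C is taken to be a discrete Laplacian, the PSF is applied by convolution, and the parameter values are placeholders apart from the 40 iterations used in Section 5. The SV algorithm in (6) would run the same update per segment, restricted to the selected pixels and using that segment's PSF.

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])

def restore_si(y, psf, lam=0.01, beta=1.0, n_iter=40):
    """Space-invariant regularized iteration of the form (5).

    y      : degraded image
    psf    : 2-D point spread function of the motion blur
    lam    : regularization parameter (placeholder value)
    beta   : step size (placeholder value)
    n_iter : number of iterations (40 is used in Section 5)
    """
    H   = lambda img: convolve(img, psf, mode="nearest")              # apply H
    Ht  = lambda img: convolve(img, psf[::-1, ::-1], mode="nearest")  # apply H^T
    CtC = lambda img: convolve(convolve(img, LAPLACIAN, mode="nearest"),
                               LAPLACIAN, mode="nearest")             # apply C^T C
    x = y.copy()
    for _ in range(n_iter):
        x = x + beta * (Ht(y) - Ht(H(x)) - lam * CtC(x))
    return x
```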

4. SURVEILLANCE SYSTEM WITH MOTION BLUR REMOVAL

To verify the usefulness of the region selection rule, we implemented a system built around an object extraction method with active background subtraction, which can separate the moving object from the background [7]. The shutter speed of the system is used to decide whether the object is blurred or not, because the extent of the motion blur is proportional to the displacement of the object. The relation between the displacement and the extent of the blur can be formulated as

K = (T_s / T_f) |d|,  (8)

where d represents the displacement vector of the object between frames, T_s the shutter speed of the camera, T_f the interval between frames, and K the extent of the motion blur. The entire system is shown in Figure 3. In this system, we assume that the tracking part is implemented, because this paper is mainly concerned with the selection rule that reduces the computational complexity.

Fig. 3. Surveillance system with motion blur detection and removal, consisting of de-interlacing of the input image sequences, background generation, object extraction, tracking, detection of the motion-blurred object with the exception rule, and the space-variant regularized iterative image restoration algorithm. (The tracking process is not implemented yet.)
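As a minimal sketch (not from the paper), assuming (8) takes the form K = (T_s / T_f) |d|, the blur-extent computation and the resulting blur decision can be written as follows; the function names and the minimum blur extent are hypothetical:

```python
def blur_extent(displacement, shutter_speed, frame_interval):
    """K in (8): the portion of the inter-frame displacement swept while
    the shutter is open, i.e. K = (T_s / T_f) * |d|."""
    return (shutter_speed / frame_interval) * abs(displacement)

def needs_restoration(displacement, shutter_speed, frame_interval, min_extent=2.0):
    """Decide whether a tracked object is noticeably motion blurred;
    min_extent (in pixels) is an assumed threshold, not a value from the paper."""
    return blur_extent(displacement, shutter_speed, frame_interval) >= min_extent
```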

5. EXPERIMENTAL RESULTS

The original blurred image is shown in Figure 4(a). The target is the lower car, which moves downward in the image. To extract the moving object, we generated the active background shown in Figure 4(b). The moving object detected with this method is shown in Figure 4(c), and the final segment after selecting the region to be restored is shown in Figure 4(d). The total area to be restored is considerably reduced, so the computational complexity of the restoration process is reduced accordingly. For selecting the region to be restored, we use a block elongated in the vertical direction, because the object moves downward, and 7.0 is used as the threshold T in (4). The final restoration result is shown in Figure 5, where the license plate "5286" becomes recognizable. To obtain the restored image shown in Figure 5(b), the regularized iteration of (6) was run for 40 iterations. The extent of the motion blur is obtained from (8), using the measured displacement of the moving object and the shutter speed of the camera.
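The active-background object extraction step can be illustrated with the following sketch (this is not the VSAM implementation of [7]; the difference threshold and the background-update scheme are illustrative assumptions):

```python
import numpy as np

def extract_object(frame, background, diff_threshold=25.0):
    """Mark moving-object pixels by background subtraction.

    frame, background : grayscale images of the same size
    diff_threshold    : illustrative value; pixels differing from the
                        background by more than this amount are labeled
                        as moving-object pixels.
    """
    return np.abs(frame.astype(float) - background.astype(float)) > diff_threshold

def update_background(background, frame, mask, alpha=0.05):
    """Slowly blend non-object pixels into the background estimate,
    a simple stand-in for the 'active background' generation."""
    bg = background.astype(float)
    bg[~mask] = (1.0 - alpha) * bg[~mask] + alpha * frame.astype(float)[~mask]
    return bg
```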



Fig. 4. Experimental results: (a) the original blurred image, (b) the background generated for moving-object extraction, (c) the segmented result before applying the region selection rule (the lower object is used in the restoration process), (d) the final segment after selecting the region to be restored.

Fig. 5. Experimental results: (a) the real blurred moving object, (b) the restored image after selecting the region to be restored.

6. CONCLUSIONS

In this paper, we proposed a region selection rule that reduces the computational complexity of motion blur removal, together with a segmentation-based spatially adaptive image restoration method. We also showed the relation between the extent of motion blur and the shutter speed: if the shutter speed is known and the motion vector of an object is estimated, we can determine whether a moving object is motion blurred or not. The experimental results show that the image restored with the proposed method has acceptable quality.

7. REFERENCES

[1] A. K. Katsaggelos, "Iterative image restoration algorithms," Optical Engineering, vol. 28, no. 7, pp. 735-748, July 1989.

[2] M. K. Özkan, A. M. Tekalp, and M. I. Sezan, "POCS-based restoration of space-varying blurred images," IEEE Trans. Image Processing, vol. 3, no. 4, pp. 450-454, July 1994.

[3] D. T. Tull and A. K. Katsaggelos, "Iterative restoration of fast-moving objects in dynamic image sequences," Optical Engineering, vol. 35, no. 12, pp. 3460-3469, December 1996.

[4] D. T. Tull and A. K. Katsaggelos, "Regularized restoration of partial-response distortions in sporadically degraded images," Proc. 1998 Int. Conf. Image Processing, vol. 3, pp. 732-736, October 1998.

[5] Y. C. Choung, J. H. Shin, and J. K. Paik, "Object-based analysis of motion blur and its removal by considering occluded boundaries," Proc. Visual Communications and Image Processing, vol. 3653, part 1, pp. 687-697, January 1999.

[6] S. K. Kang, Y. C. Choung, and J. K. Paik, "Segmentation-based image restoration for multiple moving objects with different motions," Proc. 1999 Int. Conf. Image Processing, vol. 1, pp. 376-380, October 1999.

[7] R. Collins, A. Lipton, T. Kanade, H. Fujiyoshi, D. Duggins, Y. Tsin, D. Tolliver, N. Enomoto, and O. Hasegawa, "A system for video surveillance and monitoring: VSAM final report," Technical Report CMU-RI-TR-00-12, Robotics Institute, Carnegie Mellon University, May 2000.