JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 23, 1955-1969 (2007)

Short Paper

An Intelligent Sensor Network for Object Detection, Classification and Recognition

FRANK Y. SHIH, YI-TA WU, CHAO-FA CHUANG, JIANN-LIANG CHEN1, HSI-FENG LU1 AND YAO-CHUNG CHANG2

Computer Vision Laboratory, College of Computing Sciences, New Jersey Institute of Technology, Newark, New Jersey 07102, U.S.A.
1 Department of Computer Science and Information Engineering, National Dong Hwa University, Hualien, 974 Taiwan
2 Department of Information Management, National Taitung University, Taitung, 684 Taiwan

In this paper, an intelligent sensor network is developed for object detection, classification and recognition. We utilize wireless sensors as the first layer to detect the coordinates of moving objects in a secured area. Cameras are then activated to capture image features for object classification and recognition. To reduce processing time, a hierarchical image feature extraction approach is developed. Global object features such as size and motion are acquired to classify objects into a number of classes. If a moving object is considered suspicious, the cameras are requested to capture detailed images for object recognition. Experimental results show that our system achieves a high face recognition rate of 95.4% on testing images captured by the surveillance system.

Keywords: surveillance system, sensor networks, support vector machine, pattern recognition

Received November 11, 2005; revised March 24, July 7 & August 30, 2006; accepted September 27, 2006. Communicated by Pau-Choo Chung.

1. INTRODUCTION

A surveillance system, or closed-circuit television system, is used to maintain close observation of a person or a group [1]. It is widely adopted nowadays to provide a guard with continuous sensing information. Unfortunately, the concurrent observation of several monitors and the exhausting long-term viewing cause the major defect of decaying attention. Consequently, developing an efficient, automated surveillance system is critical to ensure robust security.

Researchers have paid attention to combining surveillance systems with wireless sensor networks (WSNs) in various applications. A wireless sensor network [2, 3] is an emerging field that generates a broad range of applications, including environment monitoring, smart spaces, medical systems, military operations, and robotic exploration. It consists of a large number of distributed mobile sensor nodes that organize themselves into a multi-hop wireless network. Each node has one or more sensors, embedded processors, and a low-power radio, and is normally battery operated. Typically, these nodes cooperate toward a common goal by monitoring phenomena continuously and transmitting the sensory data through the network to the observer.

In the literature, different types of distributed sensor networks for surveillance have been presented, each focusing on a specific function. Since mobile sensing lacks suitable distributed machine learning algorithms that operate on devices with limited computing capability, Talukder et al. [4] proposed a feature extraction algorithm that can transmit sensory data over low bandwidth and still achieve critical classification. Foresti and Snidaro [5] presented a system that can handle heterogeneous sensor data such as optical, infrared, and radar under adverse weather conditions. Chandramohan and Christensen [6] introduced a system using both wired and wireless sensor networks for video surveillance. Chang and Huang [7] presented a prototype of a wireless-sensor-based video surveillance system using hybrid classifiers.

In this paper, we first use wireless sensors as guards to detect the directional, magnetic, and seismic information of any moving object in a secured area by transmitting a few bits of data to the server. The surveillance system directs cameras to monitor the desired areas based on the signals provided by the wireless sensors. Since the sensors can sense the coordinates of moving objects, the cameras can be adjusted to track them for object classification and recognition. If there is no signal from the wireless sensors, the cameras periodically rotate for traditional surveillance. To run the surveillance system efficiently, a hierarchical feature extraction approach is adopted. The cameras extract coarse image features for object classification when vigilant signals from the wireless sensors are received. If the object is classified as a dangerous intrusion (human), the cameras continuously extract fine image features for object recognition. Note that, during the classification and recognition procedures, the system triggers a resend mechanism if the data sent by the sensors are missing or unclear, such that the cameras cannot locate the moving objects or the image features extracted by the cameras are insufficient for making decisions.

This paper is organized as follows. In section 2 we present an overview of the intelligent sensor network system. In section 3 we describe the techniques of the wireless sensor network for object detection. In section 4 we apply coarse image features to object classification. In section 5 we use fine image features for object recognition. We show our experimental results in section 6. We provide comparisons of our system with other systems in section 7. Finally, we draw conclusions in section 8.

2. OVERVIEW OF THE INTELLIGENT SENSOR NETWORK SYSTEM

There are two main stages in the intelligent sensor network (ISN) system: routine and alarm. If no intrusion is detected by the wireless sensors, the system performs routine tasks, such as periodically rotating cameras for detection. If the wireless sensors detect a moving object, the system activates the alarm stage by adjusting cameras toward the area to obtain coarse image features for object classification. If the object is classified as suspicious, the cameras extract fine image features for object recognition. Finally, the system continuously tracks the object and issues an alarm if the object is classified as dangerous. Note that what constitutes a dangerous situation varies with the application.

The system can also be extended to other applications by combining it with different techniques. For example, an ID verification system can be built by combining it with radio frequency identification (RFID) [8] to distinguish unauthorized intruders from authorized people. That is, a person is permitted to enter a secured area if he/she carries an ID card or badge with an RFID tag from which an RFID reader can read the person's identification; otherwise, the system fires an alarm.

The proposed ISN system contains the following three tasks:

(1) The wireless sensors trigger cameras to detect any moving object.
(2) Coarse image features are calculated for object classification.
(3) Fine image features are extracted for object recognition.

The algorithm of our intelligent sensor network system is described below, and its flowchart is shown in Fig. 1. Note that we divide the algorithm into two parts: the wireless sensors and the camera. For simplicity, the "wireless sensors" and the "camera" are denoted as "W" and "C," respectively.

Fig. 1. The flowchart of the ISN system.

Note that the flowchart in Fig. 1 only presents a prototype of the intelligent sensor network. Various types of applications, such as tracing multiple objects and continuously monitoring an object, can be achieved by using a priority queue to record the positions of all the invaders, and utilizing flags to indicate the current condition and related actions. The priority queue is generated from the keep-tracing factor and the distance between a moving object and a camera. Our system pays attention to the closest object first when tracing multiple objects, and continuously monitors an object based on the keep-tracing factor. However, our system may fail in the case of overlapping objects, and might fail to distinguish each individual object from an object group if the distances between multiple objects are too small. That is, the system cannot recognize an occluded object, since the camera cannot capture the images required for further analysis.

The Algorithm of the Intelligent Sensor Network
1. Both the wireless sensors and the camera perform their routine tasks.
   W: Only minimum functions are enabled to detect the location of any moving object.
   C: If there are no specific signals from the wireless sensors, the system continues to perform routine tasks.
2. A moving object is detected by the wireless sensors.
   W: Calculate the location of the intrusion and enable all functions of the sensors for collecting related information.
   C: Rotate the camera based on the signals of the wireless sensors to aim at the suspicious area, and extract coarse image features for object classification.
3. The data fusion system checks whether the received data are sufficient for making a decision. If it needs additional data, go to step 5.
4. The data fusion system checks whether the condition is dangerous (human) or not. If it is dangerous, an alarm is issued; go to step 2 for continuous detection. Otherwise, some functions of the sensors are disabled; go to step 1 for routine tasks.
5. The system determines the required information. If coarse features are required, go to step 2; otherwise, request fine image features from the camera and go to step 6.
6. The system performs object (face) recognition to determine whether the moving object is dangerous. If it is dangerous, an alarm is issued; go to step 2 for continuous detection. Otherwise, some functions of the sensors are disabled; go to step 1 for routine tasks.
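As an illustration of the control flow above, the following sketch shows one way the two-stage loop could be organized in software. It is a minimal sketch, not the authors' implementation: all of the helper objects and methods (poll, routine_rotate, aim_at, the fusion checks, and so on) are hypothetical names introduced here.

```python
def isn_main_loop(sensors, camera, fusion):
    """Sketch of the ISN algorithm: routine tasks until the wireless
    sensors (W) report an intrusion, then camera (C) classification and,
    if needed, recognition, with the resend mechanism of steps 3 and 5."""
    while True:
        event = sensors.poll()                       # step 1 (W): minimal sensing
        if event is None:
            camera.routine_rotate()                  # step 1 (C): routine task
            continue
        sensors.enable_all()                         # step 2 (W): full sensing
        camera.aim_at(event.x, event.y)              # step 2 (C): aim at the area
        features = camera.extract_coarse_features()
        while not fusion.sufficient(event, features):        # steps 3 and 5
            if fusion.needs_coarse(event, features):
                features = camera.extract_coarse_features()  # back to step 2
            else:
                features = camera.extract_fine_features()    # step 6: recognition
        if fusion.is_dangerous(event, features):             # steps 4 and 6
            print(f"ALARM: intrusion near ({event.x}, {event.y})")
        else:
            sensors.disable_nonessential()           # back to routine tasks
```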

3. THE WIRELESS SENSOR NETWORK FOR OBJECT DETECTION

There are two issues in traditional surveillance systems. First, object resolution changes with the varying distance between the object and the camera. This causes a problem in object (face) recognition, since we have to apply masks of different sizes to properly extract object features. Fig. 2 shows an example of different resolutions caused by varying distances. The rectangular boxes indicate the different masks used for recognition.

Fig. 2. An example of different resolutions by varying distances.


Fig. 3. An example of detecting the moving object when the camera is not fixed.

The other issue is that detecting moving objects becomes difficult when the camera is not fixed, as illustrated in Fig. 3, where (a) and (b) are captured while both the person and the camera are moving toward the right-hand side. The difference image in Fig. 3 (c) shows some background appearance because the camera is not fixed. Although Nelson [9] presented two complementary methods to solve this problem, they impose two constraints, on velocity and time-rate-of-change.

In order to solve the above two issues, the wireless sensors are utilized as first-level guards to detect the directional, magnetic, and seismic information of any moving object by transmitting a few bits of data to the server. The surveillance system arranges cameras to monitor assigned areas based on the signals provided by the wireless sensors. Since the sensors can sense the coordinates of moving objects, the cameras can be adjusted to track them for classification and recognition. The sensors trigger the cameras to capture images of the suspicious area. A personal computer is used as the gateway or base station to serve as the interface between the cameras and the WSN, as shown in Fig. 4. The base station receives the packet data from all sensor nodes through a serial port, aggregates the data, and forwards them over the Internet.

Fig. 4. The interface between the camera and wireless sensor networks.

Sensor nodes are deployed at any desired locations, and we build a table to map them to reference coordinates. The captured data are sent wirelessly to the base station only if the reading exceeds a threshold. After all the data from the wireless sensors are received within a specific interval, the gateway finds the centroid of the event and forwards the sensed event over TCP/IP to the camera. The event from the base station triggers the cameras, so that they can be properly rotated toward the suspicious area to capture the moving object. To calculate the centroid of the event, we use Eq. (1), where (X, Y) denotes the coordinates of the centroid of the event and m_i denotes the reading from the sensor node i at (x_i, y_i). Our coordinate system is shown in Fig. 5. For example, when an object appears in the sensing field, the sensors in the area of the dashed circle will detect its presence and transmit acoustic readings to the gateway. These sensor nodes are nodes 4, 5, 8 and 9, and their acoustic readings are m_4, m_5, m_8 and m_9, respectively.


Fig. 5. The coordinate system of the sensing field.

X = (Σ_i m_i x_i) / (Σ_i m_i),    Y = (Σ_i m_i y_i) / (Σ_i m_i).        (1)

Note that the arrangement of sensor positions depends on the environment; in other words, the sensors may be arranged dynamically. Eq. (1) adapts to any kind of sensor arrangement, since the locations of objects are calculated from the signal strengths and coordinates of the sensors. Once the coordinates are obtained, the tilting and panning angles can be determined accordingly, as shown in Fig. 6 (see [10] for details). Fig. 6 (a) shows how the panning angle, θ, is derived using Eq. (2).

θ = tan⁻¹(x / y)        (2)

Fig. 6. An example of determining the tilting and panning angles.

Fig. 6 (b) shows how the tilting angle, ϕ, is derived from the distance d between the moving object and the camera using Eq. (3).

ϕ = tan⁻¹(z / d) = tan⁻¹(z / √(x² + y²))        (3)
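To make the geometry concrete, the sketch below combines Eqs. (1)-(3): it computes the event centroid from weighted sensor readings and then the panning and tilting angles. This is a sketch under stated assumptions; the camera height z and the example readings are illustrative values, not parameters from the paper.

```python
import math

def locate_and_aim(readings, z=2.5):
    """readings: list of (m_i, x_i, y_i), where m_i is the reading of the
    sensor node at (x_i, y_i); z is an assumed camera height in meters.
    Returns the panning and tilting angles in degrees."""
    total = sum(m for m, _, _ in readings)
    X = sum(m * x for m, x, _ in readings) / total    # Eq. (1)
    Y = sum(m * y for m, _, y in readings) / total
    theta = math.atan2(X, Y)                          # Eq. (2): tan^-1(x/y)
    d = math.hypot(X, Y)                              # distance sqrt(x^2 + y^2)
    phi = math.atan2(z, d)                            # Eq. (3): tan^-1(z/d)
    return math.degrees(theta), math.degrees(phi)

# Nodes 4, 5, 8 and 9 of Fig. 5 with made-up acoustic readings:
print(locate_and_aim([(0.8, 1.5, 1.5), (0.6, 3.0, 1.5),
                      (0.5, 1.5, 3.0), (0.3, 3.0, 3.0)]))
```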

The zooming factor is also related to the distance between the moving object and the camera. We built a lookup table that records the zooming factor for different distances, obtained through a series of tests.

A sensed event is composed of four fields: SensorType, SensedValue, and the x- and y-coordinates. The first field indicates the type of sensor that triggered the event; in our implementation, the valid values are magnetometer, microphone, and thermistor. The second field is the sensed value of the event. The third and fourth fields are the x- and y-coordinates of the centroid of the event.

We adopt the Crossbow Mote Mica2 [11] as our sensor platform. The Mica2 motes in our test bed run on two AA batteries; their communication ranges vary greatly with environmental conditions, with an average indoor range of 6-7 m observed during our experiments. The motes run TinyOS, an open-source operating system. The sensor components include an accelerometer, buzzer, light sensor, microphone, magnetometer, thermopile, and thermistor. The accelerometer (Analog Devices, ADXL202JE) is a MEMS surface-micromachined 2-axis, ±2g device. It features a very low current draw (< 1 mA) and a 10-bit resolution, and can be used for tilt detection, movement, vibration, and/or seismic measurement. The sounder is a simple 4 kHz fixed-frequency piezoelectric resonator. The maximum sensitivity of the photocell (Clairex, CL94L) is at a light wavelength of 690 nm; typically, the on-resistance exposed to light is 2 kΩ, and the off-resistance in the dark is 520 kΩ. The microphone circuit has two principal uses: acoustic ranging, and general acoustic recording and management. It can sense people talking at a radius of 6 meters. The magnetometer (Honeywell, HMC1002) circuit can measure the Earth's field and other small magnetic fields; a useful application is vehicle detection, and disturbance tests have been conducted effectively on automobiles at a radius of 15 feet. The thermistor (Panasonic, ERT-JIVR12J) can measure temperatures between −40°C and 70°C; with proper calibration, an accuracy of 0.2°C can be achieved.

To enhance classification, every sensor node is installed with four sensors: the acoustic sensor, the thermopile, the thermistor, and the magnetometer. The thermistor is used to measure the temperature of the cold junction on the thermopile and to calculate the body temperature. Thus, we can detect and trace a moving object even if it moves silently. The acoustic and thermal readings can be combined to verify the detection of the moving object, and metallic information is obtained by the magnetometer. For detecting a moving object, the microphone, the thermopile, the thermistor, and the magnetometer are essential, while the accelerometer, the buzzer, and the light sensor are unnecessary.

Since each sensor node is battery powered, our strategies for power saving in the sensor network are as follows:

(1) Turn off the unnecessary sensors according to the requirements of the application.
(2) Minimize the transmission power level.
(3) Utilize the low-power mode of the sensor node whenever possible.

The proposed WSN is designed to be cluster-based, self-organized [12], secure, and power efficient [13]. The link layer security mechanism and group key management play major roles in our WSNs.
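The four-field sensed event and the threshold-based reporting rule can be summarized in a few lines. This is a sketch only: the threshold values and units are assumptions, and the actual Mica2/TinyOS packet format is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class SensedEvent:
    sensor_type: str     # "magnetometer", "microphone", or "thermistor"
    sensed_value: float  # the reading that triggered the event
    x: float             # x-coordinate of the event centroid
    y: float             # y-coordinate of the event centroid

# Assumed per-sensor thresholds (illustrative values and units only).
THRESHOLDS = {"magnetometer": 0.5, "microphone": 0.2, "thermistor": 30.0}

def maybe_report(sensor_type, value, x, y):
    """Report to the gateway only if the reading exceeds its threshold,
    mirroring the power-saving reporting rule described in the text."""
    if value > THRESHOLDS[sensor_type]:
        return SensedEvent(sensor_type, value, x, y)
    return None   # stay silent to save power and bandwidth
```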

4. COARSE IMAGE FEATURES FOR OBJECT CLASSIFICATION

The hierarchical feature extraction approach is adopted to reduce unnecessary processing. In this section, the procedure of coarse image feature extraction for object classification is presented. As mentioned above, after the signals from the WSN are received, the camera is rotated toward the moving object and the lens focus is adjusted to the desired resolution. After the camera parameters are initialized, a motion filter containing five operations is used to determine candidate windows. Note that this step not only classifies the moving object as human or non-human, but also determines whether the situation is dangerous by combining other information such as metallic detection. The metallic information is obtained by the magnetometer. If a moving object is classified as a human and the metallic signal is strong, we consider the situation dangerous, because the person may be carrying a knife or a gun. The motion filters of Viola et al. [14] are used to detect the moving object from difference images. The algorithm of object classification in our ISN system is described below, and its flowchart is shown in Fig. 7.

Fig. 7. The flowchart of object classification of our ISN.

The Algorithm of the Object Classification of Our ISN
1. Both the wireless sensors and the camera perform their routine tasks.
   W: Only the minimum function is enabled to detect the location of any moving object.
   C: If there are no specific signals from the wireless sensors, the system continues to perform routine tasks.
2. A moving object is detected by the wireless sensors.
   W: Add this record to the queue.
3. Obtain and delete the first record from the queue.
   W: Enable all desired functions of the sensors for collecting related information.
   C: Rotate the camera based on the signals of the wireless sensors to aim at the suspicious area.
4. Obtain candidate windows by motion filters and perform object classification by Viola et al.'s approach [14].
5. If the moving object is a human, the fusion system determines whether the situation is dangerous from the integrated data.
6. If the queue is empty, go to step 1; otherwise, go to step 3.
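The sketch below illustrates the two decisions made in this stage: finding a candidate window from a difference image, and fusing the human/non-human label with the magnetometer reading. It is a simplified stand-in, not the five-operation motion filter of Viola et al. [14]; all thresholds are illustrative.

```python
import numpy as np

def candidate_window(prev_frame, frame, diff_thresh=25, min_pixels=500):
    """Threshold the difference of two grayscale frames and return the
    bounding box of the changed pixels, or None if there is too little
    motion to form a candidate."""
    moving = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh
    ys, xs = np.nonzero(moving)
    if len(xs) < min_pixels:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())

def is_dangerous(classified_as_human, metallic_reading, metal_thresh=0.7):
    """Fusion rule from the text: a human with a strong metallic signal
    (possibly carrying a knife or a gun) is considered dangerous."""
    return classified_as_human and metallic_reading > metal_thresh
```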


5. FINE IMAGE FEATURES FOR OBJECT RECOGNITION

If the moving object is classified as a human, face recognition is performed to identify whether the person is in our database. Note that, in the recognition step, the wireless sensors are only utilized to determine camera parameters for capturing the moving object, so that the size of the object in the candidate window of each frame is approximately the same.

Two kinds of image features are extracted to determine the candidate windows for recognition: the differences of consecutive images and the skin color pixels. Pixels whose color values fall within a predefined range are collected, and a labeling procedure is then applied: pixels belonging to the same connected component are labeled with the same number. By combining the labels with the difference information, the candidate window is obtained. Note that, in order to correctly utilize the differences of consecutive images, the camera needs to be static during the feature extraction procedure; the skin color pixels are determined by analyzing the ranges of Cb and Cr in the YCbCr color space. Fig. 8 shows an example of obtaining the candidate windows for object recognition. Figs. 8 (a) and (b) are consecutive image frames, (c) is the difference image, and (d) shows the skin color pixels. By combining Figs. 8 (c) and (d), the candidate window can be determined, as indicated by the rectangle in Fig. 8 (d).
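A sketch of this candidate-window extraction is shown below: a skin mask in YCbCr is intersected with the frame difference, and connected components are labeled. The Cb/Cr ranges are commonly used textbook values, not the ranges calibrated by the authors.

```python
import numpy as np
from scipy import ndimage

def skin_mask(ycbcr):
    """Skin pixels by Cb/Cr range (assumed values: Cb in [77, 127],
    Cr in [133, 173])."""
    cb, cr = ycbcr[..., 1].astype(int), ycbcr[..., 2].astype(int)
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def candidate_window(ycbcr, prev_gray, gray, diff_thresh=20):
    """Label skin-colored connected components and keep the largest one
    that overlaps the inter-frame difference; return its bounding box."""
    moving = np.abs(gray.astype(int) - prev_gray.astype(int)) > diff_thresh
    labels, n = ndimage.label(skin_mask(ycbcr))
    best = None
    for i in range(1, n + 1):
        region = labels == i
        if not (region & moving).any():       # must coincide with motion
            continue
        ys, xs = np.nonzero(region)
        if best is None or region.sum() > best[0]:
            best = (region.sum(), (xs.min(), ys.min(), xs.max(), ys.max()))
    return best[1] if best else None
```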

Fig. 8. An example of extracting candidate windows for object recognition.

Fig. 9. An example of sub-candidate selection.

Fig. 10. The flowchart of object recognition in our ISN.

Note that all the steps prior to determining the candidate windows are preprocessing procedures. Several sub-candidates around each candidate window are chosen for the recognition procedure using the Support Vector Machine (SVM) [15, 16]. The sizes of the sub-candidates are slightly different from that of the candidate window; Fig. 9 shows an example of sub-candidate selection, and a sketch of generating such windows is given below. The detailed steps for performing object recognition using SVM are presented in the next section. The algorithm of the recognition procedure in our ISN system is described below, and its flowchart is shown in Fig. 10.

The Algorithm of the Object Recognition in the ISN
1. The object is classified as a human, and the recognition procedure is triggered.
   W: Only minimum functions are enabled for continuously detecting the location of the desired object.
   C: Properly adjust the camera parameters to capture the desired object.
2. Extract two image features: the difference of two consecutive frames and the connected components based on skin colors.
3. Determine the candidate windows based on these two image features.
4. Perform the object recognition procedure on these candidate windows using SVM.
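Here is a minimal sketch of the sub-candidate generation mentioned above: windows centered on the candidate but scaled slightly up and down. The scale factors and count are assumptions; the paper does not list its exact values.

```python
def sub_candidates(box, scales=(0.9, 0.95, 1.0, 1.05, 1.1)):
    """box: (x0, y0, x1, y1) candidate window. Returns windows of
    slightly different sizes sharing the same center (cf. Fig. 9)."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    w, h = x1 - x0, y1 - y0
    return [(cx - s * w / 2, cy - s * h / 2,
             cx + s * w / 2, cy + s * h / 2) for s in scales]
```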

After determining the candidate window, we normalize it to a size of 112 × 92. Since lighting conditions vary, histogram equalization is applied to eliminate illumination variations. The wavelet transform [17] is used to extract image features; we use the features from the LL subband of the second-level wavelet decomposition, because it preserves the necessary information while reducing the dimensionality of the computation. After that, 2D-LDA [18] is applied to reduce the features further. Finally, the SVM is used to confirm whether the detected person is in our database. Fig. 11 illustrates the object recognition procedure applied to a candidate window.

Fig. 11. Experimental procedures.

The Algorithm of the Recognition Procedure on the Candidate Window
1. Normalize the detected image to a size of 112 × 92.
2. Apply histogram equalization to eliminate lighting effects.
3. Use the wavelet transform and 2D-LDA to extract principal features and discard the irrelevant parts of the image.
4. Adopt the linear SVM to identify persons in the database. To handle the multi-class problem, we construct tree-based one-against-one SVMs [15, 16].
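The following sketch strings these steps together with off-the-shelf components: histogram equalization, a two-level wavelet decomposition keeping the LL subband, and a linear SVM (scikit-learn's SVC builds one-against-one classifiers for the multi-class case). The Haar wavelet is an assumption, since the wavelet family is not fixed here, and the 2D-LDA reduction [18] and the tree organization of [15, 16] are omitted for brevity.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def extract_features(face):
    """face: 112x92 uint8 image. Equalize the histogram, then keep the
    LL subband of a two-level wavelet decomposition (112x92 -> 28x23)."""
    hist = np.bincount(face.ravel(), minlength=256)
    cdf = hist.cumsum() / face.size
    equalized = (cdf[face] * 255).astype(np.uint8)   # histogram equalization
    ll, _ = pywt.dwt2(equalized, "haar")             # first-level LL: 56x46
    ll, _ = pywt.dwt2(ll, "haar")                    # second-level LL: 28x23
    return ll.ravel()

def train_recognizer(faces, labels):
    """Fit a linear SVM on the wavelet features of the training faces."""
    X = np.stack([extract_features(f) for f in faces])
    return SVC(kernel="linear").fit(X, labels)
```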

6. EXPERIMENTAL RESULTS

In this section, two types of experiments are conducted with our intelligent sensor network: secured laboratory monitoring and parking area monitoring.

6.1 Secured Laboratory Monitoring System

Fig. 12 shows an example of the secured laboratory monitoring system (SLMS). The wireless sensors are arranged along the center line of a gallery, and a camera is installed at the end of the gallery. Adjacent wireless sensors are 1.5 meters apart.


Fig. 12. The structure of the secured laboratory monitoring system.

Fig. 13. An example of the secured laboratory monitoring system.

Fig. 13 gives an example of the SLMS. Fig. 13 (a) shows the moving object detected by the wireless sensors, enclosed by a rectangle. Our system properly zooms in on the object to extract the candidate window. Fig. 13 (b) shows that our system classifies the moving object as a human. Fig. 13 (c) shows the candidate windows obtained by zooming in on the object further. After object recognition, our system identifies the moving object as the person named "Yi-Ta Wu," as in Fig. 13 (d).

6.2 Parking Area Monitoring System

Fig. 14 shows an example of the parking area monitoring system (PAMS). Fig. 14 (a) shows the moving object detected by the wireless sensors, enclosed by a rectangle. Our system properly zooms in on the object to extract the candidate window. Fig. 14 (b) shows that our system classifies the moving object as a human. Fig. 14 (c) shows the candidate windows obtained by zooming in on the object further.

Fig. 14. An example of the parking area monitoring system.

After object recognition, our system identifies the moving object as the person named "Chao-Fa Chuang," as in Fig. 14 (d). Note that, although the network uses multiple cameras to monitor a person, the decision is made based upon one camera; for example, in the PAMS, a camera only analyzes the person who moves toward it, and the priority of selecting the best camera is based on the distance between the moving object and the camera. The strategy for managing multiple cameras can be varied according to the user's requirements.

7. COMPARISONS WITH EXISTING SYSTEMS

In this section, we compare the performance of our system with currently existing systems [4, 5] in terms of reliability, mobility, accuracy, and complexity.

For reliability, we can solve the missing-data problem addressed in [4]: our ISN has a mechanism to request that the sensors resend the information of interest, so data loss is kept to a minimum. For mobility, our system can be deployed in both indoor and outdoor environments. Like [5], our system can also process heterogeneous data, since our sensors have multiple functions installed, such as the microphone and the magnetometer; the sensors send the necessary data according to the camera's request. For accuracy, since the sensors first send the coordinates of moving objects so that the camera can zoom in or out to control the image resolution, the received images are of nearly fixed resolution. In this way, the face candidates are easily captured and unnecessary computation is avoided. Compared with traditional surveillance systems, in which the detected object appears too big or too small depending on the distance between the camera and the object, our system is superior. In addition, we do not need to rescale the image for object recognition; therefore, image distortion and computational time are decreased, and the recognition rate is increased.

The training set we used for face recognition is composed of the ORL database and a database we created ourselves in the style of the ORL database. The ORL database contains 40 persons with 10 face images each, captured at different times and under varying lighting conditions, with different facial expressions (e.g., open/closed eyes, smiling/not smiling) and facial details (e.g., glasses/no glasses). In addition, we captured 10 pictures per person from our lab members in a style similar to the ORL database, and randomly replaced some persons in the ORL database with our members. Therefore, our database also contains 40 persons with 10 face images each, for a total of 400 face images.

We adopted the following validation strategy to test our performance: each time, we use 50% of the database as the training set and test on the remaining 50%, and the resulting classifier is used to recognize the images obtained by the cameras. We repeat the experiment 10 times and average the 10 individual results to obtain the final result. Note that we set up the system environment so that the camera captures images in a style similar to the ORL database, so that the classifier trained on ORL-style images can be used for face recognition.

The objective of object recognition is to identify persons accurately and efficiently. Experimental results show that our system achieves a high recognition rate, 95.4%, in the face recognition procedure. In our experiments, the SVM-based approach works well for detecting and recognizing objects slanted by up to about 15 degrees. After training, our system takes only 0.03 second to process an image of size 112 × 92. The following comparisons are provided as a reference for system performance.
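The repeated 50/50 evaluation can be reproduced with a stratified shuffle split, as sketched below. It reuses the hypothetical extract_features helper from section 5 and assumes the 400 images are already normalized to 112 × 92.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.svm import SVC

def average_accuracy(faces, labels, runs=10):
    """Ten random 50/50 train/test splits (5 training and 5 testing
    images per person), averaged; mirrors the protocol in the text."""
    X = np.stack([extract_features(f) for f in faces])
    y = np.asarray(labels)
    splits = StratifiedShuffleSplit(n_splits=runs, test_size=0.5,
                                    random_state=0)
    scores = [SVC(kernel="linear").fit(X[tr], y[tr]).score(X[te], y[te])
              for tr, te in splits.split(X, y)]
    return float(np.mean(scores))
```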


Table 1. Performances of different methods.

Algorithm           Eigenface   NFL       HMM   CNN
Recognition rate    94.75%      96.875%   95%   96.17%

Our face recognition based on image intensity is compared with other face recognition systems: the eigenface approach, the nearest feature line (NFL) algorithm, the hidden Markov model (HMM), and the convolutional neural network (CNN) method. Guo et al. [19] reported the performance of these methods, as shown in Table 1. Note that their test images are of the same kind as their training images, whereas our testing images, obtained by the surveillance system, are more difficult for face recognition.

8. CONCLUSIONS

By using wireless sensors as guards that detect any moving object, the camera can be properly and quickly rotated to extract the features of moving objects. By comparing the sensing and imaging results, we can determine the situation of the monitored objects. In order to reduce unnecessary processing, a hierarchical feature extraction approach is developed. Experimental results show that our system can achieve a high face recognition rate of 95.4% for the testing images captured by the surveillance system.

REFERENCES

1. W. Hu, T. Tan, L. Wang, and S. Maybank, "A survey on visual surveillance of object motion and behaviors," IEEE Transactions on Systems, Man and Cybernetics − Part C, Vol. 34, 2004, pp. 334-352.
2. I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey on sensor networks," IEEE Communications Magazine, Vol. 40, 2002, pp. 102-114.
3. S. Tilak, N. B. Abu-Ghazaleh, and W. Heinzelman, "A taxonomy of wireless micro-sensor network models," ACM SIGMOBILE Mobile Computing and Communications Review, Vol. 6, 2002, pp. 28-36.
4. A. Talukder, S. Monacos, and T. Sheikh, "Distributed multisensor processing and classification under constrained resources for mobile health monitoring and remote environmental monitoring," in Proceedings of the International Conference on Intelligent Sensing and Information, 2004, pp. 212-217.
5. G. L. Foresti and L. Snidaro, "A distributed sensor network for video surveillance of outdoor environments," in Proceedings of the International Conference on Image Processing, 2002, pp. 525-528.
6. V. Chandramohan and K. Christensen, "A first look at wired sensor networks for video surveillance systems," in Proceedings of the 27th IEEE Conference on Local Computer Networks, 2002, pp. 728-729.
7. C. K. Chang and J. Huang, "Video surveillance for hazardous conditions using sensor networks," in Proceedings of the IEEE International Conference on Networking, Sensing and Control, 2004, pp. 1008-1013.


8. R. Want, "Enabling ubiquitous sensing with RFID," IEEE Computer, Vol. 37, 2004, pp. 84-86.
9. R. C. Nelson, "Qualitative detection of motion by a moving observer," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1991, pp. 173-178.
10. L. Jiao, Y. Wu, G. Wu, E. Y. Chang, and Y. F. Wang, "Anatomy of a multi-camera security surveillance system," ACM Multimedia Systems Journal, Vol. 10, 2004, pp. 144-163.
11. Crossbow Technology Inc., http://www.xbow.com/, 2004.
12. C. A. Lee, Y. C. Chang, and J. L. Chen, "Intelligent self-organization management mechanism for wireless sensor network," in Proceedings of the 16th International Conference on Wireless Communications, 2004, pp. 47-54.
13. H. F. Lu, Y. C. Chang, J. L. Chen, and H. H. Hu, "Power-efficient scheduling method in sensor networks," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 2004, pp. 4705-4710.
14. P. Viola, M. J. Jones, and D. Snow, "Detecting pedestrians using patterns of motion and appearance," in Proceedings of the International Conference on Computer Vision, 2003, pp. 734-741.
15. F. Y. Shih, K. Zhang, and Y. Fu, "A hybrid two-phase algorithm for face recognition," Pattern Recognition and Artificial Intelligence, Vol. 18, 2004, pp. 1423-1435.
16. F. Y. Shih and S. Cheng, "Improved feature reduction in combinational input and feature space," Pattern Recognition, Vol. 38, 2005, pp. 651-659.
17. E. Morales and F. Y. Shih, "Wavelet coefficients clustering using morphological operations and pruned quadtrees," Pattern Recognition, Vol. 33, 2000, pp. 1611-1620.
18. M. Li and B. Yuan, "2D-LDA: a novel statistical linear discriminant analysis for image matrix," Pattern Recognition Letters, Vol. 26, 2005, pp. 527-532.
19. G. Guo, S. Z. Li, and K. L. Chan, "Support vector machines for face recognition," Image and Vision Computing, Vol. 19, 2001, pp. 631-638.

Frank Y. Shih (施永強) received the B.S. degree from National Cheng Kung University, Taiwan, in 1980, the M.S. degree from the State University of New York at Stony Brook in 1984, and the Ph.D. degree from Purdue University, West Lafayette, Indiana, in 1987, all in Electrical Engineering. He is presently a professor jointly appointed in the Department of Computer Science, the Department of Electrical and Computer Engineering, and the Department of Biomedical Engineering at New Jersey Institute of Technology, Newark, NJ. Prof. Shih currently serves on the editorial boards of five international journals. He received a Research Initiation Award from the NSF and best paper awards from journals and conferences. He is a Research Fellow of the American Biographical Institute and a senior member of the IEEE. He has published seven book chapters and over 180 technical papers, including 85 in prestigious journals. His current research interests include image processing, computer vision, sensor networks, pattern recognition, bioinformatics, information security, robotics, fuzzy logic, and neural networks.


Yi-Ta Wu (吳易達) was born in Taipei, Taiwan. He received his Ph.D. degree from the Computer Science Department of New Jersey Institute of Technology, Newark, U.S.A., in May 2005. He is now a postdoctoral researcher at the University of Michigan, Ann Arbor. His research interests include image processing, multimedia security, digital watermarking, pattern recognition, and medical imaging.

Chao-Fa Chuang (莊朝發) received an MBA degree from National Chung Hsing University, Taiwan, and a Ph.D. degree in Computer Science from New Jersey Institute of Technology in January 2006. He is now working as a system analyst at enfoTech & Consulting, Inc. His research interests include image processing, pattern recognition, artificial intelligence, and machine learning.

Jiann-Liang Chen (陳俊良) was born in Taiwan on December 15, 1963. He received his Ph.D. degree in Electrical Engineering from National Taiwan University, Taipei, Taiwan, in 1989. Since August 1997, he has been with the Department of Computer Science and Information Engineering of National Dong Hwa University, where he is now a professor and the department chair. His research interests include wireless sensor networks, cellular mobility management, and personal communication systems.

Hsi-Feng Lu (陸錫峰) received the B.S. and M.S. degrees in Computer Science and Information Engineering from National Chiao Tung University, Hsinchu, Taiwan, R.O.C., in 1987 and 1992, respectively. Since 2003, he has been in the Ph.D. program of the Computer Science and Information Engineering Department, National Dong Hwa University. His research interests include wireless sensor networks, wireless PANs, and home networking.

Yao-Chung Chang (張耀中) is an assistant professor in the Department of Information Management, National Taitung University, Taitung, Taiwan, and serves as the Chief of the Academic Service and Exchange Division, Research and Development Office. He received his Ph.D. degree (2006) in Computer Science and Information Engineering from National Dong Hwa University, Hualien, Taiwan. His main research focuses on network-related topics including IPv4/IPv6 transition, network mobility, RFID, and sensor networks.