Illumination Normalization for Color Face Images

Faisal R. Al-Osaimi, Mohammed Bennamoun, and Ajmal Mian
The University of Western Australia
35 Stirling Highway, Crawley, WA 6009, Australia
{faisal, bennamou, ajmal}@csse.uwa.edu.au

Abstract. The performance of appearance-based face recognition algorithms is adversely affected by illumination variations, and illumination normalization can greatly improve their performance. We present a novel algorithm for the illumination normalization of color face images. The face albedo is estimated from a single color face image and its co-registered 3D image (point cloud). Unlike existing approaches, our algorithm takes into account both Lambertian and specular reflections as well as attached and cast shadows. Moreover, it is invariant to facial pose and expression and can effectively handle the case of multiple extended light sources. The approach is based on Phong's lighting model. The parameters of Phong's model and the number, directions and intensities of the dominant light sources are automatically estimated. Specularities in the face image are used to estimate the directions of the dominant light sources. Next, the 3D face model is ray-cast to find the shadows of every light source. The intensities of the light sources and the parameters of the lighting model are estimated by fitting Phong's model to the skin data of the face. Experiments were performed on the challenging FRGC v2.0 data and satisfactory results were achieved (the mean fitting error was 6.3% of the maximum color value).

1 Introduction

The acquisition of face biometrics is non-intrusive, which makes it suitable for many identification and verification applications, including forensics, security, and access control to buildings and services. High recognition accuracy is crucial for these applications. Unfortunately, variations in lighting conditions significantly degrade the performance of 2D face recognition: the differences between images due to identity may be obscured by variations caused by illumination. Many approaches have been proposed to handle the illumination problem. They fall into three main categories, namely (1) illumination insensitive representations, (2) modeling of illumination variations and (3) illumination normalization to a canonical form. Approaches in the first category are the earliest and the most widely used [3]. In this category, illumination invariant features are generally extracted from the image for better recognition performance. Chen et al. [1] showed that there are


no discriminative illumination invariant functions for objects with a Lambertian surface. Therefore, these features are more appropriately termed illumination insensitive rather than illumination invariant. Another approach in this category extracts edge maps for recognition [4][5]. Edge maps are compact representations, but they lack the important information encoded in intensity shading, and illumination variations may also create spurious edges. One may expect edge maps to work better for object recognition than for face recognition, because human faces all have a similar structure. Derivatives of gray-scale images are also used to overcome the illumination problem [6][7], since they are less sensitive to illumination; however, derivatives are sensitive to noise. Inspired by the fact that the human visual cortex enhances edges, some researchers convolved facial images with Gabor-like filters (which enhance edges) [8][9][10]. However, Adini et al. [2] have shown that edge maps, image derivatives and Gabor-like filters are insufficient to overcome the illumination problem.

The second category, illumination variation modeling, models the face under all possible illuminations. The illumination cone method [11][12] selects three images with constant pose but varying illumination to estimate the set of images under all possible lighting conditions. Ramamoorthi et al. [13] and Basri et al. [14] independently showed that a Lambertian surface under varying illumination can be approximated by the first nine spherical harmonics. Basri et al. [14] performed recognition by comparing a query face with the nearest images that can be synthesized from the 3D face models and their corresponding albedos under lighting conditions spanned by the first four harmonics. Both [13][14] assume a known pose, a Lambertian surface and no cast shadows.

The third category normalizes images to a canonical form by compensating for illumination variations. Histogram equalization [15] and gamma intensity correction [16] fall in this category. However, these methods are global and their output is affected by directional lighting. Shan et al. [17] partitioned a face image into four quarters and applied histogram equalization and gamma intensity correction in each quarter to eliminate side-lighting effects. However, due to the complexity of human faces, the patterns created by directional lighting cannot be correctly represented by predefined fixed regions. Another approach in this category estimates the direction of a single light source from a generic 3D face and average albedo, and relights the input face to a canonical form [18]. This approach does not cope well with complex lighting conditions caused by multiple light sources with varying intensities. Quotient image relighting is also used to relight face images to a canonical form [19][17]. However, it requires the computation of a bootstrap set (the ratios of images in non-canonical forms to a canonical form image) and assumes a similar shape for a given individual's face, which does not hold because the shape of the face changes significantly with expression. The authors of [19] also claim that their technique could be extended to color images, assuming illumination does not affect the hue and saturation of image colors. However, our findings (Section 2.1) show that the saturation of facial skin color varies significantly with changes in illumination.


We present a fully automatic illumination normalization algorithm for color facial images. Our algorithm falls in the third category, and unlike other techniques it takes into account cast shadows, multiple directional light sources (including extended light sources), the effect of illumination on colors, and both Lambertian and specular light reflections. In addition, it does not assume any prior knowledge about the facial pose or expression. Our approach is based on Phong's lighting model, which is widely used for rendering in computer graphics. We calculate the face albedo from a single color face image and its registered 3D model by reversing the image rendering process. First, the number and directions of the dominant light sources are automatically determined from the specularities in the facial skin. Next, the parameters of Phong's model and the intensities of the light sources are estimated by fitting Phong's model to the red, green and blue channels of the facial skin. Finally, the parameters of Phong's model, the intensities and directions of the light sources, and the original facial image and its 3D model are used to calculate the color face albedo. Experiments were performed on the Face Recognition Grand Challenge (FRGC) v2.0 [27] dataset (9,900 2D and 3D faces), which is challenging in the sense that the faces have major expression variations and are illuminated by varying extended light sources. Our results show that our algorithm can compensate for lighting variations without compromising the local features or affecting the color of the face albedo.

2 Reflective Properties of the Human Face

There are two types of light reflection from surfaces, namely Lambertian and specular. A Lambertian surface reflects the incident light equally in all directions, whereas a specular surface reflects the incident light mostly in the mirror reflection direction. In practice, surfaces reflect light with varying proportions of Lambertian and specular components. Shafer [20] showed that the sensor response R of a red, green or blue color channel to spectral light e(λ) reflected by a surface is given by

$$R = K_L(\hat{n}, \hat{s}) \int f(\lambda)\, e(\lambda)\, c_L(\lambda)\, d\lambda + K_S(\hat{n}, \hat{s}, \hat{v}) \int f(\lambda)\, e(\lambda)\, c_S(\lambda)\, d\lambda \qquad (1)$$

where f(λ) is the sensitivity of the color channel (a different f(λ) for every color channel), and c_L and c_S are the albedo and the Fresnel reflectance of the surface, respectively. The functions K_L and K_S scale the Lambertian and the specular components. K_S depends on the surface normal n̂, the light source direction ŝ and the viewer direction v̂, but K_L depends only on n̂ and ŝ.

As stated in Section 1, existing illumination normalization algorithms assume that the human face is a Lambertian surface. In this section, we show that it has a considerable specular component which is responsible for the variations in the color saturation of the facial skin. In fact, illumination causes even greater variations in the saturation than in the value of the facial skin color. Before proceeding to the details in Section 3, it is important to introduce Phong's lighting model (Section 2.1) and the HSV color space (Section 2.2).

2.1 Phong's Lighting Model

Phong's lighting model is widely used in computer graphics for rendering [21][22]. The model takes into consideration both the Lambertian and specular light reflections. Given a 3D computer model, its albedo, the lighting sources and the Phong model parameters, the image is rendered according to the following equation:

$$\begin{bmatrix} T_R \\ T_G \\ T_B \end{bmatrix} = \begin{bmatrix} A_R D_R \\ A_G D_G \\ A_B D_B \end{bmatrix} + \sum_{i=1}^{n} \left( k_l (\hat{N} \cdot \hat{S}) \begin{bmatrix} A_R L_{Ri} \\ A_G L_{Gi} \\ A_B L_{Bi} \end{bmatrix} + k_s (\hat{V} \cdot \hat{R})^m \begin{bmatrix} F_R L_{Ri} \\ F_G L_{Gi} \\ F_B L_{Bi} \end{bmatrix} \right) \qquad (2)$$

where A, D and L_i are the albedo of the surface, the ambient diffused light and the intensity of the i-th light source, respectively. $\hat{N} \cdot \hat{S}$ is the dot product between the surface normal $\hat{N}$ and the light source direction $\hat{S}$, and $\hat{V} \cdot \hat{R}$ is the dot product between the viewer direction $\hat{V}$ and the mirror reflection direction of the light source $\hat{R}$. The parameters k_l, k_s and m determine the extent to which a surface is Lambertian. F represents the Fresnel reflectance parameters of a surface, i.e. the ratios of the red, green and blue components reflected in a specular fashion. Since specularly reflected light from facial skin has the same color as the incident light, F = [1 1 1]^T.
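To make the rendering direction of Eq. 2 concrete, the following is a minimal Python sketch of the per-point evaluation. The function name, the clamping of the dot products to the lit side, and all sample parameter values are our illustrative assumptions, not part of the paper.

```python
import numpy as np

def phong_color(albedo, ambient, normal, view, lights, kl, ks, m,
                fresnel=np.ones(3)):
    """Evaluate Eq. 2 for a single surface point.

    albedo  : (3,) RGB albedo A
    ambient : (3,) ambient diffused light D
    normal  : (3,) unit surface normal N-hat
    view    : (3,) unit viewer direction V-hat
    lights  : list of (unit direction S-hat, (3,) intensity L_i) pairs
    fresnel : (3,) Fresnel reflectance F; [1 1 1] for skin (Sec. 2.1)
    """
    color = albedo * ambient                   # ambient term A * D
    for s, li in lights:
        ndots = max(np.dot(normal, s), 0.0)    # N.S, clamped to the lit side
        r = 2.0 * ndots * normal - s           # mirror reflection direction R-hat
        vdotr = max(np.dot(view, r), 0.0)      # V.R, clamped as well
        color += kl * ndots * albedo * li + ks * (vdotr ** m) * fresnel * li
    return color

# One white light from the upper left; all values here are illustrative.
light = (np.array([-0.5, 0.5, np.sqrt(0.5)]), np.full(3, 0.8))
print(phong_color(albedo=np.array([0.8, 0.6, 0.5]),
                  ambient=np.full(3, 0.2),
                  normal=np.array([0.0, 0.0, 1.0]),
                  view=np.array([0.0, 0.0, 1.0]),
                  lights=[light], kl=1.0, ks=4.0, m=1.5))
```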

2.2 HSV Color Space

In RGB format, a color is represented by the amounts of red, green and blue it contains. Fig. 1(a) shows the RGB color cube. If the three color channels are equally balanced, the color lies on the gray line (the diagonal connecting black to white). RGB is the most common color format because it is suitable for sensors and display devices. However, with regard to color perception, RGB is not always the best format, and it is sometimes desirable to represent colors by their hue, saturation and value (brightness) [23]. Fig. 1(b) shows the HSV color space, which is an affine transformation of the RGB color cube to a cone. The hue is the angle around the gray line, starting from red. The saturation axis is perpendicular to the gray line and ranges from 0 to 1; the further a color is from the gray line, the more saturated it is (0 on the gray line and 1 on the sides of the cone). The value of a color (range 0 to 1) is the distance from black to the projection of the color onto the gray line.
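As a quick illustration of these coordinates, Python's standard `colorsys` module converts RGB to HSV (it uses the standard hexcone HSV, which behaves like the cone described above for our purposes):

```python
import colorsys

# Pure red: hue 0, full saturation, full value.
print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))   # (0.0, 1.0, 1.0)

# Adding white light to red keeps the hue at 0 but halves the saturation,
# which is exactly the behaviour analysed for skin pixels in Section 2.3.
print(colorsys.rgb_to_hsv(1.0, 0.5, 0.5))   # (0.0, 0.5, 1.0)

# A colour on the gray line has zero saturation (its hue is reported as 0).
print(colorsys.rgb_to_hsv(0.7, 0.7, 0.7))   # (0.0, 0.0, 0.7)
```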

2.3 Variations in Facial Skin Color Due to Illumination

Phong's lighting model is based on the physics of light reflection and is closely analogous to the sensor response in Eq. 1. This is the main reason we use this model in our illumination normalization algorithm and analysis. Assuming white light sources (equal R, G and B), Phong's model reduces to

$$\begin{bmatrix} T_R \\ T_G \\ T_B \end{bmatrix} = I_D \begin{bmatrix} A_R \\ A_G \\ A_B \end{bmatrix} + \sum_{i=1}^{n} \left( k_l I_i (\hat{N} \cdot \hat{S}) \begin{bmatrix} A_R \\ A_G \\ A_B \end{bmatrix} + k_s I_i (\hat{V} \cdot \hat{R})^m \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \right) \qquad (3)$$

where I_i is the intensity of the i-th light source.

Fig. 1. An affine transformation relates HSV and RGB color spaces: (a) RGB color space, (b) HSV color space.

Facial skin of the same person generally has a fixed albedo. Let the number of light sources, their intensities, the geometry of the face, k_l, k_s and m have arbitrary values. Eq. 3 shows that the total color T equals the sum of n+1 vectors (the diffused and Lambertian components) in the direction of the albedo and n vectors (the specular components) in the direction of the white color. Since there are only two independent vectors (the albedo and the white color), the resultant color T always lies in the plane spanned by the albedo and the white color (see Fig. 2(a)). This implies that T has a constant hue, because it does not rotate around the gray line; the saturation and value, however, will vary. As the value increases the saturation decreases (also see Fig. 2(d)), because the specular components bring T closer to the gray line. Scatter plots of the hue, saturation and value of facial skin data (each point represents a pixel) agree with these conclusions (see Fig. 2). Fig. 2(a) shows that there is very limited variation in the hue of the skin but the value varies significantly. Fig. 2(b) shows that there is significant variation in the saturation, which means that the human face reflects a large proportion of the incident light specularly. Fitting Phong's lighting model to facial skin data shows that the ratio of the Lambertian to the specular reflection (k_l : k_s) ranges from 1:4 to 1:6.5 (see Sections 3.4 and 4).
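The hue-constancy argument can be checked numerically: any color of the form a·A + b·[1 1 1]^T has the same hue regardless of a and b. Below is a small sanity check; the skin-like albedo and weight ranges are assumed values, not measurements from the paper.

```python
import colorsys
import numpy as np

rng = np.random.default_rng(0)
albedo = np.array([0.8, 0.55, 0.45])   # assumed skin-like albedo A

hues = []
for _ in range(5):
    a = rng.uniform(0.2, 1.0)          # total diffuse + Lambertian weight
    b = rng.uniform(0.0, 0.3)          # total specular weight
    t = a * albedo + b * np.ones(3)    # resultant colour T (Eq. 3 structure)
    h, s, v = colorsys.rgb_to_hsv(*t)  # channel values may exceed the display
    hues.append(round(h, 6))           # range; the hue arithmetic is unaffected

print(hues)   # identical hue for every (a, b); only s and v change
```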

3 Illumination Normalization Algorithm

In real-world environments, illumination conditions are very complex: for example, light is reflected onto the face by surrounding objects, and light sources may be extended. However, these conditions can be approximated by a single diffused light and a few directional light sources. These unknowns and the parameters of Phong's model are estimated by exploiting the variations in color saturation and value in a facial skin image, which are mainly caused by illumination (Section 2). The following subsections give the details of our algorithm.

Fig. 2. Illustrations and tests which show that white illumination causes variations in the value and the saturation, but not the hue, of a facial skin image: (a) the diffused, Lambertian and specular components and the resultant color in RGB space; (b)-(d) scatter plots of facial skin pixels.

3.1 Skin Detection

The skin hue of different human races is very similar [24], and hue statistics are widely used in skin detection [24][25]. The mean μ_h and standard deviation σ_h of the skin hue are computed from many training images. A pixel p is considered a skin pixel if its hue h_p is within μ_h ± 2.5σ_h:

$$S_g = \{p \mid \mu_h - 2.5\sigma_h < h_p < \mu_h + 2.5\sigma_h\} \qquad (4)$$

However, the skin pixel set S_g will include pixels from non-skin regions such as the lips and eyebrows, because σ_h, being computed from a large set of skin training data, may be larger than the hue differences between the skin and other facial regions of the same face. To overcome this problem, we compute person-specific hue statistics μ_hs and σ_hs from S_g and apply another skin detection iteration using μ_hs and σ_hs to produce a more accurate skin pixel set S_s. Since σ_hs ≪ σ_h, pixels with slight hue differences from the skin are not included in S_s. Pixels whose color values are less than a threshold V_th are dropped from S_s because they are not reliable. Fig. 3(a) and 3(b) show a face image and its detected skin mask, respectively.
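A minimal sketch of this two-pass detector follows, assuming HSV inputs; the value threshold of 0.1 is an assumed default (the paper does not give V_th), and hue wrap-around is ignored for brevity.

```python
import numpy as np

def detect_skin(hue, value, mu_h, sigma_h, v_th=0.1):
    """Two-pass skin detection on the hue channel (Sec. 3.1).

    hue, value   : (H, W) float arrays of the HSV face image
    mu_h, sigma_h: generic skin-hue statistics from training images
    v_th         : value threshold below which hue is unreliable (assumed)
    """
    # Pass 1: generic statistics give the coarse skin set S_g (Eq. 4).
    sg = np.abs(hue - mu_h) < 2.5 * sigma_h

    # Pass 2: person-specific statistics over S_g give the tighter set S_s.
    mu_hs, sigma_hs = hue[sg].mean(), hue[sg].std()
    ss = np.abs(hue - mu_hs) < 2.5 * sigma_hs

    # Drop dark pixels, whose hue estimates are noisy.
    return ss & (value > v_th)
```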

3.2 Detection of the Dominant Light Sources

Specularities (highlights) occur at regions where the mirror reflection direction $\hat{R}$ of a light source is very close to the direction of the viewer (sensor) $\hat{V}$. At these regions $\hat{V} \cdot \hat{R}$ is maximal, which means that there is more specular than Lambertian light reflection. In Section 2, we showed that specular reflection decreases the saturation of a color. Since specular light reflection from facial skin affects saturation more than value, saturation is the more reliable cue for the detection of specularities. A few hundred skin pixels with the minimum saturations are selected. These highlighted pixels may correspond to one or more light sources. For each pixel, the direction of the light source $\hat{S}$ is computed given the viewer direction $\hat{V} = [0\;0\;1]^T$ and the surface normal $\hat{N}$ from the corresponding point in the 3D model, as shown in the following equations:

$$\theta_o = \cos^{-1}\!\left(\hat{N} \cdot [0\;0\;1]^T\right) \qquad (5)$$

$$\hat{P} = \hat{N} \times [0\;0\;1]^T \qquad (6)$$

$$\begin{bmatrix} S_x \\ S_y \\ S_z \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ N_x & N_y & N_z \\ P_x & P_y & P_z \end{bmatrix}^{-1} \begin{bmatrix} \cos(2\theta_o) \\ \cos(\theta_o) \\ 0 \end{bmatrix} \qquad (7)$$

where $\theta_o$ and $\hat{P}$ are the angle and the cross product between $\hat{N}$ and $\hat{V}$, respectively. Azimuth and elevation angles of the light source directions are calculated by converting the $\hat{S}$ vectors from rectangular to spherical coordinates. The azimuth and elevation angles of the highlight pixels which are caused by the same light source cluster together (see Fig. 3(d) and 3(e)). Hierarchical clustering [26] with Cartesian distance as the similarity measure is used to cluster the light directions into 10 clusters. The clusters which have fewer entries than 15% of the total are discarded. The remaining clusters represent the dominant light sources. For each dominant light source, the average direction (the center of gravity of the cluster) is computed. Wide direction clusters are divided into multiple clusters (i.e. an extended light source is represented by multiple light sources).
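A sketch of Eqs. 5-7 followed by the direction clustering is given below; SciPy's hierarchical clustering stands in for [26], and the cluster count and discard fraction come from the text while the linkage default is an assumption.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def light_direction(n, v=np.array([0.0, 0.0, 1.0])):
    """Recover the light direction S from one specular pixel (Eqs. 5-7).
    Pixels with N almost parallel to V make the system ill-conditioned
    and would be skipped in practice."""
    theta = np.arccos(np.clip(np.dot(n, v), -1.0, 1.0))  # Eq. 5
    p = np.cross(n, v)                                   # Eq. 6
    A = np.vstack([v, n, p])                             # V.S, N.S, P.S constraints
    b = np.array([np.cos(2.0 * theta), np.cos(theta), 0.0])
    return np.linalg.solve(A, b)                         # Eq. 7

def dominant_lights(normals, n_clusters=10, min_frac=0.15):
    """Cluster the recovered directions into dominant light sources."""
    dirs = np.array([light_direction(n) for n in normals])
    az = np.arctan2(dirs[:, 1], dirs[:, 0])              # azimuth
    el = np.arcsin(np.clip(dirs[:, 2], -1.0, 1.0))       # elevation
    labels = fcluster(linkage(np.column_stack([az, el])),
                      n_clusters, criterion='maxclust')
    lights = []
    for k in np.unique(labels):
        members = dirs[labels == k]
        if len(members) >= min_frac * len(dirs):         # keep strong clusters
            mean_dir = members.mean(axis=0)              # cluster centre of gravity
            lights.append(mean_dir / np.linalg.norm(mean_dir))
    return lights
```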

3.3 Finding Shadows of a Light Source

The human face is self-shadowing. The shadows of an ideal directional light source have a sharp transition between shadow and light; however, this is not the case in real-world illumination. At the edges of a shadow the light source is partially visible, resulting in continuous shadows. We use fuzzy sets to produce continuous shadows from the direction cluster of a light source. Each cluster is divided into sectors, as shown in Fig. 4(a). The directions in each sector are

represented by a direction $\hat{D}_R$. Let $\hat{D} = [\theta\;\phi]^T$ represent an individual direction, $\bar{D}$ the mean direction of the cluster, and $n$ the number of directions in a sector. $\hat{D}_{Ri}$ of the $i$-th sector is computed as follows:

$$\hat{M}_{si} = \frac{1}{n}\sum_{j=1}^{n}(\hat{D}_j - \bar{D})(\hat{D}_j - \bar{D}) \qquad (8)$$

$$\hat{D}_{RMSi} = \sqrt{\|\hat{M}_{si}\|}\;\frac{\hat{M}_{si}}{\|\hat{M}_{si}\|} \qquad (9)$$

$$\hat{D}_{Ri} = \hat{D}_{RMSi} + \bar{D} \qquad (10)$$

Fig. 3. (a) Original face image. (b) Skin mask. (c) 1000 randomly selected representative skin pixels. (d) Detected specularities in facial skin. (e) Direction clustering: although the specular pixels are spatially disconnected in this case, they all point roughly in one direction.

Fig. 4. (a) A direction cluster is divided into 4 sectors. (b)-(f) The shadows of the sector-representing directions and the average direction. (g) The fuzzy shadows of the direction cluster.

The root mean square of the sector directions $\hat{D}_{RMS}$ is used to push $\hat{D}_R$ away from $\bar{D}$. The 3D model is ray-cast by directional lights from $\bar{D}$ and the $\hat{D}_R$ of every sector (see Fig. 4) to find the crisp shadow images of these directional light sources (0 for shadowed and 1 for shined pixels). The fuzzy shadow membership $M$ of the cluster is calculated by averaging the crisp shadow images and smoothing the resultant image with an average filter. At the center of a shined region $M$ equals 1, but at the edges it gradually changes from 1 to 0.
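Combining the crisp shadow images into the fuzzy membership M might look as follows; the ray-caster that produces the crisp maps is assumed to be provided elsewhere, and the smoothing window size is an assumed value.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuzzy_shadow(crisp_maps, smooth=7):
    """Fuzzy shadow membership M of one light cluster (Sec. 3.3).

    crisp_maps : list of (H, W) binary shadow images, one per ray-cast
                 direction (0 = shadowed, 1 = shined); the ray-caster over
                 the 3D face model that produces them is not shown here.
    smooth     : side length of the averaging window (assumed value)
    """
    m = np.stack(crisp_maps).mean(axis=0)      # average the crisp images
    return uniform_filter(m, size=smooth)      # average-filter smoothing
```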

3.4 Estimation of Light Intensities and Phong's Parameters

At this stage, the number of the dominant light sources, their directions and fuzzy shadows M have been calculated. The intensities of the light sources and Phong’s


model parameters are estimated by fitting the model to the facial skin, with the directions and shadows treated as known variables, $k_l = 1$ fixed, and the intensities, an albedo A, $k_s$ and $m$ as the free variables. The $k_l$ parameter is kept constant while $k_s$ is variable in order to avoid redundancy among the free variables. For efficiency, the model is fitted on only 1000 randomly selected skin pixels (see Fig. 3(c)). Experimental results show no noticeable degradation in performance when using only 1000 skin pixels, while the fitting time is considerably reduced. Fitting is performed by minimizing the differences between the original skin pixels O and those generated by the model T:

$$\begin{bmatrix} T_{Ri} \\ T_{Gi} \\ T_{Bi} \end{bmatrix} = |I_D| \begin{bmatrix} |A_R| \\ |A_G| \\ |A_B| \end{bmatrix} + \sum_{j=1}^{n} |I_j| \left( M_{ij}\,(\hat{N} \cdot \hat{S}) \begin{bmatrix} |A_R| \\ |A_G| \\ |A_B| \end{bmatrix} + M_{ij}\,|k_s|\,(\hat{V} \cdot \hat{R})^{|m|} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \right) \qquad (11)$$

$$F = \sum_{i=1}^{1000} \left( |O_{Ri} - T_{Ri}| + |O_{Gi} - T_{Gi}| + |O_{Bi} - T_{Bi}| \right) \qquad (12)$$

$$\{A_R, A_G, A_B, I_D, I_1, \cdots, I_n, k_s, m\} = \arg\min(F) \qquad (13)$$

Since the free variables cannot be negative, their absolute values are used in Eq. 11 to avoid a less efficient constrained minimization algorithm. This makes every orthant of the objective function F symmetric to the positive orthant; irrespective of which orthant the algorithm converges in, we take the absolute values of the variables. $k_s$ and $m$ do not vary significantly among different faces. By running the algorithm on many faces, the best results in terms of fitting error and quality of illumination normalization have $k_s$ in the range 4 to 6.5 and $m$ in the range 1 to 2. Constraining these two parameters to these ranges gives better results and faster convergence, especially when there are many light sources. These constraints are imposed through the objective function, again to avoid a constrained minimization algorithm:

$$F = e^{|1.2(k_s - 5.25)|^{1.5}} + e^{|3(m - 1.5)|^{1.5}} + \sum_{i=1}^{1000} \left( |O_{Ri} - T_{Ri}| + |O_{Gi} - T_{Gi}| + |O_{Bi} - T_{Bi}| \right) \qquad (14)$$

The first two terms have negligible cost when the variables are within their ranges, but they grow exponentially outside them.
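A sketch of Eq. 14 as an unconstrained objective, suitable for a derivative-free minimizer such as Nelder-Mead, is shown below; the argument layout and the clamping of V·R to the non-negative range are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x, O, ndots, vdots, M):
    """Eq. 14: range penalties on ks and m plus the Eq. 12 fitting error.

    x     : [A_R, A_G, A_B, I_D, I_1..I_n, ks, m]; absolute values are
            taken inside, so the minimizer runs unconstrained
    O     : (1000, 3) sampled skin colours
    ndots : (1000, n) N.S per pixel and light source
    vdots : (1000, n) V.R per pixel and light source (clamped below)
    M     : (1000, n) fuzzy shadow memberships
    """
    n = ndots.shape[1]
    A, I_D = np.abs(x[:3]), abs(x[3])
    I, ks, m = np.abs(x[4:4 + n]), abs(x[4 + n]), abs(x[5 + n])

    lam = I_D + (M * I * ndots).sum(axis=1)                    # shading weight
    spec = ks * (M * I * np.clip(vdots, 0, None) ** m).sum(axis=1)
    T = lam[:, None] * A + spec[:, None]                       # Eq. 11, kl = 1

    err = np.abs(O - T).sum()                                  # Eq. 12
    return (np.exp(abs(1.2 * (ks - 5.25)) ** 1.5) +
            np.exp(abs(3.0 * (m - 1.5)) ** 1.5) + err)         # Eq. 14

# e.g.:
# res = minimize(objective, x0, args=(O, ndots, vdots, M), method='Nelder-Mead')
```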

3.5 Calculation of Face Albedo

Once the lighting conditions and Phong's parameters are known, the face albedo can be computed for every pixel P of the original face image as shown in Eq. 15 (see Fig. 5). The specular components are first subtracted from P; the difference represents the Lambertian components, from which the albedo A is computed.

$$\begin{bmatrix} A_{Ri} \\ A_{Gi} \\ A_{Bi} \end{bmatrix} = \begin{bmatrix} \left( P_{Ri} - \sum_{j=1}^{n} M_{ij} k_s I_j (\hat{V} \cdot \hat{R})^m \right) / \left( I_D + \sum_{j=1}^{n} M_{ij} I_j (\hat{N} \cdot \hat{S}) \right) \\ \left( P_{Gi} - \sum_{j=1}^{n} M_{ij} k_s I_j (\hat{V} \cdot \hat{R})^m \right) / \left( I_D + \sum_{j=1}^{n} M_{ij} I_j (\hat{N} \cdot \hat{S}) \right) \\ \left( P_{Bi} - \sum_{j=1}^{n} M_{ij} k_s I_j (\hat{V} \cdot \hat{R})^m \right) / \left( I_D + \sum_{j=1}^{n} M_{ij} I_j (\hat{N} \cdot \hat{S}) \right) \end{bmatrix} \qquad (15)$$
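In vectorized form, Eq. 15 is a subtraction of the specular term followed by a division by the shading term; here is a sketch under the same assumed array layout as in the fitting sketch above.

```python
import numpy as np

def face_albedo(P, ndots, vdots, M, I_D, I, ks, m):
    """Per-pixel albedo via Eq. 15.

    P            : (N, 3) pixels of the original face image
    ndots, vdots : (N, n) N.S and V.R per pixel and light source
    M            : (N, n) fuzzy shadow memberships
    I_D, I, ks, m: fitted lighting parameters (I holds one intensity
                   per dominant light source)
    """
    spec = ks * (M * I * np.clip(vdots, 0, None) ** m).sum(axis=1)
    shading = I_D + (M * I * ndots).sum(axis=1)
    # Subtract the specular component, then divide out the shading.
    return (P - spec[:, None]) / shading[:, None]
```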

4 Results and Discussions

The algorithm was tested on the FRGC v2.0 [27] dataset, which contains a large number of co-registered face images and 3D models. Fig. 5 shows some sample images (in color) and the corresponding normalized images produced by our algorithm. A qualitative analysis of the normalized images shows that our algorithm removes a substantial amount of the effects of illumination variation. The bright regions of a face, which are shined by the dominant light sources, become more similar in color saturation and value to the shadowed regions, while the fine details of the face albedo are preserved; for example, the red dots in the first face from the left are preserved in the normalized image. Since we only use the dominant light sources, the resulting face albedo (as computed in Eq. 15) looks more like a frontal relighting of the face than an ideal albedo, which means that no additional frontal relighting stage is needed. Fig. 5(b) shows a histogram of the fitting error between the original facial skin and the lighting model (the error is defined as in Eq. 12). The average fitting error over the 1000 used skin pixels (each color channel ranges from 0 to 255) is 4.8 × 10^4, which gives an average fitting error of 16 per pixel per channel (about 6.3% of the maximum channel value).

Fig. 5. (a) Normalized face images: the original images (top) and the normalized images (bottom); the figure is best viewed in color. (b) Histogram of the fitting error.

5 Conclusion

We presented a fully automatic algorithm for the illumination normalization of color face images with minimal assumptions. Our algorithm can accurately estimate the directions and intensities of multiple dominant light sources as well as the reflective properties of a face, and it calculates the face albedo by reversing the process of image formation. We stressed the importance of considering the specular reflection of the human face and the cast shadows of multiple extended light sources in illumination normalization for facial images. The algorithm was tested on the challenging FRGC v2.0 database, which contains complex illumination and facial expression variations. Our results show that the algorithm is robust and accurate.

References

1. H. Chen, P. Belhumeur and D. Jacobs, "In Search of Illumination Invariants," IEEE Conference on Computer Vision and Pattern Recognition, pp. 254-261, 2000.
2. Y. Adini, Y. Moses and S. Ullman, "Face Recognition: The Problem of Compensating for Changes in Illumination Direction," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19(7), pp. 721-732, 1997.
3. D. G. Lowe, "Object Recognition from Local Scale-Invariant Features," International Conference on Computer Vision, pp. 1150-1157, 1999.
4. Y. Gao and M. Leung, "Face Recognition Using Line Edge Map," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24(6), pp. 764-779, 2002.
5. B. Moghaddam and A. Pentland, "Probabilistic Visual Learning for Object Representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19(7), pp. 694-710, 1997.
6. Y. Weiss, "Deriving Intrinsic Images from Sequences," International Conference on Computer Vision, 2001.
7. D. Jacobs, P. Belhumeur and R. Basri, "Comparing Images under Variable Illumination," IEEE Conference on Computer Vision and Pattern Recognition, pp. 610-617, 1998.
8. C. Liu and H. Wechsler, "A Gabor Feature Classifier for Face Recognition," International Conference on Computer Vision, 2001.
9. B. Duc, S. Fischer and J. Bigun, "Face Authentication with Gabor Information on Deformable Graphs," IEEE Transactions on Image Processing, vol. 8(4), pp. 504-516, 1999.
10. M. Lyons, J. Budynek and S. Akamatsu, "Automatic Classification of Single Facial Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21(12), pp. 1357-1362, 1999.
11. A. Georghiades, P. Belhumeur and D. Kriegman, "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23(6), pp. 643-660, 2001.
12. P. Belhumeur and D. Kriegman, "What is the Set of Images of an Object under All Possible Illumination Conditions?," International Journal of Computer Vision, vol. 28(3), pp. 245-260, 1998.


13. R. Ramamoorthi and P. Hanrahan, "On the Relationship between Radiance and Irradiance: Determining the Illumination from Images of a Convex Lambertian Object," Journal of the Optical Society of America A, vol. 18(10), pp. 2448-2459, 2001.
14. R. Basri and D. Jacobs, "Lambertian Reflectance and Linear Subspaces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25(2), pp. 218-232, 2003.
15. S. Pizer et al., "Adaptive Histogram Equalization and Its Variations," Computer Vision, Graphics, and Image Processing, vol. 39(3), pp. 355-368, 1987.
16. A. Bovik, "Handbook of Image and Video Processing," Academic Press, 2000.
17. S. Shan, W. Gao, B. Cao and D. Zhao, "Illumination Normalization for Robust Face Recognition Against Varying Lighting Conditions," IEEE Workshop on AMFG, 2003.
18. C. Lee and B. Moghaddam, "A Practical Face Relighting Method for Directional Light Normalization," AMFG, 2005.
19. T. Riklin-Raviv and A. Shashua, "The Quotient Image: Class-Based Re-Rendering and Recognition with Varying Illuminations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23(2), pp. 129-139, 2001.
20. S. Shafer, "Using Color to Separate Reflection Components," COLOR Research and Application, vol. 10(4), pp. 210-218, 1985.
21. B. Phong, "Illumination for Computer Generated Pictures," Communications of the ACM, vol. 18(6), pp. 311-317, 1975.
22. J. Foley, A. van Dam, S. Feiner and J. Hughes, "Computer Graphics: Principles and Practice," Addison-Wesley, 1990.
23. J. Wang, W. Yang and R. Acharya, "Color Clustering Techniques for Color-Content-Based Image Retrieval from Image Databases," IEEE International Conference on Multimedia Computing and Systems, pp. 442-449, 1997.
24. K. Sobottka and I. Pitas, "A Novel Method for Automatic Face Segmentation, Facial Feature Extraction and Tracking," Signal Processing: Image Communication, vol. 12(3), pp. 263-281, 1998.
25. S. Phung, A. Bouzerdoum and D. Chai, "Skin Segmentation Using Color Pixel Classification: Analysis and Comparison," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27(1), pp. 148-154, 2005.
26. A. Jain, M. Murty and P. Flynn, "Data Clustering: A Review," ACM Computing Surveys, vol. 31(3), pp. 264-323, 1999.
27. P. Phillips, P. Flynn, T. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min and W. Worek, "Overview of the Face Recognition Grand Challenge," IEEE Computer Vision and Pattern Recognition, vol. 1, pp. 947-954, 2005.