MLTP Based Contact Lens Detection in Iris Recognition for Anti-Spoofing

K. Sheela Sobana Rani¹, M. Yuvaraju²

¹ Associate Professor, Dept. of Electronics and Communication Engineering, Sri Ramakrishna Institute of Technology, Coimbatore, India
² Assistant Professor, Dept. of Electrical and Electronics Engineering, Anna University Regional Centre, Coimbatore, India

International Journal of Research in Electronics & Communication Technology, Volume 3, Issue 4, July-August 2015, pp. 11-19. ISSN Online: 2347-6109, Print: 2348-0017. DOA: 29082015. © IASTER 2015

ABSTRACT

Security has become a major concern in recent times. A threat arises when an unwanted individual tries to obtain access to a place or an asset. A person's identity can be established from biometric details, and iris recognition is one of the best biometric methods for human identification and verification. However, iris recognition systems are susceptible to spoof attacks, one of which is wearing a contact lens. The presence of a contact lens poses a challenge to iris recognition because it obscures the natural iris pattern, so textured contact lens detection is an important step in preventing spoofing. This paper utilizes the IIIT-D databases for lens detection. In this work, the MLTP algorithm is used for feature extraction and a feed forward neural network is used for lens classification. A set of histogram features is computed for the LTP to construct the feature vector. To improve feature extraction, MLTP combines local ternary patterns with derivatives computed from the iris images, and the performance of the existing and proposed methods is compared. This helps improve the usability and reliability of iris recognition systems by detecting and removing iris images with textured contact lenses.

Keywords: Contact Lens Detection, Spoofing, MLTP, Neural Network.

I. INTRODUCTION

The term biometrics means identification or authentication of an individual based on certain unique features or characteristics. Biometric identifiers fall into two main categories: physiological and behavioural characteristics. Iris, DNA, fingerprint, etc. belong to the former, while gait, voice, etc. belong to the latter. The applications of iris recognition [1] include online business, information security, ATM security, and police and government security agencies keeping records of criminals. Many improvements have occurred in the field of iris recognition, and the number of contact lens wearers has increased in recent years.
Contact lenses are used to correct eyesight and for cosmetic reasons. The color and texture of a contact lens can superimpose on the natural iris pattern; the lens can change the optical properties of the eye and thereby reduce the overall accuracy of iris biometric systems. Spoofing is a method by which one tries to hide one's own identity, so contact lens detection is an important anti-spoofing measure. People use two types of lenses, soft and hard, but soft lenses do not cause as much of a problem as hard contact lenses. MLTP (Modified Local Ternary Pattern) based classification is the novel algorithm used to detect the presence of a textured contact lens on a person's iris. Each image is segmented into iris, sclera, and pupil; features are then calculated and applied to a training model. MLTP is used for feature extraction and a



feed forward neural network is used for training on those features. An iris image can thus be classified according to whether the person wears no lens, a colored lens, or a transparent lens, and a comparison is performed between the existing method (MLBP) and the proposed method (MLTP). Hence, spoofing activities carried out through the iris can be reduced.

II. RELATED WORK

Several studies have been conducted in the field of iris recognition. In [2], the Fourier transform was used to detect periodic fake iris patterns: the FFT checks for a printed iris pattern via its high-frequency spectral magnitude. This cannot detect a fake when the iris image is defocused or purposely blurred, and new types of lenses with multiple layers of printing defeat this method. In [3], Purkinje images, which differ between a live iris and a fake iris, were used; the theoretical positions of and distances between Purkinje images were calculated based on a human eye model, but the dataset used was too small to draw generalized conclusions. In [4], a support vector machine was suggested for training on texture features from a gray-level co-occurrence matrix. In [5], images were classified based on the object categories they contain, using the pyramid of histograms of orientation gradients image descriptor. In [6], three methods for contact lens detection were proposed, namely iris edge sharpness measures, texture through iris-textons, and the co-occurrence matrix, evaluated on two datasets, CASIA and BATH. In [7], multi-scale local binary patterns were proposed as the feature extraction method with the Adaboost algorithm as the classifier. In [8], weighted LBP was proposed for contact lens detection, using Gaussian-smoothed and SIFT-weighted local binary patterns with an SVM classifier to distinguish genuine from counterfeit iris images. In [9], the MLBP algorithm was proposed for feature extraction, and the effect of contact lenses on iris recognition was studied.

III. IIIT-D DATABASES

The IIIT-D Contact Lens Iris database has three objectives: (1) images of at least 100 subjects are captured, (2) images without lens, with soft lens, and with textured lens are captured, and (3) images with variations in lenses and iris sensors are captured.



Fig.1. Iris Images in IIIT-D Contact Lens Iris Database: (A) Images Captured Using Cogent Iris Sensor and (B) Images Captured Using Vista Sensor

The IIIT-D Contact Lens Iris database comprises 6570 iris images of 101 persons. There are 202 iris classes, since the database contains images of both left and right irises. The lenses used in the database are soft lenses manufactured by CIBA Vision [10] and Bausch & Lomb [11]; for textured lenses, four colors are used. Two iris sensors are used for capturing eye images, (1) the Cogent dual iris sensor and (2) the VistaFA2E single iris sensor, to study the effect of the acquisition device. At least three images for each iris class are present in the database for both sensor types. Figure 1 shows sample images from the IIIT-D CLI database.

IV. PROPOSED METHOD

This work presents a methodology for detecting contact lenses in iris recognition systems. The workflow of contact lens detection in the human iris identification process is shown in Figure 2.



4.1 Eye Image Acquisition

This step captures a high-quality image [9] of the eye. The acquired iris image must have adequate resolution and sharpness to obtain the best recognition result. The IIIT-D databases are used here for obtaining input images. The images are captured using two types of iris sensors, the Cogent dual iris sensor and the Vista single iris sensor. The database contains three images for each iris class in the lens categories for both iris sensors. The interior iris pattern should have good contrast without resorting to a level of illumination that annoys the operator, i.e., the source intensity must be adequate yet constrained by operator comfort with brightness. These images [10] must be well framed without excessively constraining the operator.

Fig. 2. Flow of Work

4.2 Segmentation

The input images are segmented into iris, sclera, and pupil. The Hough transform is an efficient and standard computer vision algorithm used to find the parameters of geometric objects, such as rectangles and circles, present in an image from which features are to be extracted. The circular Hough transform can be used to calculate the radii of the pupil and iris regions. Segmentation is followed by localization.

4.3 Localization

Iris localization is the process of isolating the iris region from the rest of the acquired image. Two circles bound the iris: the iris/sclera boundary and the iris/pupil boundary. The iris localization unit contains three parts: (i) pupil detection and iris boundary detection, (ii) eyelid detection, and (iii) eyelash detection.

4.4 Feature Extraction

The known segmentation provided by the IIIT-D dataset is used to split each iris image into three regions: (1) pupil, (2) iris, and (3) sclera. Modified Local Ternary Pattern analysis is applied to each region of each image at multiple scales to produce feature values. MLBP (Modified Local Binary Pattern) is an existing method to detect contact lenses in iris recognition, but it has some limitations: (i) it is sensitive to noise, mainly in near-uniform regions, (ii) it supports only a binary-level assessment for encoding, and (iii) it is insufficient to represent texture information in detail, since it builds the feature matrix only from 0 and 1 on the basis of the centre pixel value. To overcome these problems, the new MLTP method is used to detect contact lenses accurately. The Local Ternary Pattern (LTP) [12] is a strong texture descriptor used in computer vision applications including industrial inspection, motion analysis, face recognition, and iris recognition. LTP [13] is represented as a three-valued code.
In LTP, the gray-level values in the zone of width ±T around the centre pixel Gc are quantized to 0, values above (Gc + T) are quantized to +1, and values below (Gc − T) are quantized to −1. Local Ternary Patterns overcome the difficulties caused by the Local Binary Pattern: instead of thresholding pixels into 0 and 1, LTP uses a user-specified threshold T to quantize pixels into three values, so the LTP code is more resistant to noise. The normal LTP is then combined with derivatives computed from the image to obtain the MLTP pattern. The construction of the LTP feature vector is shown in Figure 3.
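The three-valued quantization described above can be sketched in code. The following is a minimal NumPy illustration of the LTP encoding for a single 3×3 neighborhood and its split into the conventional upper and lower binary patterns; the function names are illustrative, not from the paper:

```python
import numpy as np

def ltp_codes(patch, T):
    """Quantize a 3x3 neighborhood into an LTP ternary code:
    values within +/-T of the centre Gc map to 0, values above
    (Gc + T) map to +1, and values below (Gc - T) map to -1."""
    gc = patch[1, 1]
    neighbours = np.delete(patch.flatten(), 4)  # the 8 neighbours, row order
    ternary = np.zeros(8, dtype=int)
    ternary[neighbours > gc + T] = 1
    ternary[neighbours < gc - T] = -1
    return ternary

def ltp_to_binary_pair(ternary):
    """Split the ternary code into the conventional upper and lower
    binary patterns so each can be histogrammed like an LBP code."""
    weights = 1 << np.arange(8)         # bit weights 1, 2, 4, ..., 128
    upper = (ternary == 1).astype(int)  # marks the +1 positions
    lower = (ternary == -1).astype(int) # marks the -1 positions
    return int(upper @ weights), int(lower @ weights)
```

Histogramming the upper and lower codes over a whole region, concatenating the histograms, and normalizing them would then give the per-region feature vector described above.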



Fig.3. LTP Computation

In the figure, a pixel neighborhood is at left, its thresholded version is in the middle, and the corresponding binary LTP patterns with the computed LTP codes are at right. After the LTP is computed, a set of histogram features and derivatives from the iris image are computed together to construct the MLTP feature, and the histogram is then normalized to obtain the feature vector.

4.5 Classification

An artificial neural network [14] is used for classification after the features are extracted from the iris. The Feed Forward Back Propagation neural network [15], shown in Figure 4, is a multiple-layer network whose transfer function is differentiable and nonlinear. To train the network, both input vectors and corresponding target vectors are needed. The grayscale values of the iris pattern are given as inputs. The back propagation algorithm involves two passes, forward and backward, which alternate several times as the algorithm scans the training data. During the forward pass, the output of every neuron in the network is computed, starting from the first hidden layer. During the backward pass, the error is propagated and the weights are adjusted; the error at each output-layer neuron is computed in this phase.

Fig. 4. Feed Forward Neural Network

Several parameters must be considered in a feed forward back propagation network: the initial weight range, the number of nodes in the hidden layer, the number of epochs, the learning rate, the hidden-layer sigmoid, the output-layer sigmoid, and the critical error.

4.5.1 Neural Network Training

The steps required to train the back propagation neural network model for contact lens detection in the iris recognition system are as follows:
1) Determine the architecture of the feed forward ANN (Artificial Neural Network) model. The parameters to be set include the total number of layers and the number of neurons in the input, hidden, and output layers.
2) Initialize the weights and bias unit in the ANN.
3) Initialize the learning rate in the range (0.1-0.9), the momentum variable in the range (0.1-0.9), and the threshold error with a small value.
4) Initialize the target (desired) output vector for each input vector of the input dataset of iris images.



5) Determine the training algorithm [12] used to train this ANN model for detecting contact lenses in iris recognition.
6) Apply the input vector and calculate the output of each layer to obtain the final output vector. Compute the error between the network output and the desired output. Based on this error, the weights are adjusted and training is either stopped or repeated; the entire process repeats until the error is equal to or less than the error threshold, at which point training stops.

4.5.2 Neural Network Testing

After the training process, the neural network must be tested. The testing process of the ANN model is performed as follows:
1) Apply the feature values of the irises to the input-layer neurons.
2) Calculate the outputs of all layers of the neural network to obtain the final outputs.
3) The iris is recognized by the ANN model if the output vector obtained is the same as the target vector, i.e., if its MSE value is sufficiently small. Otherwise, the ANN does not recognize this iris feature, i.e., its MSE value is large.
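As an illustration of the training loop described in steps 1)-6), the following is a minimal NumPy sketch of a one-hidden-layer feed forward network trained with back propagation under an MSE criterion. The layer sizes, learning rate, and toy data are assumptions for illustration, not the parameters used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny feed forward network: n_in -> n_hidden -> 3 output classes
# (no lens, transparent lens, colored lens). Sizes are illustrative.
n_in, n_hidden, n_out = 8, 6, 3
W1 = rng.normal(0, 0.5, (n_in, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, n_out))
lr = 0.5  # learning rate, chosen within the (0.1-0.9) range above

def train_step(x, target):
    """One forward/backward pass with MSE loss."""
    global W1, W2
    h = sigmoid(x @ W1)              # forward pass: hidden activations
    y = sigmoid(h @ W2)              # forward pass: outputs
    err = y - target
    d_out = err * y * (1 - y)        # backward pass: output-layer delta
    d_hid = (d_out @ W2.T) * h * (1 - h)  # propagate error to hidden layer
    W2 -= lr * np.outer(h, d_out)    # adjust weights
    W1 -= lr * np.outer(x, d_hid)
    return float(np.mean(err ** 2))

# Train on a toy feature vector; in practice training repeats until
# the MSE falls below the threshold error.
x = rng.random(n_in)
t = np.array([1.0, 0.0, 0.0])        # one-hot target: "no lens"
mse = [train_step(x, t) for _ in range(200)]
```

The MSE sequence decreases over the 200 iterations, mirroring the stopping criterion in step 6).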

V. SIMULATION RESULTS AND DISCUSSION

The proposed work is implemented using MATLAB software.

5.1 Input Images

For the implementation of the proposed algorithm, the iris images are given directly as input; no capture device is used. Images downloaded from the standard IIIT-D iris contact lens databases are used to test the algorithm, with 10 samples chosen for each category. Figure 5 shows the screenshot of an input image, which is then converted to a gray-level image and smoothened.
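The gray-level conversion and smoothing just mentioned can be sketched as follows. This is a minimal NumPy stand-in: the weights match the common ITU-R BT.601 convention (as used by MATLAB's rgb2gray), and the 3×3 mean filter is an assumed choice of smoothing filter, not necessarily the one used in the paper:

```python
import numpy as np

def rgb_to_gray(img):
    """Convert an RGB image (H x W x 3, values 0-255) to grayscale
    using the standard ITU-R BT.601 luminance weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return (img @ weights).round().astype(np.uint8)

def smooth(gray):
    """3x3 mean filter as a simple smoothing step (an assumed
    stand-in for the paper's smoothing filter)."""
    h, w = gray.shape
    padded = np.pad(gray.astype(float), 1, mode="edge")
    out = sum(padded[i:i + h, j:j + w]
              for i in range(3) for j in range(3)) / 9.0
    return out.round().astype(np.uint8)
```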

Fig.5. Screenshot of Input Image, Gray Image and Smoothened Image

At the preprocessing stage, the images are converted from RGB to gray level. Segmentation is then performed to locate the boundaries of the iris.

5.2 Iris Segmentation

The upper and lower eyelids must be removed before segmenting the iris region from the input image.

(a) Removal of eyelids




After the image is smoothened, the upper and lower eyelids are detected and removed by applying the Hough transform. The circular Hough transform is used to find the radii of the pupil and iris regions. The circular iris boundary edge map would be corrupted by the eyelid edge map, so the eyelids are removed, as shown in Figure 6 and Figure 7.

Fig. 6. Screenshot of Resized Image and Hough Transform Applied to Remove Upper Eyelid

Texture segmentation is adopted to detect upper and lower eyelids. Sobel edge detection is applied to the search regions to detect the eyelids.

Fig. 7. Screenshot of Hough Transform Applied to Remove Lower Eyelid

Iris localization is the process of isolating the required iris region from the database image. Canny edge detection is performed to detect the edges, and the Hough transform is applied to determine the upper and lower eyelids. The eyelashes are then eliminated using a threshold value.

(b) Segmentation of iris, sclera and pupil

The input image is then segmented into its three parts, iris, sclera, and pupil, as shown in Figure 8. This algorithm performs a segmentation of the iris region from an eye image.
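The circular Hough transform used for the pupil and iris boundaries works by letting every edge pixel vote for all the circle centres it could lie on; the accumulator peak gives the best centre. A minimal fixed-radius sketch in NumPy (a real implementation would also sweep over a range of candidate radii):

```python
import numpy as np

def circular_hough(edges, radius):
    """Each edge pixel votes for candidate centres lying at the given
    radius from it; the accumulator peak is the best circle centre."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered vote accumulation
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic demonstration: an edge circle of radius 10 centred at (32, 32)
edges = np.zeros((64, 64), dtype=bool)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.round(32 + 10 * np.sin(t)).astype(int),
      np.round(32 + 10 * np.cos(t)).astype(int)] = True
center = circular_hough(edges, 10)
```

The recovered `center` lands on (or within a pixel of) the true centre (32, 32), since only the true centre receives votes from essentially every edge pixel.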

Figure 8: Screenshot of Segmented Pupil, Iris and Sclera



5.3 Feature Extraction from Images

Modified Local Ternary Pattern analysis, the proposed method, is applied to each region of each image at multiple scales to produce feature values. The pupil, iris, and sclera all have significantly different appearances, so the pattern analysis is performed separately for each region. The Gabor filter is an effective technique for extracting features to analyze texture and detect edges; the boundaries of the iris in the input images are identified from the magnitude coefficients of the Gabor filter. The modified LTP provides the feature values as three-level codes, which are then converted into decimal values. The feature vector is formed after the histogram of pixels is evaluated, and a feature vector is calculated for each input image. The feature vectors obtained from the images are histograms, to which the neural network classifier is then applied.

5.4 Classification Result

The obtained features are stored in an array and provided as input to the BPN. All the necessary parameters, such as the number of input-layer and hidden-layer neurons, epochs, learning rate, performance goal, and target, are initialized. The neural network is trained on the feature values and saved. Since a neural network is used here, a performance plot can be obtained for a particular classification of images, as shown in Figure 9.

Figure 9: Screenshot of Performance Plot of Neural Network

The performance function used here is MSE. The MSE decreases as the number of hidden layers increases, and training reduces it until the desired result is obtained.

Figure 10: Screenshot of Classification Output

The classification output after using the neural network is shown in Figure 10. Using this algorithm, it can be detected whether a person is wearing a colored lens, a transparent lens, or no lens.



5.5 Comparison of MLBP and MLTP Algorithms

The MLBP and MLTP algorithms are compared on one image from each of three categories (colored lens, normal eye, and transparent lens) to show that the MLTP algorithm gives the best results.

Table 1: Comparison of Parameters for Colored Lens

Parameter       MLBP       MLTP
Feature Value   4.0854     7.3679
MSE             70.7012    52.7146
PSNR (dB)       29.6705    30.9455

Table 2: Comparison of Parameters for Normal Eye

Parameter       MLBP       MLTP
Feature Value   4.1385     7.5668
MSE             131.6501   103.8669
PSNR (dB)       26.97      28.00

Table 3: Comparison of Parameters for Transparent Lens

Parameter       MLBP       MLTP
Feature Value   3.9304     7.7118
MSE             150.1045   110.1157
PSNR (dB)       26.4009    27.7485
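The MSE and PSNR columns in the tables are related by the standard definition of PSNR for 8-bit images, PSNR = 10·log10(255²/MSE); applying it to the tabulated MSE values closely reproduces the tabulated PSNR values (small differences presumably come from intermediate rounding):

```python
import math

def psnr(mse, peak=255.0):
    """PSNR in dB computed from MSE, assuming 8-bit images (peak 255)."""
    return 10.0 * math.log10(peak ** 2 / mse)

# MSE values from Table 1 (colored lens); the tabulated PSNRs are
# 29.6705 dB (MLBP) and 30.9455 dB (MLTP) respectively.
psnr_mlbp = psnr(70.7012)   # approx. 29.64 dB
psnr_mltp = psnr(52.7146)   # approx. 30.91 dB
```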

VI. CONCLUSION

Wearing contact lenses degrades the accuracy of iris recognition, and textured contact lenses are an effective way for someone on an iris recognition watch list to evade detection and gain unauthorized access. In this work, a novel lens detection algorithm, the modified LTP, is applied to obtain texture features, and a feed forward neural network classifier is then applied to classify lenses based on the feature values. Textured contact lenses can thus be detected and filtered out of the stream for automated iris recognition, which helps alleviate spoofing attempts. After detecting the presence of a contact lens, a comparison between the existing and proposed methods was performed. The proposed method increases recognition accuracy by reducing the MSE value and improving the feature extraction and PSNR values for each image, so that images with textured contact lenses can be removed from the iris recognition system. Further research must be conducted to obtain better results in real-time recognition systems.

REFERENCES

[1] Flom, L., and Safir, A., Iris Recognition System, U.S. Patent 4,641,349, Feb. 3, 1987.


[2] Daugman, J., How Iris Recognition Works, IEEE Trans. Circuits Syst. Video Technol., Vol. 14, No. 1, pp. 21-30, Jan. 2004.

[3] Kim, J., Lee, E.C., and Park, K.R., Fake Iris Detection by Using Purkinje Image, in Proc. IAPR Int. Conf. Biometrics, 2006, pp. 397-403.

[4] An, S., He, X., and Shi, P., Statistical Texture Analysis-Based Approach for Fake Iris Detection Using Support Vector Machines, in Proc. IAPR Int. Conf. Biometrics, 2007, pp. 540-546.

[5] Bosch, A., Munoz, X., and Zisserman, A., Representing Shape with a Spatial Pyramid Kernel, in Proc. 6th ACM Int. Conf. Image Video Retr., 2007, pp. 401-408.

[6] Sun, Z., Tan, T., Wei, Z., and Qiu, X., Counterfeit Iris Detection Based on Texture Analysis, in Proc. Int. Conf. Pattern Recognit., 2008, pp. 1-4.

[7] He, Z., Sun, Z., Tan, T., and Wei, Z., Efficient Iris Spoof Detection via Boosted Local Binary Patterns, in Advances in Biometrics, Springer-Verlag, New York, NY, USA, 2009, pp. 1080-1090.

[8] Sun, Z., Tan, T., and Zhang, H., Contact Lens Detection Based on Weighted LBP, in Proc. 20th Int. Conf. Pattern Recognit., 2010, pp. 4279-4282.

[9] Yadav, D., Doyle, J.S., and Vatsa, M., Unraveling the Effect of Textured Contact Lenses on Iris Recognition, IEEE Trans. Inf. Forensics and Security, Vol. 9, 2014, pp. 851-862.

[10] Ciba Vision, Duluth, GA, USA. (2013, Apr.). Freshlook Color Blends [Online]. Available:

[11] Bausch & Lomb, Rochester, NY, USA. (2014, Jan.). Bausch & Lomb Lenses [Online]. Available:

[12] Srikant, S.K., and Manjunath, T.C., A Novel Improved Technique of Image Indexing for Efficient Content Based Image Retrieval Using Local Patterns, IJETT, Vol. 12, 2014.

[13] Prathiba, T., Soniah, G., and Darathian, Efficient Content Based Image Retrieval Using Local Tetra Pattern, IJAREEIE, Vol. 2, 2013, pp. 5040-5046.

[14] Godara, S., and Gupta, R., Neural Networks for Iris Recognition: Comparisons Between LVQ and Cascade Forward Back Propagation Neural Network Models, Architectures and Algorithm, IOSRJEN, Vol. 3, 2012, pp. 07-10.

[15] Shrimal, G., and Rathi, R., IRIS Identification Based on Multilayer Feed Forward Neural Network, IJLTEMAS, Vol. 2, 2013, pp. 2278-2540.