
Feature Level Fusion of Palm Veins and Signature Biometrics

Hassan Soliman, Faculty of Engineering, Mansoura University, Mansoura, Egypt, [email protected]

Abdelnasser Saber Mohamed, Information Systems Department, El-Mehalla Technical College, Mansoura, Egypt, [email protected]

Ahmed Atwan, Faculty of Computer & Information Sciences, Mansoura University, Mansoura, Egypt, [email protected]

Abstract- Traditional biometric systems that are based on a single biometric trait usually suffer from problems such as imposter attacks, unacceptable error rates, and low performance, hence the need for multimodal biometric systems. In this paper, a study of multimodal palm vein and signature identification is presented. Features of both modalities are extracted using morphological operations and the Scale Invariant Feature Transform (SIFT) algorithm, and a comparison of the two methods is developed. Feature level fusion of both modalities is achieved using a simple sum rule. The fused feature vectors are subjected to the discrete cosine transform (DCT) to reduce their dimensionality. A Learning Vector Quantization (LVQ) classifier with varied parameters is used to classify the different people in the database. Preliminary results are encouraging and support using palm veins and signature as a robust and reliable identification system.

Key Words: Multimodal biometrics, feature level fusion, Palm veins recognition, Signature recognition, Morphological operations, SIFT algorithm, DCT, LVQ classifier.

I. INTRODUCTION

Multimodal biometric systems provide anti-spoofing measures by making it difficult for intruders to spoof multiple biometric traits simultaneously [1]. The benefit of multimodal biometrics may become even more evident in the case of a larger database of users. Palm vein recognition performs well by itself, with a high acceptance rate and a low false acceptance rate, but the growth of online transactions and communication demands ever stronger security against imposters and hackers [2]. By combining multiple modalities, enhanced performance, more security, and greater reliability can be achieved.

In our proposed approach for identification based on multimodal palm vein and signature, the features of both modalities were extracted using morphological operations and the SIFT algorithm, and a comparison of the two methods was developed. The two modalities' features can be fused at three different levels: the feature level, the matching level, or the decision level [3]. In this paper, the focus is on fusion at the feature level, which is believed to be very promising since feature sets can provide more information about the input biometrics than the other levels. Feature level fusion refers to combining different feature vectors obtained from multiple sensors or multiple feature extraction algorithms. We used Fujitsu's PalmSecure™ scanner to obtain the vein pattern database and collected regular signatures from employees of the Center of Scientific Computing at Mansoura University. Each modality yields its own feature vector. These feature vectors contain non-homogeneous features, so the feature vectors of both modalities must be normalized before being concatenated into a single vector. Concatenating two feature vectors may result in a feature vector of very large dimensionality [4]. In this paper, the discrete cosine transform (DCT) is used to reduce the dimensionality of both modalities' feature vectors.

II. REVIEW OF RELATED WORKS

A number of studies showing the advantages of multimodal biometric fusion have appeared in the literature. Brunelli and Falavigna [5] used the hyperbolic tangent (tanh) for normalization and a weighted geometric average for the fusion of voice and


face biometrics. They also proposed a hierarchical combination scheme for a multimodal identification system. Kittler et al. [6] experimented with several fusion techniques for face and voice biometrics, including the sum, product, minimum, median, and maximum rules, and found that the sum rule outperformed the others. Kittler et al. [6] note that the sum rule is not significantly affected by probability estimation errors, which explains its superiority.


Hong and Jain [7] proposed an identification system based on face and fingerprint, where fingerprint matching is applied after pruning the database via face matching. Ben-Yacoub et al. [8] considered several fusion strategies, such as support vector machines, tree classifiers, and the multi-layer perceptron, for face and voice biometrics; the Bayes classifier was found to be the best method. Ross and Jain [9] combined face, fingerprint, and hand geometry biometrics with sum, decision tree, and linear discriminant-based methods, and report that the sum rule outperforms the others.

This paper is organized as follows: the first section is the introduction; the second reviews related works; the third investigates a multimodal biometric system using palm vein and signature recognition and discusses the two feature extraction methods; the fourth section presents the experimental results based on the LVQ classifier; and the last section presents conclusions.

III. MULTIMODAL BIOMETRIC SYSTEM

Most of the problems and limitations of biometrics are imposed by unimodal biometric systems, which rely on the evidence of only a single biometric trait. Some of these problems may be overcome by multibiometric systems together with an efficient fusion scheme to combine the information presented in multiple biometric traits. In this paper, we introduce a novel multimodal biometric system using palm vein and signature modalities. The framework in Figure 1 explains the workflow of the system.

Fig.1. Workflow of the proposed system.

A. Palm Veins Recognition System

This system uses an infrared beam to penetrate the user's hand as it is held over the sensor; the veins within the palm of the user are returned as black lines. Palm vein authentication has a high level of authentication accuracy due to the uniqueness and complexity of the vein patterns of the palm. Because the palm vein patterns are internal to the body, this method is difficult to forge. In addition, the system is contactless and hygienic for use in public areas [2].

1. Palm vein authentication based on morphological operations

1.1 Vein pattern

The sensing technology used for vein patterns is based on near-infrared spectroscopy (NIRS) and imaging, and has been developed through in vivo measurements over the last ten or so years. That is, the vein pattern in the subcutaneous tissue of the palm is captured using near-infrared rays [2]. In this work, we used Fujitsu's PalmSecure™ scanner for sensing and acquisition of the images tested in our research. When the user holds his/her palm over the palm vein authentication sensor, the infrared image containing the palm vein pattern is captured (see Fig. 2).


Fig. 2. (a) Capture device; (b) an infrared palm image captured by the PalmSecure™ scanner.

The workflow in Figure 3 shows the whole process of the identification system using palm vein biometrics based on morphological operations.

Fig.3. Palm vein authentication operations workflow diagram based on morphological operations.

1.2 ROI extraction from palm vein patterns

After image capture, a small area (128×128 pixels) of the palm image is located as the region of interest (ROI), from which the features are extracted and different palms are compared. Using only the features within the ROI improves the computational efficiency significantly. Further, because this ROI is located in a normalized coordinate system based on the palm boundaries, the recognition error caused by a user who slightly rotates or shifts his/her hand is minimized. Figure 4 illustrates the ROI-locating procedure:

1) Binarize the input image (Fig. 4(a));
2) Obtain the boundaries of the finger gaps, (Fxj, Fyj) (Fig. 4(b));
3) Compute the tangent of the two gaps (Fig. 4(b)) and use this tangent (the line connecting (x1, y1) and (x2, y2)) as the Y-axis of the palm coordinate system;
4) Use the line passing through the midpoint of (x1, y1) and (x2, y2) and perpendicular to the Y-axis as the X-axis (the line perpendicular to the tangent in Fig. 4(b));
5) Locate the ROI as a square of fixed size whose center has a fixed distance to the palm coordinate origin (Fig. 4(c));
6) Extract the subimage within the ROI (Fig. 4(d)).

Fig. 4. Locating the ROI. (a) Binarized image; (b) boundaries and ROI locating; (c) ROI locating; (d) the subimage within the ROI.

1.3 Image processing steps

In image-based biometric systems, a number of processing tasks are usually applied to produce a better-quality image that will be used at a later stage as the input image, ensuring that the relevant information can be detected [10]. In this paper, we use Matlab functions and the Image Processing Toolbox for all the required image processing. We follow the lines set out by Gao et al. [11] for fingerprint recognition and subsequently applied to hand veins by Malki et al. [12].


Normally, the captured hand-vein pattern is grayscale and subject to noise. Noise reduction and contrast enhancement are crucial to ensure the quality of the subsequent feature extraction steps [12]. This is achieved by means of: (a) binarization, which transforms the grayscale pattern into a black-and-white image; (b) skeletonization, which reduces the width of the lines to one pixel; and (c) isolated pixel removal, which eliminates unwanted isolated points. These three steps constitute the image preprocessing procedure, as shown in Figure 5.

Fig.5. Three steps of image preprocessing.
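The paper performs these steps with MATLAB's Image Processing Toolbox; below is a minimal sketch of the same three-step pipeline using scikit-image, with Otsu thresholding assumed for the binarization step (the paper does not name its thresholding method).

import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize, remove_small_objects

def preprocess(gray):
    # (a) Binarization: grayscale pattern -> black-and-white image
    #     (veins are assumed darker than the background).
    binary = gray < threshold_otsu(gray)
    # (b) Skeletonization: reduce line width to one pixel.
    skeleton = skeletonize(binary)
    # (c) Isolated pixel removal: drop single unwanted points.
    return remove_small_objects(skeleton, min_size=2)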

1.4 Palm vein features extraction

Hand vein patterns have two main features: endings (end points) and bifurcations (branch points). The former is the end point of a thinned line, while the latter is the junction point of three lines. Figure 6 illustrates the idea.

Fig.6. Palm vein features: endings and bifurcations.

The detection of bifurcations and endings in the preprocessed image can be performed in parallel. The intermediate results are merged by a simple logical OR before false features are eliminated. Figure 8 illustrates all the steps of palm vein feature extraction.
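A standard way to implement this detection (a common formulation, not taken verbatim from the paper) is to count the 8-neighbours of every skeleton pixel: endings have exactly one neighbour, bifurcations three or more. The sketch below also performs the OR-merge shown in Fig. 8(i).

import numpy as np
from scipy.ndimage import convolve

def vein_features(skeleton):
    sk = skeleton.astype(np.uint8)
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(sk, kernel, mode='constant')
    endings      = (sk == 1) & (neighbours == 1)   # end points
    bifurcations = (sk == 1) & (neighbours >= 3)   # branch points
    return endings | bifurcations                  # simple OR merge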

Fig.8. (a) Original image containing the vein pattern; (b) after binarization; (c) after filtering; (d) after skeletonization; (e) after isolated pixel removal; (f) after thinning; (g) the bifurcations; (h) the endings; (i) the OR operation between bifurcations and endings (merging).

B. Off-Line Signature Recognition

Signature verification is an important research area in the authentication of persons as well as documents in e-commerce and banking. We can generally distinguish between two categories of signature verification systems: online, for which the signature signal is captured during the writing process, making the dynamic information available; and offline, for which the signature is captured once the writing process is over, so that only a static image is available. In this paper, we deal with an off-line signature verification system. We design a system capable of verifying the authenticity of a signature based on tests performed against genuine signatures (verification mode) and of identifying a person from a signature (recognition mode) [13]. Various approaches to signature recognition are possible, with much scope for research. In this paper, we deal with an off-line signature recognition technique, where the signature is captured and presented to the user in the format


of an image only. We use various image-processing techniques to extract the parameters of signatures and verify a signature based on these parameters. Signature recognition is a two-class pattern classification problem, where authentic signatures belong to one class and forged signatures belong to the other class [13]. Figure 9 illustrates the signature authentication operations workflow diagram.

1. Signature Authentication based on Morphological Operations Workflow

Fig.9. Signature authentication operations workflow diagram.

1.1 Signatures Preprocess

In this paper, we divided the preprocessing procedures into image normalization processes and image processing processes. The preprocessing procedures are applied to both training and testing images as follows:

• Image normalization

The signature images must be normalized to predetermined standards because of the variation they may exhibit. We apply the following processes to normalize the images (see the sketch after Figure 11):
a) Resizing: signature dimensions have intrapersonal and interpersonal differences, so the image should be adjusted to a default size; we propose 175 × 200 pixels.
b) Rotating: moving the signature to the origin and recomputing the new coordinates.
c) Stripping out the white area: removing the non-data pixels surrounding the image.

Fig.10. (a) Original scanned signature image; (b) image rotated to its original coordinates; (c) image without its surrounding white area.

• Image processing

The purpose of this phase is to make the signature images ready for the feature extraction phase. The processing stage includes the following:
a) Binarization: converts the image to its black-and-white pixel equivalent.
b) Noise elimination: removes single black pixels on a white background.
c) Skeletonization: reduces the width of lines to one pixel.
Figure 11 shows these operations.

Fig.11. (a) Binarized image; (b) after noise filtration; (c) after skeletonization.
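A minimal sketch of normalization steps (a) and (c), resizing and stripping the white border, assuming a grayscale scan with dark ink; the rotation step (b) is omitted, the ink threshold of 200 is a hypothetical choice, and the 175 × 200 size from the text is read here as height × width.

import numpy as np
import cv2

def normalize_signature(gray, size=(200, 175)):   # cv2 takes (width, height)
    ink = gray < 200                               # assumed ink threshold
    ys, xs = np.nonzero(ink)
    # Strip the surrounding white (non-data) area.
    cropped = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Resize to the default 175 x 200 template.
    return cv2.resize(cropped, size, interpolation=cv2.INTER_AREA)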


1.2 Signatures Features Extraction

In this phase, we focus on extracting global features, which provide information about specific aspects of the signature shape (a code sketch follows the list):
1) Signature height-to-width ratio: approximately constant across a person's signatures; obtained by dividing the height by the width of the signature image.
2) Signature area: provides information about the density of the signature's pixels.
3) Maximum horizontal histogram (MHH) and maximum vertical histogram (MVH): the horizontal histogram is calculated for each row, and the highest value is taken as the MHH; the vertical histogram is calculated for each column, and the highest value is taken as the MVH.
4) Number of end points of the signature: an end point is a pixel that has only one neighbor.
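A sketch of the four global features, computed on the preprocessed (binary, skeletonized) signature image from the previous subsection; packing them into a single vector is an assumption about how the feature vector is formed.

import numpy as np
from scipy.ndimage import convolve

def global_features(skel):
    # skel: boolean skeletonized signature image
    h, w = skel.shape
    ratio = h / w                          # 1) height-to-width ratio
    area = int(skel.sum())                 # 2) signature area (ink pixel count)
    mhh = int(skel.sum(axis=1).max())      # 3) maximum horizontal histogram (rows)
    mvh = int(skel.sum(axis=0).max())      #    maximum vertical histogram (columns)
    # 4) end points: skeleton pixels with exactly one 8-neighbour
    k = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    ends = int((skel & (convolve(skel.astype(np.uint8), k,
                                 mode='constant') == 1)).sum())
    return np.array([ratio, area, mhh, mvh, ends], dtype=float)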

Fig.12. (a) Processed signature; (b) maximum vertical histogram; (c) maximum horizontal histogram; (d) end points; (f) intermediate signature merging.

C. Multimodal Biometrics Fusion based on Morphological Operations

There are many levels of biometric fusion [14]: sensor level, feature level, match score level, rank level, and decision level. At the feature level, the feature sets extracted from multiple data sources are fused to create a new feature set that represents the individual. This may result in a new high-dimensional feature vector. In order to reduce the high dimensionality of the feature vectors, we used the DCT (Discrete Cosine Transform). The 2D discrete cosine transform (DCT) of an M×N image f(x, y) is defined as [15]:

C(u,v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,\cos\!\left[\frac{(2x+1)u\pi}{2M}\right]\cos\!\left[\frac{(2y+1)v\pi}{2N}\right]

where \alpha(u) = \sqrt{1/M} for u = 0 and \sqrt{2/M} otherwise, and analogously \alpha(v) with N.
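Since the fused feature vectors here are one-dimensional, the 1-D DCT (the separable analogue of the 2-D definition above) can be used for reduction: transform the vector and keep only the first k low-frequency coefficients, which carry most of the energy. The choice of k is not stated in the paper and is hypothetical.

import numpy as np
from scipy.fft import dct

def reduce_dct(fused_vector, k=64):
    # Orthonormal DCT-II of the fused feature vector.
    coeffs = dct(np.asarray(fused_vector, dtype=float), norm='ortho')
    # Keep the first k low-frequency coefficients as the reduced vector.
    return coeffs[:k]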

Biometric multimodality can be studied as a classifier combination problem [6]. Kittler et al. [6] considered the task of combining classifiers in a probabilistic Bayesian framework. Several ways to merge the modalities (sum, product, max, min) are obtained from Bayes' theorem under certain hypotheses, among which the sum rule outperformed the remainder in the experimental comparison [16]. In this paper, we apply the sum rule to concatenate the features from the two modalities (palm veins and off-line signatures) and then introduce the new feature vectors to the LVQ classifier. We employ the procedure of Ross and Govindarajan [17], in which feature level fusion is accomplished by a simple concatenation of the feature sets obtained from the multiple modalities. Let X = {x1, x2, ..., xm} and Y = {y1, y2, ..., yn} denote feature vectors (X ∈ R^m and Y ∈ R^n) representing the information extracted from two different sources. The objective is to combine these two feature sets to yield a new feature vector Z that better represents the individual. The vector Z is generated by first augmenting the vectors X and Y and then performing feature selection on the resulting feature vector. The fusion of feature level data from any two biometric sources in this paper follows the same procedure. The different stages of this algorithm are described below.

1. Feature Normalization

In our experiments, we used the median normalization scheme due to its robustness to outliers. Normalizing the feature values via this technique results in modified feature vectors [17]. The median normalization scheme is relatively robust to the presence of noise


in the training data. In this case, x' is computed as

x' = \frac{x - \mathrm{median}(F_x)}{\mathrm{median}\big(\,|x - \mathrm{median}(F_x)|\,\big)}

where F_x is the set of feature values that generates x. The denominator is known as the Median Absolute Deviation (MAD) and is an estimate of the scale parameter of the feature value.
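A minimal sketch of this normalization for one feature dimension, where Fx is the vector of that feature's values over the training set:

import numpy as np

def median_normalize(x, Fx):
    med = np.median(Fx)
    mad = np.median(np.abs(Fx - med))   # Median Absolute Deviation
    return (x - med) / mad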

2. Feature Selection

Augmenting the two feature vectors x' and y' results in a new feature vector Z' = {x'1, x'2, ..., x'm, y'1, y'2, ..., y'n}, Z' ∈ R^(m+n). The curse of dimensionality dictates that the augmented vector need not necessarily result in improved matching performance. Further, some of the feature values may be noisy compared to the others. The feature selection process entails choosing a minimal feature set of size k, k < (m + n), that improves classification performance on a training set of feature vectors. The sequential forward floating selection technique is employed to perform feature selection on the feature values of Z'. This results in a new feature vector Z = {z1, z2, ..., zk}. The criterion function used for feature selection is the average of the Genuine Accept Rate (GAR) at four different False Accept Rate (FAR) values (0.05%, 0.1%, 1%, 10%) on the ROC (Receiver Operating Characteristic) curve pertaining to the training data; the rationale for this criterion is given in [17].
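The GAR-based criterion and the floating variant are specific to [17]; as an approximation, plain sequential forward selection is available in scikit-learn, sketched here with a k-NN classification score standing in for the GAR-based criterion.

from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

def select_features(Z_train, y_train, k):
    # Greedy forward selection of k features on the augmented vectors Z'.
    sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=3),
                                    n_features_to_select=k,
                                    direction='forward')
    sfs.fit(Z_train, y_train)
    return sfs.get_support(indices=True)   # indices of the k selected features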

Figure 13 shows the intermediate fusion of the palm vein features (X'), the intermediate fusion of the signature features (Y'), and the final fusion matrix of both modalities based on morphological operations.

Fig.13. Morphological feature fusion of both modalities: X' (palm vein features) + Y' (signature features) = Z' (fusion matrix).

D. Palm Veins and Signature Features Extraction based on SIFT Algorithm

The Scale-Invariant Feature Transform (SIFT) is an algorithm in computer vision to detect and describe local features in images, published by David Lowe in 1999. SIFT detects and extracts local feature descriptors that are reasonably invariant to changes in illumination, image noise, rotation, scaling, and small changes in viewpoint [18]. The SIFT algorithm was deployed to extract features from the two modalities (see Figure 14).

Fig.14. SIFT algorithm overview diagram [19].

In this work, we considered the spatial, orientation, and keypoint descriptor information of each extracted SIFT point. For palm veins, the input to the system is the palm vein image; the output is the set of extracted SIFT features sp = (sp1, sp2, ..., spn), where each feature point spi = (x, y, θ, k) consists of the (x, y) spatial location, the local orientation θ, and the key descriptor k of size 1×128. For signatures, the input to the system is the signature image; the output is the set of extracted SIFT features ss = (ss1, ss2, ..., ssm), where each feature point ssi = (x, y, θ, k) consists of the (x, y) spatial location, the local orientation θ, and the key descriptor k of size 1×128.

1. Multimodal Biometrics Fusion based on SIFT Keypoints

1.1 Feature set normalization

The keypoint descriptors of the palm vein and signature points are then normalized using the min-max normalization technique (SPnorm and SSnorm) to scale all 128 values of each keypoint descriptor within the range 0 to 1. This normalization can also apply the same threshold


on the palm vein and signature keypoint descriptors when the corresponding pair of points is found for matching the fused pointsets of the database and query palm vein and signature images [19].

1.2 Feature Concatenation and Reduction

The feature level fusion is performed by concatenating the two feature pointsets (a sketch is given below). This results in a fused feature pointset concat = (SP1norm, SP2norm, ..., SPnnorm, SS1norm, SS2norm, ..., SSmnorm). A feature reduction strategy to eliminate irrelevant features can be applied either before or after feature concatenation. In this paper, the DCT (Discrete Cosine Transform) is used to reduce the dimensionality of the concatenated feature vector.
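A minimal sketch of the per-descriptor min-max scaling and the concatenation into one fused pointset; flattening to a single vector before the DCT reduction step (reduce_dct above) is an assumption.

import numpy as np

def minmax_rows(desc):
    # Scale the 128 values of each keypoint descriptor into [0, 1].
    lo = desc.min(axis=1, keepdims=True)
    hi = desc.max(axis=1, keepdims=True)
    return (desc - lo) / (hi - lo)

def fuse_pointsets(sp, ss):
    # sp: (n, 128) palm vein descriptors; ss: (m, 128) signature descriptors.
    fused = np.vstack([minmax_rows(sp), minmax_rows(ss)])
    return fused.ravel()   # single concatenated vector, ready for DCT reduction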

1.3 SIFT Feature Matching

1) Find the nearest neighbor in a database of SIFT features from training images.
2) For robustness, use the ratio of the distance to the nearest neighbor to the distance to the second-nearest neighbor.
3) The nearest neighbor is the one with minimum Euclidean distance, which is an expensive search.
4) Use an approximate, fast method to find the nearest neighbor with high probability [19].
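A sketch of these four steps using OpenCV (assuming opencv-python ≥ 4.4, where SIFT lives in the main module): FLANN provides the approximate fast nearest-neighbour search, and Lowe's ratio test implements step 2.

import cv2

def match_sift(query_img, train_img, ratio=0.75):
    sift = cv2.SIFT_create()
    _, q_desc = sift.detectAndCompute(query_img, None)
    _, t_desc = sift.detectAndCompute(train_img, None)
    # FLANN with KD-trees: fast, approximate nearest-neighbour search.
    flann = cv2.FlannBasedMatcher({'algorithm': 1, 'trees': 5}, {'checks': 50})
    matches = flann.knnMatch(q_desc, t_desc, k=2)
    # Keep a match only if it is clearly closer than the second-best neighbour.
    return [m for m, n in matches if m.distance < ratio * n.distance]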

Fig.15. SIFT keypoint matching for (a) a palm vein pattern and (b) a signature image.

IV. EXPERIMENTAL RESULTS

A. Database Description

We collected our own database from employees of the Center of Scientific Computing at Mansoura University, who have different jobs, genders, and ages. 37 signers signed the form we designed to collect 10 signatures from each person; therefore, we have 370 signatures in our signature database. Using the PalmSecure scanner, we collected 5×37 = 185 palm vein images from the 37 persons. For the signature images, we first perform normalization and then the image processing steps to enhance the database. We then filter the images by computing the coefficient variance for each signer and applying a threshold: signers above it (those without sufficient quality, i.e., consistency, in signing) are discarded and the rest accepted. After this database filtration, we selected 30 consistent signers. We used 5 signatures per signer in the training phase and 5 in the testing phase. We designated 30 people as genuine signers and the other 7 as imposters; likewise, we designated 30 palm vein images as genuine and the other 7 as imposters. By fusing the 5 palm vein images with the 5 signature images in both the training and testing phases, we obtain the fused features, introduce them to the DCT, and then to the classifier.

B. LVQ Classifier

The LVQ (Learning Vector Quantization) network, which consists of an input layer and an output layer, is used as the basic building block of our classifier. The input layer consists of a number of neurons corresponding to the number of training patterns the network will be trained on. For example, with 30 fused feature vectors (i.e., 30 classes) and 5 training patterns for each, we have 150 input neurons, each corresponding to an input pattern. The output layer consists of a specified number of neurons per class; with 3 neurons per class and 30 classes, we have 30 × 3 = 90 output neurons.

C. Output layer initialization

The neurons in the output layer may be initialized to random values, initialized to one of the training samples of the class each neuron represents, or initialized to the mean of the training samples for that class. The last method is the best because it reduces the training time and is more reliable, since the mean is unbiased toward any particular training sample.

D. Training the network

Each pattern in the input layer has a response in the output layer. The neuron in the output layer with the minimum Euclidean distance from the pattern is considered the winner. The winner may or may not represent the correct class; in either case, the winner is trained. If it represents the correct class, its weights are moved toward the pattern; otherwise, they are moved away from it.
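A minimal sketch of this LVQ1 update step; weights holds one row per output neuron and labels their class assignments (the names are illustrative, not from the paper).

import numpy as np

def lvq_update(weights, labels, pattern, target, alpha):
    # Winner: output neuron with minimum Euclidean distance to the pattern.
    winner = np.argmin(np.linalg.norm(weights - pattern, axis=1))
    # Move toward the pattern if the class is correct, away otherwise.
    direction = 1.0 if labels[winner] == target else -1.0
    weights[winner] += direction * alpha * (pattern - weights[winner])
    return winner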


This process is repeated for all patterns in the input layer, which makes one epoch of training. A large number of epochs (100 epochs) are performed. Each epoch changes the weights of a number of neurons, making them stronger at classifying a pattern or at not being confused with patterns of other classes. Finally, the network is tested with the testing set of patterns: each pattern is classified by measuring the distance between it and each neuron in the trained output layer, and the class of the winner is taken as the class of the pattern. We now build 60 classifiers, which use the same network architecture but different parameters. We varied 3 parameters:
1. The learning rate (which controls how far a neuron moves toward or away from a pattern): 5 values.
2. The number of training epochs: 4 values.
3. The number of neurons representing each class: 3 values.
Therefore, we have 5×4×3 = 60 classifiers. Each classifier is tested against the training set and the test set, and gives an average classification strength for each class, forming the following table:

TABLE I
60 Classifiers' Voting Strength for 8 Classes

Classifier No. [learning rate, training epochs, neurons per class]   Voting strength for classes
Classifier 1  [0.05, 40, 1]     0.6  0.8  0.7  1    0.8  0.8  0.8  0.6
Classifier 2  [0.05, 80, 3]     0.8  0.8  0.5  0.9  0.8  0.8  0.7  0.4
...
Classifier 19 [0.1, 120, 1]     0.6  0.8  0.7  0.8  0.9  0.8  0.8  0.6
...
Classifier 60 [0.25, 160, 5]    0.9  0.7  0.5  1    0.8  0.7  0.7  0.3

This table shows the relation between the 60 classifiers and their average strength in individually classifying 8 classes. Each classifier is shown along with its training parameters enclosed in square brackets [learning rate, training epochs, neurons per class]. Each classifier gives a vote for a class, so the class with the greatest total vote becomes the class of the given pattern. To do so, we combined the classifiers, as the following example explains (a code sketch follows Table III):

TABLE II
Votes for Some Classes by Some Classifiers

Classifier No.   Suggested class   Voting strength
1                13                0.8
2                17                0.7
3                12                1
4                13                0.9
5                17                0.8
6                12                0.9
7                12                0.9
8                12                1
9                13                0.7
10               17                0.8

In this table, classifier no. 1 suggested class 13 for the given pattern with vote 0.8, classifier no. 2 suggested class 17 with vote 0.7, classifier no. 3 suggested class 12 with vote 1, and so on. To obtain the final classification, we list the distinct classes and sum the votes given to each, as in the following table:

TABLE III
Result of Summing the Votes

Class   Total voting
12      3.8
13      2.4
17      2.3
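A sketch of this vote-summing combination; the data reproduce Table II and recover Table III's winner (class 12, total vote 3.8).

from collections import defaultdict

def combine_votes(votes):
    # votes: list of (suggested_class, voting_strength) pairs, one per classifier
    totals = defaultdict(float)
    for cls, strength in votes:
        totals[cls] += strength
    return max(totals, key=totals.get)

table2 = [(13, .8), (17, .7), (12, 1), (13, .9), (17, .8),
          (12, .9), (12, .9), (12, 1), (13, .7), (17, .8)]
assert combine_votes(table2) == 12   # class 12 wins with a total vote of 3.8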

The question that arises now is: how do we choose the combination of classifiers? The answer is that an accumulative classification test was used. We begin with a combination of semi-strong classifiers and test their average classification strength as a single combined classifier. We gradually add stronger classifiers and retest the average classification strength. After adding a sufficient number of classifiers, the classification strength becomes stable, and adding more classifiers no longer contributes to it. At this point, we have a robust and economical number of classifiers to use as the final combined classifier. From the following figure, we can deduce that when both modalities are fused, the Genuine Acceptance Rate (GAR) increases and the False Acceptance Rate (FAR) decreases. Table IV contains the GAR results of using palm veins and signature as separate and fused modalities.


TABLE IV
Variation of the System Recognition Accuracy when using Palm Veins and Signature as Separated and Fused Modalities

Alpha (learning rate)  Neurons per class  Training epochs  GAR (palm vein)  GAR (signature)  GAR (feature level fusion)
0.05   1   40    88.35%   90.77%   93.50%
0.05   1   80    88.58%   91.75%   94.52%
0.05   1   120   88.05%   92.73%   95.01%
0.05   1   160   89.13%   93.04%   95.98%
0.05   2   40    88.23%   91.25%   92.86%
0.05   2   80    88.45%   92.12%   94.85%
0.05   2   120   89.60%   92.93%   95.43%
0.05   2   160   89.69%   91.79%   94.66%
0.05   3   40    87.88%   93.76%   96.71%
0.05   3   80    87.98%   92.26%   94.90%
0.05   3   120   88.22%   93.60%   95.24%
0.05   3   160   89.38%   93.77%   95.99%
0.1    1   40    87.93%   91.76%   94.79%
0.1    1   80    88.47%   91.73%   94.01%
0.1    1   120   88.03%   92.78%   95.31%
0.1    1   160   88.98%   94.03%   95.98%
0.1    2   40    87.97%   91.93%   94.47%
0.1    2   80    89.27%   91.92%   94.82%
0.1    2   120   89.80%   93.16%   95.89%
0.1    2   160   89.91%   94.66%   96.14%
0.1    3   40    87.98%   94.36%   96.12%
0.1    3   80    88.10%   94.13%   97.03%
0.1    3   120   88.45%   94.04%   96.92%
0.1    3   160   90.00%   94.14%   96.72%
0.25   1   40    89.89%   91.02%   93.71%
0.25   1   80    90.34%   92.04%   94.84%
0.25   1   120   90.06%   93.26%   95.52%
0.25   1   160   90.91%   93.24%   95.62%
0.25   2   40    88.32%   92.15%   94.73%
0.25   2   80    90.39%   93.03%   95.40%
0.25   2   120   90.93%   93.14%   95.25%
0.25   2   160   90.80%   93.37%   95.02%
0.25   3   40    88.29%   93.79%   96.60%
0.25   3   80    88.46%   93.76%   95.99%
0.25   3   120   88.57%   95.68%   96.98%
0.25   3   160   89.59%   95.50%   96.96%

As shown above in Table IV, when both modalities are fused, the Genuine Acceptance Rate (GAR) increases and the False Acceptance Rate (FAR) decreases. Figure 16 illustrates the idea of the research and summarizes the above table.

Fig.16. The relation between GAR and FAR in the case of feature level fusion based on morphological operations.

E. SIFT Keypoints and Morphological Features Results for Both Modalities

As mentioned above, we used morphological operations to extract the two main features (bifurcations and end points) from palm veins and the global features from signatures, then concatenated the two modalities' features and introduced them to the DCT algorithm to reduce the dimensionality of the vector, obtaining the above results with the combined LVQ classifier. Here, a comparison between SIFT keypoints and morphological features after applying the DCT to both fused modalities is developed. Table V shows the variation of the system recognition accuracy based on SIFT keypoints and morphological features after applying the DCT, using the combined LVQ classifier.


TABLE V
Variation of the System Recognition Accuracy based on SIFT Keypoints and Morphological Features after applying DCT Using Combined LVQ Classifier

Alpha (learning rate)  Neurons per class  Training epochs  GAR (morphological features)  GAR (SIFT keypoint descriptors)
0.05   1   40    93.50%   95.54%
0.05   1   80    94.52%   96.56%
0.05   1   120   95.01%   97.05%
0.05   1   160   95.98%   97.88%
0.05   2   40    92.86%   94.76%
0.05   2   80    94.85%   96.75%
0.05   2   120   95.43%   97.33%
0.05   2   160   94.66%   96.56%
0.05   3   40    96.71%   98.78%
0.05   3   80    94.90%   96.97%
0.05   3   120   95.24%   97.31%
0.05   3   160   95.99%   98.06%
0.1    1   40    94.79%   96.86%
0.1    1   80    94.01%   96.08%
0.1    1   120   95.31%   97.38%
0.1    1   160   95.98%   98.05%
0.1    2   40    94.47%   96.54%
0.1    2   80    94.82%   96.77%
0.1    2   120   95.89%   97.84%
0.1    2   160   96.14%   98.09%
0.1    3   40    96.12%   98.07%
0.1    3   80    97.03%   98.98%
0.1    3   120   96.92%   98.87%
0.1    3   160   96.72%   98.67%
0.25   1   40    93.71%   95.66%
0.25   1   80    94.84%   96.89%
0.25   1   120   95.52%   97.57%
0.25   1   160   95.62%   97.67%
0.25   2   40    94.73%   96.78%
0.25   2   80    95.40%   97.45%
0.25   2   120   95.25%   97.30%
0.25   2   160   95.02%   97.07%
0.25   3   40    96.60%   98.65%
0.25   3   80    95.99%   98.04%
0.25   3   120   96.98%   99.03%
0.25   3   160   96.96%   99.01%

Figure 17 summarizes the above table. It indicates that the SIFT algorithm is more accurate and needs fewer preprocessing steps to identify people.

Fig.17. GAR comparison between SIFT features and morphological features (GAR using the combined LVQ based on morphological features vs. GAR using the combined LVQ based on SIFT keypoint descriptors).

V. CONCLUSION

In this paper, we have attempted to present new insights and experimental results for palm vein and signature recognition. We proposed different morphological techniques to enhance feature extraction from palm vein images and global feature extraction from signature images. In addition, we deployed the SIFT algorithm to extract features from the two modalities. We used the simple sum rule to fuse both modalities' features based on both morphological operations and SIFT keypoint descriptors. We applied the DCT algorithm to reduce the dimensionality of the feature vectors for both feature extraction techniques (morphological operations and the SIFT algorithm). The feature vectors after the DCT operation are ready to be introduced to the LVQ classifier, which has the following parameters: learning rate, training epochs, and neurons per class. We combined the 60 classifiers, summed their voting strengths, and chose the class with the highest vote as the class of the pattern. Finally, we can conclude that the SIFT algorithm is more accurate and needs fewer preprocessing steps to identify people.


References

1. S. Theodoridis and K. Koutroumbas, Pattern Recognition. Academic Press, 2008.
2. Fujitsu, "Palm Vein Authentication Technology," White Paper, March 2008.
3. M. H. Khan, R. K. Subramanian, and N. M. Khan, "Representation of hand dorsal vein features using a low dimensional representation integrating Cholesky decomposition," IEEE 2nd International Congress on Image and Signal Processing, vol. 3, Tianjin, China, 17-19 October 2009.
4. F. Kuo-Chin, C. L. Lin, and L. Win-Long, "Biometric verification using thermal images of palm-dorsa vein-patterns," 16th IPPR Conference on Computer Vision, Graphics and Image Processing (CVGIP 2003), Institute of Computer Science and Information Engineering, National Central University.
5. R. Brunelli and D. Falavigna, "Person identification using multiple cues," IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995.
6. J. Kittler and R. P. W. Duin, "The combining classifier: to train or not to train," Proceedings of the International Conference on Pattern Recognition, vol. 16, no. 2, pp. 765-770, 2002.
7. L. Hong and A. K. Jain, "Integrating faces and fingerprints for personal identification," IEEE Trans. PAMI, vol. 20, no. 12, pp. 1295-1307, 1998.
8. S. Ben-Yacoub, Y. Abdeljaoued, and E. Mayoraz, "Fusion of face and speech data for person identity verification," IEEE Trans. Neural Networks, vol. 10, no. 5, pp. 1065-1075, 1999.
9. A. Ross and A. K. Jain, "Information fusion in biometrics," Pattern Recognition Letters, vol. 24, no. 13, pp. 2115-2125, 2003.
10. D. Bhattacharyya, P. Das, K. Tai-Hoon, and B. S. Kumar, "Vascular pattern analysis towards pervasive palm vein authentication," Journal of Universal Computer Science, vol. 15, no. 5, 2009.
11. Q. Gao and G. S. Moschytz, "Fingerprint feature extraction using CNNs," European Conference on Circuit Theory and Design, pp. 97-100, Espoo, Finland, 2001.
12. S. Malki, Y. Fuqiang, and L. Spaanenburg, "Vein feature extraction using DT-CNNs," 10th International Workshop on Cellular Neural Networks and Their Applications, Istanbul, Turkey, pp. 1-6, 28-30 August 2006.
13. H. B. Kekre and V. A. Bharadi, "Signature recognition using cluster based global features," IEEE International Advance Computing Conference (IACC 2009).
14. P. Verlinde, G. Chollet, and M. Acheroy, "Multi-modal identity verification using expert fusion," Information Fusion, no. 1, pp. 17-33, Elsevier, 2000.
15. http://en.wikipedia.org/wiki/Discrete_cosine_transform
16. G. Rodriguez et al., "A comparative evaluation of fusion strategies for multimodal biometric verification," AVBPA Conference, 23 August 2010.
17. A. Ross and R. Govindarajan, "Feature level fusion using hand and face biometrics," Motorola Inc., Anaheim, CA 92806 USA, June 2011.
18. D. Lowe, "Object recognition from local scale-invariant features," Proceedings of the International Conference on Computer Vision, vol. 2, pp. 1150-1157, 1999.
19. A. Rattani, D. R. Kisku, M. Bicego, and M. Tistarelli, "Feature level fusion of face and fingerprint biometrics," arXiv:1002.2523v1, 12 Feb 2010.
