Face-based Document Image Retrieval System

Available online at www.sciencedirect.com

ScienceDirect

Procedia Computer Science 132 (2018) 659–668

www.elsevier.com/locate/procedia

International Conference on Computational Intelligence and Data Science (ICCIDS 2018)

Face-based Document Image Retrieval System

Umesh D. Dixit a and M. S. Shirdhonkar b

a Department of Electronics & Communication, B.L.D.E.A’s, V. P. Dr. P. G. Halakatti C.E.T, Bijapur – 586 103
b Department of Computer Science & Engineering, B.L.D.E.A’s, V. P. Dr. P. G. Halakatti C.E.T, Bijapur – 586 103

Abstract

Many documents such as passports, identity cards, property documents, voter IDs and certificates include the face image of a person. These types of documents are used in various off-line and on-line applications. Retrieval of such documents based on a face image is a useful application in government offices, various companies, and organizations. This paper proposes a method to retrieve documents pertaining to a person from a database based on the face image. The proposed method includes three steps: (i) detecting the face image in the query document, (ii) extracting the face image features, and (iii) retrieving all documents that contain a face similar to that of the query document. The proposed method extends the concept of the grey level co-occurrence matrix to color images for feature extraction. Experiments are carried out on a database of 810 document images, in which the face images are borrowed from face94 [1]. A Mean Average Precision (MAP) of 82.66% is achieved with the proposed method, and the results are compared with the current state of the art, including Haralick features associated with the Grey Level Co-occurrence Matrix (GLCM) [2]. It is found that the results are encouraging compared with existing methods.

© 2018 The Authors. Published by Elsevier Ltd.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/3.0/)
Peer-review under responsibility of the scientific committee of the International Conference on Computational Intelligence and Data Science (ICCIDS 2018).

Keywords: Face image detection; GLCM; Ranking and retrieval; Document image retrieval; Face-based retrieval

* Umesh D. Dixit
E-mail address: [email protected]

1877-0509 © 2018 The Authors. Published by Elsevier Ltd.
10.1016/j.procs.2018.05.065


1. Introduction

Content-based document image retrieval is an active research area that has attracted many researchers over the last two decades. The goal of document image retrieval is to help the community search, browse and retrieve documents based on various criteria. Recently, a large number of methods based on title, logo, signature, layout and structure have been proposed for document image retrieval. Broadly, the techniques developed can be categorized as traditional methods and new methods [3]. Keyword-based and optical character recognition based indexing belong to the traditional methods, whereas signature-, logo- and layout/structure-based approaches are considered new methods of indexing. These techniques help retrieve document images from the database that are similar to the query document image, matching in title, logo, signature, layout or structure.

1.1. Motivation

Due to digitization across the world, many documents nowadays use the face image of a person for identification. Examples of such documents are passports, identity cards provided by organizations, voter identity cards, certificates, property documents, driving licenses, etc. Hence, indexing and retrieving such documents using the face image becomes an essential application. Face-based document image retrieval may find application in business offices, government organizations, insurance companies, investigation agencies, the banking sector, etc. These facts motivated us to develop a technique for face-based document image retrieval.

1.2. Related Work

This section briefly discusses the state of the art developed by different researchers for face image recognition and matching. A general framework for document image analysis and retrieval, along with its importance, is discussed in [4]. Kekre et al. [5] proposed a novel technique for face recognition employing the Walshlet pyramid, in which features are extracted from Walshlets applied at different levels of image decomposition.
A kernel machine based discriminant analysis method that deals with the nonlinearity of face patterns is used in [6]; it solves the small sample size problem that exists in face recognition tasks. Ahmadi et al. [7] presented a fuzzy hybrid learning algorithm (FHLA) for the radial basis function neural network (RBFNN) in their face recognition work. Gradient and linear least squares methods are combined in the FHLA for adjusting the RBF parameters and the weights of the neural network connections. Vikram et al. [8] proposed person-specific document image retrieval. To implement this work, they recommended calculating the average of the face images, by normalizing the size, and tagging the documents to the average face image. A principal component analysis (PCA) and linear discriminant analysis (LDA) subspace based recognition scheme is employed in their work. A face detection method using skin-based color features was proposed by Weng et al. [9]. They used skin-based color features, extracted using the two-dimensional discrete cosine transform (DCT), together with neural networks to detect faces. Tan and Triggs [10] used a heterogeneous set of features for face recognition. Gabor wavelets and Local Binary Patterns (LBP) are used to capture both small appearance details and a broader range of details; to reduce the resulting high-dimensional features they applied PCA. The Kernel Discriminative Common Vector (KDCV) method is then used to obtain discriminant nonlinear features in their work. Leng et al. [11] used Discrete Cosine Transform (DCT) features for face and palm print recognition. They discussed selecting proper DCT coefficients for a better discrimination effect and presented dynamic weighted discrimination power analysis (DWDPA) to achieve this. Park and Jain [12] employed soft biometrics for face image matching and retrieval.
They suggested using demographic information (e.g., gender and ethnicity) and facial marks (e.g., scars, moles, and freckles) to boost the performance of both image matching and retrieval. Chen et al. [13] proposed two orthogonal methods, attribute-enhanced sparse coding and attribute-embedded inverted indexing, for the retrieval of face images. They combined both low-level and high-level human attributes to improve the results of face image retrieval. Face recognition and retrieval using fiducial point features was proposed by Pannirselvam and Prasath [14]. Fiducial points that include the eyebrows, eyes, irises, nose and mouth are extracted to obtain feature sets, and the Euclidean distance is used for matching and retrieval of face images.




The approach in [15] employs local S, U, V and D sub-bands, which result from applying SVD to the original image; mainly S sub-band based descriptors are used for face retrieval. Document retrieval based on the face image using Singular Value Decomposition based features is proposed in [16], where the correlation distance is used as a similarity measure to rank and retrieve the documents.

1.3. Contribution of this paper

Most of the work proposed in the literature emphasizes detecting individual features of the face, such as the nose, eyes, mouth, etc. A major challenge in retrieving face image based documents is the variation of facial features caused by the copy-paste method adopted by various agencies while preparing such documents. For example, in many cases, such as preparing identity cards, a scanned copy of the photograph is pasted by adjusting its size to the space provided in the document. To address this challenge, the entire face image, without segmentation, is considered for extracting the features. The main contributions of this paper are:

• In document images such as passports, identity cards, voter IDs, bank passbooks, driving licenses, etc., the part of the document with the highest energy represents the face image. Hence, in the proposed method, the energy of the different connected components is computed, and the connected component with the highest energy is extracted from the document as the face image.
• This work extends the concept of the grey level co-occurrence matrix to RGB images to construct face image features. Co-occurrence matrices for the R, G and B components are computed separately, and the diagonal elements of these co-occurrence matrices are used to construct the feature vector for matching and retrieval of documents.
• The extracted features of the face image from the query document are compared with the features of the documents stored in a database for retrieval. The Mahalanobis distance metric is employed for ranking and retrieval of documents.
The rest of the paper is organized as follows: Section 2 describes the proposed methodology, Section 3 discusses the experimental results, and Section 4 concludes the paper.

2. Proposed Method

Fig. 1 shows the architecture of the proposed system, which includes three major blocks: face image detection from the query document, feature extraction, and retrieval of documents. The important blocks of the proposed system are explained in the following subsections.

Fig. 1 Architecture of the proposed system (query document → face image detection → feature extraction → feature matching against the database of extracted face image features, built from the document database → retrieval and ranking of documents)


2.1. Face detection from the query document

Generally, in document images such as passports, identity cards, voter IDs, bank passbooks, certificates and driving licenses, the part of the document that contains the face of a person has high energy compared to the other parts. This fact is used in the proposed method for face image detection. Algorithm 1 lists the steps used in the proposed work.

Algorithm 1: Detection of the face image from the query document
Input: Query document image
Output: Face image
1. Begin
2. Read the query document image.
3. Convert the RGB image to a grey scale image and then to a binary image.
4. Find the connected components in the binary image and calculate the energy of each using equation (2).
5. Consider the component having the highest energy as the face, and extract the coordinate points and size of this component.
6. Using the coordinate points and size information, separate out the face image from the color query document.
7. End

A document that contains the face of a person is used as the query input to the system. In general, most of these documents are available as color images. The color document image is converted to a grey scale image using the popular luminance equation (1):

Gray(x, y) = 0.2989 R(x, y) + 0.5870 G(x, y) + 0.1140 B(x, y)    (1)

Otsu's method [17] is then employed to find the threshold value that converts the grey scale image into a binary image. In the next step, the connected components are obtained using the 8-neighbours approach, and the energy of each component is computed using equation (2):

E_CC = Σ_{x=1..M} Σ_{y=1..N} [CC(x, y)]²    (2)

where CC is a connected component of size M×N and E_CC represents its energy contribution. The connected component with the highest energy is considered to be the part of the document image that contains the face of a person. Knowing its location, this connected component is separated out from the original document image for feature computation. A sample result of face detection for a query document is shown in Fig. 2.

Fig. 2 Query document and extracted face image
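Algorithm 1 can be sketched in Python as follows. The grayscale weights are the standard luminance coefficients of equation (1); the exact energy definition of equation (2) (here, the sum of squared gray values inside each component) and the use of SciPy's component labelling are assumptions of this sketch, not details fixed by the paper.

```python
import numpy as np
from scipy import ndimage

def detect_face_region(rgb):
    """Sketch of Algorithm 1: crop the highest-energy connected
    component of a document image. `rgb` is an HxWx3 uint8 array."""
    # Equation (1): luminance-weighted grayscale conversion.
    gray = (0.2989 * rgb[..., 0] + 0.5870 * rgb[..., 1]
            + 0.1140 * rgb[..., 2]).astype(np.uint8)

    # Otsu's threshold, re-implemented to keep the sketch dependency-light:
    # pick the threshold maximizing between-class variance.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    cum = np.cumsum(p)
    cum_mean = np.cumsum(p * np.arange(256))
    mean_total = cum_mean[-1]
    best_t, best_var = 0, 0.0
    for t in range(1, 255):
        w0, w1 = cum[t], 1 - cum[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t] / w0
        m1 = (mean_total - cum_mean[t]) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    binary = gray > best_t

    # 8-connected components; equation (2): energy as the sum of
    # squared gray values over each component (an assumed definition).
    labels, n = ndimage.label(binary, structure=np.ones((3, 3)))
    best_label, best_energy = None, -1.0
    for lab in range(1, n + 1):
        mask = labels == lab
        energy = float(np.sum(gray[mask].astype(np.float64) ** 2))
        if energy > best_energy:
            best_energy, best_label = energy, lab

    # Crop the bounding box of the winning component from the colour image.
    ys, xs = np.nonzero(labels == best_label)
    return rgb[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

In practice the binarization polarity matters (a white page background can itself form a large component), so a real implementation would likely mask out the background before ranking components by energy.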




2.2. Feature computation from the detected face image

Features play a vital role in the analysis, matching and retrieval of images. This work proposes a new feature extraction method that extends the concept of the grey level co-occurrence matrix to RGB color images. This section briefly discusses the concept of the GLCM and the algorithm used for feature extraction.

2.2.1. Grey Level Co-occurrence Matrix

The GLCM is a square matrix of size N×N that provides popular statistical texture features for image analysis and matching [18]. To obtain the GLCM, the grey level image is first divided into N intensity levels by means of an appropriate intensity mapping; the number of intensity levels in the image decides the size of the GLCM. Each element of the matrix at position (i, j) represents the frequency of co-occurrence of two pixels with intensities i and j respectively, separated by a distance 'k' in the direction specified by the displacement vector. Fig. 3 shows an example of the GLCM for an image 'I' of size 6×6 with two intensity levels, resulting in a GLCM of size 2×2.

Fig. 3 Example of GLCM for an image
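The counting illustrated in Fig. 3 can be sketched in a few lines. The diagonal displacement (dy = dx = 1) is chosen to match the "diagonal direction" used in the paper's example; the exact displacement vector is otherwise an assumption of this sketch.

```python
import numpy as np

def glcm(img, levels, dy=1, dx=1):
    """Minimal GLCM of section 2.2.1: count co-occurrences of intensity
    pairs (i, j) separated by the displacement (dy, dx)."""
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            # Pixel at (y, x) co-occurs with its diagonal neighbour.
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m
```

For a two-level image this yields the 2×2 matrix of counts for the pairs (0,0), (0,1), (1,0) and (1,1) described below.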

The first element of the GLCM indicates the number of occurrences of the pixel pair with intensities (0, 0) in the diagonal direction specified by the displacement vector. Similarly, the other elements of the GLCM give the occurrences of the pixel pairs (0, 1), (1, 0) and (1, 1). The matrix thus obtained can be used for texture analysis and also to extract various features such as contrast, energy, homogeneity and correlation, called Haralick features [2].

2.2.2. Algorithm for feature extraction

In this work, the concept of the GLCM is extended to RGB images. The principal diagonal elements of the co-occurrence matrices are used as the feature set instead of conventional features like contrast, correlation, entropy, energy, etc. The reason for using the diagonal elements is that most of the non-zero entries of a co-occurrence matrix lie along the principal diagonal [16]. This also helps in optimizing the number of features. Algorithm 2 shows the important steps used for feature extraction from the face image.

Algorithm 2: Computation of extracted face image features
Input: RGB face image
Output: Feature vector
1. Begin
2. Separate the R, G and B channels of the face image.
3. Divide each of R, G and B into 8 intensity levels.
4. Calculate the co-occurrence matrices of R, G and B.
5. Compute:
   a. FV1 = {Diag(CMR)} /* CMR is the co-occurrence matrix of the R component */
   b. FV2 = {Diag(CMG)} /* CMG is the co-occurrence matrix of the G component */
   c. FV3 = {Diag(CMB)} /* CMB is the co-occurrence matrix of the B component */
6. Feature Vector, FV = {FV1} U {FV2} U {FV3}
7. Return FV.
8. End
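Algorithm 2 can be sketched as follows. The uniform 8-bin quantization and the diagonal displacement are assumptions of this sketch; the paper fixes only the number of levels (8) and the use of the co-occurrence matrix diagonals.

```python
import numpy as np

def rgb_glcm_features(face, levels=8, dy=1, dx=1):
    """Sketch of Algorithm 2: quantize each colour channel to `levels`
    bins, build one co-occurrence matrix per channel, and keep only its
    principal diagonal, giving a feature vector of length 3 * levels."""
    fv = []
    for c in range(3):  # R, G, B channels in turn
        chan = face[..., c].astype(np.int64)
        # Map 0..255 intensities onto 0..levels-1 bins (uniform quantization).
        q = (chan * levels) // 256
        cm = np.zeros((levels, levels), dtype=np.int64)
        h, w = q.shape
        for y in range(h - dy):
            for x in range(w - dx):
                cm[q[y, x], q[y + dy, x + dx]] += 1
        fv.append(np.diag(cm))  # FV1..FV3: diagonal elements only
    return np.concatenate(fv)  # FV = FV1 U FV2 U FV3, length 24 for levels=8
```

With 8 levels per channel this produces the 24-element feature vector described in the text.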


For constructing the feature vector, the face image obtained from Algorithm 1 is used as the input. The RGB color face image is first divided into its three components: red, green and blue. Each of these matrices is two-dimensional and holds the intensity levels of red, green or blue. In Algorithm 2, the intensities of R, G and B are converted into 8 levels to reduce the number of features, and their co-occurrence matrices are then obtained using the concept of the GLCM discussed in section 2.2.1. Let CMR, CMG and CMB be the co-occurrence matrices of the R, G and B components. As eight intensity levels are used, the size of each of these co-occurrence matrices is 8×8. The feature vector is then constructed using (3), (4) and (5):

FV1 = Diag(CMR)    (3)
FV2 = Diag(CMG)    (4)
FV3 = Diag(CMB)    (5)

FV1, FV2 and FV3 are the features that represent the 8 diagonal elements of the co-occurrence matrices of the R, G and B components respectively. A final feature vector of size 24 is constructed by combining FV1, FV2 and FV3, as given by (6):

FV = {FV1} U {FV2} U {FV3}    (6)

2.3. Retrieval and Ranking of documents

Initially, the computed features of the extracted face images from all the preprocessed documents are stored in the feature database; this helps in reducing the query processing time. The experiments for retrieval are conducted using the Mahalanobis similarity distance, with a positive definite covariance matrix 'C', given by equation (7). The combination of the proposed feature set with the Mahalanobis similarity distance measure has shown excellent retrieval results.

Mdist(QFV, DFVi) = sqrt((QFV − DFVi)^T C^(−1) (QFV − DFVi))    (7)

where 'C' is the covariance matrix and Mdist(QFV, DFVi) represents the Mahalanobis distance between the query feature vector QFV and each database feature vector DFVi. Finally, the documents are sorted and retrieved based on the similarity distance values; the lowest distance value indicates the closest match to the query. Algorithm 3 shows the process of retrieval and ranking of the documents.

Algorithm 3: Retrieval and ranking of documents

Input: DFV[1:N] /* Feature vectors of the 'N' documents stored in the database */
       QFV /* Feature vector of the query document */
Output: Top-N retrieved documents
1. Begin
2. Compute the Mahalanobis distance Mdist(QFV, DFVi) between QFV and each DFVi using equation (7), to find the match between the query face image and each of the indexed documents.
3. Sort the distance values stored in 'Mdist' along with the corresponding documents for ranking.
4. Display the top-N documents whose features are closest to the query document, based on the sorted 'Mdist' values.
5. End
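Algorithm 3 can be sketched as below. Estimating the covariance matrix from the stored database features, and adding a small ridge to keep it positive definite, are assumptions of this sketch; the paper only specifies that 'C' is positive definite.

```python
import numpy as np

def rank_documents(qfv, dfvs):
    """Sketch of Algorithm 3: rank database documents by the Mahalanobis
    distance of equation (7) between the query feature vector `qfv` and
    each row of the N x d feature array `dfvs`."""
    dfvs = np.asarray(dfvs, dtype=np.float64)
    qfv = np.asarray(qfv, dtype=np.float64)
    # Covariance estimated from the stored feature vectors; a small
    # ridge term keeps it invertible (an assumption of this sketch).
    c = np.cov(dfvs, rowvar=False) + 1e-6 * np.eye(dfvs.shape[1])
    c_inv = np.linalg.inv(c)
    diffs = dfvs - qfv
    # Quadratic form (QFV - DFVi)^T C^-1 (QFV - DFVi) for every row i.
    dists = np.sqrt(np.einsum('ij,jk,ik->i', diffs, c_inv, diffs))
    return np.argsort(dists)  # document indices, closest match first
```

Precomputing `c_inv` once per database, as done here, keeps the per-query cost linear in the number of stored documents.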




3. Experimental Results

Since no public database is available for testing this particular task, our own database of 810 document images was created for testing the proposed method. The face images for this database are borrowed from face94 [1]. A brief description of the database used for testing is given below.

Database description: The database is constructed for 27 persons, comprising 16 male and 11 female members. 30 documents are considered for each member, leading to a total of 810 (30 documents × 27 persons) documents. These documents consist of sample identity cards, passports, certificates, PAN cards, Aadhaar cards, etc., with different sizes and resolutions. Fig. 4 shows sample documents used in the database.

Fig. 4 Sample documents used in the database

The top-4 documents retrieved for a sample query using the proposed method are shown in Fig. 5. It can be observed that 3 out of the 4 documents are relevant to the query, giving a precision of 75%.

Fig. 5 Sample query and retrieved documents


Evaluation of the work is carried out by computing the precision, recall and F-measure for each class of document. Precision is the ratio of relevant documents retrieved to the total number of documents retrieved, and recall is the ratio of relevant documents retrieved to the total number of relevant documents in the database. The F-measure is the harmonic mean of precision and recall, given by equation (8):

F-measure = (2 × Precision × Recall) / (Precision + Recall)    (8)

A randomly selected query from each class is processed during the experiments. The analysis is carried out for each of the 27 classes of documents with the Top 1, Top 5, Top 10, Top 15 and Top 20 retrieval results. To evaluate the results, the average precision (AP), average recall (AR) and F-measure for the Top 1, Top 5, Top 10, Top 15 and Top 20 documents are computed, and the results of the proposed method are compared with other methods. Table 1 summarizes the average precision (AP) and average recall (AR) of the proposed method and the other techniques from the literature. The proposed method gives a mean average precision (MAP) of 82.66%. The F-measure obtained with the different methods is tabulated in Table 2.

Table 1. Average precision (AP) and average recall (AR), in %

Sl. No.  Feature extraction method    Parameter  Top 1  Top 5  Top 10  Top 15  Top 20  Mean
1        Haralick Features [2]        AP         100    60.74  46.29   35.81   29.81   54.53
                                      AR         3.3    10.12  15.43   17.91   19.86   13.32
2        DWT Features [16]            AP         100    65.93  49.46   40.78   35.89   58.41
                                      AR         3.3    12.96  18.03   19.87   22.01   15.23
3        SVD based Features [16]      AP         100    77.78  65.57   52.25   40.15   67.15
                                      AR         3.3    12.96  20.23   21.83   24.27   16.52
4        Proposed Method              AP         100    89.62  84.81   74.81   64.07   82.66
                                      AR         3.3    15.26  28      41.71   42.71   26.19
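As a quick sanity check, equation (8) applied to the proposed method's Top-10 averages from Table 1 (AP = 84.81, AR = 28) reproduces the corresponding Top-10 F-measure entry of Table 2 up to rounding.

```python
def f_measure(precision, recall):
    """Equation (8): harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Proposed method at Top 10 (Table 1): AP = 84.81, AR = 28.
print(round(f_measure(84.81, 28), 2))  # prints 42.1
```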

Table 2. Comparison of F-measure (%)

Number of top matches  Haralick features  DWT Features  SVD based Features  Proposed Method
Top 1                  6.34               6.34          6.34                6.34
Top 5                  17.35              21.91         22.22               26.06
Top 10                 23.13              26.42         30.92               42.10
Top 15                 23.87              26.72         30.79               53.55
Top 20                 23.84              27.29         30.25               51.24

Fig. 6 Graphical comparison of results




Fig. 7 Graphical comparison of F-Measure

Fig. 6 provides a graphical comparison of the MAP and MAR of the proposed method and the other methods. The F-measure obtained with the different methods is shown in Fig. 7. The comparisons reveal that the proposed method outperforms the others on the MAP, MAR and F-measure parameters.

4. Conclusion

This paper proposes a technique for face-based document image retrieval, which helps to retrieve all the documents pertaining to a person. A new method that extends the concept of the GLCM to RGB images is used for feature extraction, and the Mahalanobis distance is used as the similarity measure. A MAP of 82.66% is achieved with the proposed technique.

References

1. Libor Spacek, Faces Directories, Faces 94 Directory, http://cswww.essex.ac.uk/mv/allfaces
2. R. Haralick, K. Shanmugam, and I. Dinstein, "Textural Features for Image Classification", IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-3, Issue 6, pp. 610–621, 1973.
3. Mohammadreza Keyvanpour and Reza Tavoli, "Document Image Retrieval: Algorithms, Analysis and Promising Directions", International Journal of Software Engineering and its Applications, Vol. 7, Issue 1, 2013.
4. Umesh D. Dixit and M. S. Shirdhonkar, "A Survey on Document Image Analysis and Retrieval System"
5. H. B. Kekre, Sudeep D. Thepade, and Akshay Maloo, "Face Recognition Using Texture Features Extracted From Walshlet Pyramid", International Journal on Recent Trends in Engineering & Technology, Vol. 5, Issue 2, pp. 186–193, 2011.
6. Juwei Lu, Konstantinos N. Plataniotis, and Anastasios N. Venetsanopoulos, "Face Recognition Using Kernel Direct Discriminant Analysis Algorithms", IEEE Transactions on Neural Networks, Vol. 14, No. 1, pp. 118–126, 2003.
7. Majid Ahmadi, Javad Haddadnia, and Karim Faez, "A fuzzy hybrid learning algorithm for radial basis function neural network with application in human face recognition", Pattern Recognition, Vol. 36, Issue 5, pp. 1187–1202, 2003.
8. T. N. Vikram, Shalini R. Urs, and K. Chidananda Gowda, "Person specific document retrieval using face biometrics", ICADL, Springer LNCS 5362, pp. 371–374, 2008.
9. Ying Weng, Aamer Mohamed, Jianmin Jiang, and Stan Ipson, "Face Detection based Neural Networks using Robust Skin Color Segmentation", 5th International Multi-Conference on Systems, Signals and Devices, pp. 1–5, 2008.
10. Xiaoyang Tan and Bill Triggs, "Fusing Gabor and LBP Feature Sets for Kernel-based Face Recognition", 3rd International Workshop on Analysis and Modeling of Faces and Gestures, Vol. 4778, pp. 235–249, 2007.
11. Lu Leng, Jiashu Zhang, Muhammad Khurram Khan, Xi Chen, and Khaled Alghathbar, "Dynamic weighted discrimination power analysis: A novel approach for face and palmprint recognition in DCT domain", International Journal of the Physical Sciences, Vol. 5, Issue 17, pp. 2543–2554, 2010.
12. Unsang Park and Anil K. Jain, "Face Matching and Retrieval using Soft Biometrics", IEEE Transactions on Information Forensics and Security, Vol. 5, Issue 3, pp. 406–415, 2010.


13. Bor-Chun Chen, Yan-Ying Chen, Yin-Hsi Kuo, and Winston H. Hsu, "Scalable Face Image Retrieval using Attribute-Enhanced Sparse Codewords", IEEE Transactions on Multimedia, Vol. 15, Issue 5, pp. 1–11, 2013.
14. S. Pannirselvam and S. Prasath, "A Novel Technique for Face Recognition and Retrieval using Fiducial Point Features", Procedia Computer Science, Elsevier, pp. 301–310, 2015.
15. Shiv Ram Dubey, Satish Kumar Singh, and Rajat Kumar Singh, "Local SVD based NIR Face Retrieval", Journal of Visual Communications and Image Representation, 2017.
16. Umesh D. Dixit and M. S. Shirdhonkar, "Face-biometric based Document Image Retrieval using SVD features", Computational Intelligence in Data Mining, Springer, 2017.
17. N. Otsu, "A Threshold Selection Method from Gray-Level Histograms", IEEE Transactions on Systems, Man, and Cybernetics, Vol. 9, Issue 1, pp. 62–66, 1979.
18. Bino Sebastian V, A. Unnikrishnan, and Kannan Balakrishnan, "Grey level co-occurrence matrices: generalization and some new features", International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol. 2, Issue 2, 2012.