
ScienceDirect Procedia Computer Science 85 (2016) 954 – 961

Biomedical image indexing and retrieval descriptors: A comparative study

Gagan Deep a,*, Lakhwinder Kaur b, Savita Gupta c

a Department of CSE, IET Bhaddal, PTU, Ropar, India
b Department of CE, Punjabi University, Patiala, India
c Department of CSE, UIET, PU, Chandigarh, India

Abstract

This paper compares two newly proposed pattern descriptors, the local mesh ternary pattern (LMeTerP) and the directional local ternary quantized extrema pattern (DLTerQEP), for biomedical image indexing and retrieval. The standard local binary pattern (LBP) and local ternary pattern (LTP) encode the gray-scale relationship between the center pixel and its surrounding neighbors in a two-dimensional (2D) local region of an image. In contrast, the former descriptor encodes the gray-scale relationship among the neighbors of a given center pixel along three selected directions of mesh patterns generated from the 2D image, and the latter descriptor encodes the spatial relation between any pair of neighbors in a local region along the given directions (0°, 45°, 90° and 135°) for a given center pixel. The novelty of the proposed descriptors is that they use ternary patterns to encode more spatial structure information, which leads to better retrieval. The experimental results demonstrate the superiority of the new techniques in terms of average retrieval precision (ARP) and average retrieval rate (ARR) over state-of-the-art feature extraction techniques (LBP, LTP, LQEP, LMeP, etc.) on three different benchmark biomedical databases.

Keywords: Medical imaging; Image retrieval; Local binary pattern (LBP); Local ternary pattern (LTP); Local mesh ternary pattern; Directional local ternary quantized extrema pattern; Texture

1. Introduction

Nowadays, texture-based features play an important role as powerful discriminating visual features, and they are extensively used in image processing applications to identify visual patterns. Several texture feature extraction methods have been proposed1,2,3,4,5,6,7,8,9. Most biomedical images are represented in gray scale and are commonly heavily textured. Hence, in clinical exams, the appearance of an organ/tissue/lesion in the images arises from intensity variations that reflect various texture characteristics. Texture therefore became very popular in biomedical image retrieval because of the eminent significance of the acquired texture10,11,12. However, computing the texture features from the above texture descriptors is computationally expensive. To address this, the local binary pattern (LBP) was proposed13. Because LBP has low computational complexity and can code minute details, Ojala et al.14 proposed further modifications of LBP for texture classification, and many researchers have extensively applied LBP in various applications15,16,17. In the literature, many extended versions of LBP have been proposed to obtain new image features for biomedical image indexing and retrieval18-24. Although LBP provides first-order directional derivative patterns, Zhang et al.25 examined LBP as a non-directional first-order local pattern and proposed local derivative patterns (LDP) for face recognition. Neither LBP nor LDP can appropriately capture the appearance distinctions of particular objects in natural images under intensity variations. To handle this, the local ternary pattern (LTP) was introduced for texture classification26; LTP is less sensitive to small pixel value variations than LBP. LBP, LDP and LTP

* Corresponding author. Tel.: +91-981-597-2140; E-mail address: [email protected]

1877-0509 © 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the Organizing Committee of CMS 2016 doi:10.1016/j.procs.2016.05.287


capture the feature information based on the distribution of edges, which are coded using only two directions (positive or negative). Subsequently, local quantized patterns (LQP) were proposed for visual recognition27 and local quantized extrema patterns (LQEP) for natural and texture image retrieval28. Local mesh patterns (LMeP) have been proposed for biomedical image retrieval23. The standard LBP method encodes the relationship between the referenced pixel and its circumferential neighbors. In contrast, the LQEP method encodes the relationship between any pair of neighbors in a local region for a given referenced pixel, and the LMeP method encodes the relationship among the surrounding neighbors of a given referenced pixel. Following the literature reviewed above, the local patterns LBP, LTP and LMeP motivated us to propose the LMeTerP for biomedical image indexing and retrieval. It has already been shown7,8,9 that directional features are very significant in various image retrieval applications, yet most of the LBP features and improved versions above are non-directional. Hence, the local 2D patterns LBP, LTP and LQEP also motivated us to propose the DLTerQEP for biomedical image indexing and retrieval. Both descriptors use ternary patterns to provide more spatial structure information, which leads to better retrieval. The performance of the proposed descriptors is tested by conducting experiments on three different benchmark biomedical databases.

2. Proposed Methods

2.1. Local mesh ternary patterns (LMeTerP)

The idea of local patterns (the LBP, the LTP and the LMeP) has been adopted to define LMeTerP. To extend the standard LTP to LMeTerP, we generate the three local mesh pattern images at j = 1, 2, 3 using convolution operations on a given image as follows:

$$m\_ter\_patterns_j\big|_{P,R} = \sum_{j=1}^{(P/2)-1} \vec{f}_2\big(\mathrm{conv}(mesh\_LBP,\; w\{1,j\},\; \text{'same'})\big) \tag{1}$$

From Eq. (1), $\vec{f}_2(x)$ is the three-valued code, which contains -1, 0 or 1. This mesh ternary code is converted into two binary codes (an upper LTP code and a lower LTP code) at different thresholds (in this paper, 2 and -2) using the concept of LTP26. The ternary coding patterns are obtained as follows:

$$m\_ter\_patterns_{upper}(1,\; j+1) = \sum_{j=1}^{(P/2)-1} \vec{f}_2(x) \tag{2}$$

where

$$\vec{f}_2(x) = \begin{cases} 1, & \text{if } x \ge (threshold = 2) \\ 0, & \text{if } x < threshold \end{cases}$$

$$m\_ter\_patterns_{lower}(1,\; 5+j) = \sum_{j=1}^{(P/2)-1} \vec{f}_2(x) \tag{3}$$

where

$$\vec{f}_2(x) = \begin{cases} 1, & \text{if } x \le (threshold = -2) \\ 0, & \text{if } x > threshold \end{cases}$$

Now, multiplying each LTP coding by the binomial weights, the unique LMeTerP values (decimal values) for a particular selected mesh pattern (3×3) image, characterizing the spatial structure of the local pattern, are defined by Eq. (4):

$$LMeTerP_{\alpha,P} = \sum_{w=0}^{P-1} m\_ter\_patterns_{(upper/lower),\,w}\; 2^{w} \tag{4}$$

Each LTP (upper and lower) map of the mesh images of the LMeTerP method takes values in the range 0 to 2^P - 1. The complete image is represented by constructing a histogram using Eq. (5) after detecting the local pattern, PTN (LBP, LTP, m_ter_patterns or LMeTerP):

$$H^{1}_{LMeTerP}(v) = \frac{1}{k_1 \times k_2} \sum_{q=1}^{k_1} \sum_{r=1}^{k_2} f_2\big(LMeTerP(q,r),\, v\big); \quad v \in [0,\; 2^{P}-1]; \quad f_2(x,y) = \begin{cases} 1, & \text{if } x = y \\ 0, & \text{if } x \ne y \end{cases} \tag{5}$$

where k1 × k2 is the size of the input image. Further, we apply the uniform-two operation to the two LBPs calculated from the LTP. For a local pattern with P neighboring pixels, there are 2^P possible values (0 to 2^P - 1), resulting in a feature vector of length 2^P; extracting such a feature vector involves a high computational cost. Thus, uniform patterns17 are considered to reduce the computational cost, giving P(P - 1) + 2 distinct uniform patterns for a given query image (the possible uniform patterns for P = 8 can be seen in17). In Eq. (5), the map values are then updated with the range v ∈ [0, P(P - 1) + 2]. Note that LTP26 suffers from a large feature dimension (3^P instead of 2^P, owing to its three-valued ternary coding); using the thresholds (t = 2, -2), this problem is solved26 by converting the ternary pattern into two binary patterns (upper LTP and lower LTP), as in Eqs. (2) and (3).

2.2. Directional local ternary quantized extrema patterns (DLTerQEP)

The idea of local patterns (the LBP, the LTP and the LQEP) has been adopted to define the directional local ternary quantized extrema pattern (DLTerQEP). DLTerQEP describes the spatial structure of the local texture in ternary patterns using the local extrema and directional geometric structures. In the proposed DLTerQEP, for a given image, the local extrema in all directions are acquired by computing the local difference between the center pixel and its neighbors, with the patterns indexed by pixel positions. The positions are indexed so that the four directional extrema operator calculations can be written down. The local directional extrema values28 (LDEV) for the local pattern neighborhoods of an image I are calculated as follows:

$$LDEV(q,r) = \sum_{q=i}^{k_1-j} \sum_{r=i}^{k_2-j} \big(I(q-j{:}q+j,\; r-j{:}r+j) - I(q,r)\big) \tag{6}$$

where the size of the input image is k1 × k2 and the values of i and j are 4 and 3, respectively, so that the directional patterns are obtained over at least a 7×7 pattern of the image. The four-directional HVDA7 geometric structure of the possible directional LQP geometries is used for feature extraction. Directional local extrema values in the 0°, 45°, 90° and 135° directions are extracted in the shape of HVDA7 from the LDEV values calculated by Eq. (6). Then the four directional ternary extrema codings (DTEC) are collected for the four directions (0°, 45°, 90° and 135°) at different thresholds (in this paper, 2 and -2) using the concept of LTP26. The ternary coding patterns are obtained as follows:

$$DTEC1\big(I(g_c)\big)\big|_{\alpha} = \begin{cases} \vec{f}_4\big(LDEV(g_{45}) \times LDEV(g_{43}),\, I(g_c)\big);\; \vec{f}_4\big(LDEV(g_{46}) \times LDEV(g_{42}),\, I(g_c)\big);\; \vec{f}_4\big(LDEV(g_{47}) \times LDEV(g_{41}),\, I(g_c)\big); & \alpha = 0^{\circ} \\ \vec{f}_4\big(LDEV(g_{34}) \times LDEV(g_{54}),\, I(g_c)\big);\; \vec{f}_4\big(LDEV(g_{24}) \times LDEV(g_{64}),\, I(g_c)\big);\; \vec{f}_4\big(LDEV(g_{14}) \times LDEV(g_{74}),\, I(g_c)\big); & \alpha = 45^{\circ} \\ \vec{f}_4\big(LDEV(g_{35}) \times LDEV(g_{53}),\, I(g_c)\big);\; \vec{f}_4\big(LDEV(g_{26}) \times LDEV(g_{62}),\, I(g_c)\big);\; \vec{f}_4\big(LDEV(g_{17}) \times LDEV(g_{71}),\, I(g_c)\big); & \alpha = 90^{\circ} \\ \vec{f}_4\big(LDEV(g_{33}) \times LDEV(g_{55}),\, I(g_c)\big);\; \vec{f}_4\big(LDEV(g_{22}) \times LDEV(g_{66}),\, I(g_c)\big);\; \vec{f}_4\big(LDEV(g_{11}) \times LDEV(g_{77}),\, I(g_c)\big); & \alpha = 135^{\circ} \end{cases} \tag{7}$$

where g_{ij} denotes the coefficient index at the i-th row and j-th column. For the upper LTP, DTEC1 is computed from Eq. (7) by obtaining the value of $\vec{f}_4$ as follows:

$$\vec{f}_4(x,\, g_c) = \begin{cases} 1, & \text{if } x \ge (threshold = 2) \\ 0, & \text{if } x < threshold \end{cases} \tag{8}$$

Similarly, from Eq. (7), the lower LTP, DTEC2, is computed by obtaining the value of $\vec{f}_4$ as follows:


$$\vec{f}_4(x,\, g_c) = \begin{cases} 1, & \text{if } x \le (threshold = -2) \\ 0, & \text{if } x > threshold \end{cases} \tag{9}$$

The DTEC is defined from Eqs. (7)-(9) as follows:

$$DTEC\big(I(g_c)\big) = \begin{bmatrix} DTEC1\big(I(g_c)\big)\big|_{0^{\circ}},\; DTEC1\big(I(g_c)\big)\big|_{45^{\circ}},\; DTEC1\big(I(g_c)\big)\big|_{90^{\circ}},\; DTEC1\big(I(g_c)\big)\big|_{135^{\circ}} \\ DTEC2\big(I(g_c)\big)\big|_{0^{\circ}},\; DTEC2\big(I(g_c)\big)\big|_{45^{\circ}},\; DTEC2\big(I(g_c)\big)\big|_{90^{\circ}},\; DTEC2\big(I(g_c)\big)\big|_{135^{\circ}} \end{bmatrix} \tag{10}$$

The DTEC coding above is converted into two binary codes (upper LTP code and lower LTP code) as in the LTP26. In practice, DLTerQEP concatenates the four directional extrema into a P = 12-bit (w = 0...11) binary coding string (3 bits from each directional extrema) in each binary pattern of the LTP. Now, multiplying each DTEC LTP coding by the binomial weights, the unique DLTerQEP values (decimal values) for a particular given pattern (7×7), characterizing the spatial structure of the local pattern, are defined by Eq. (11):

$$DLTerQEP_{\alpha,P} = \sum_{w=0}^{P-1} DTEC_{(upper/lower),\,w}\; 2^{w} \tag{11}$$

For the whole image, each DTEC LTP (upper and lower) map above takes values in the range 0 to 4095 (0 to 2^P - 1), so the complete DTEC LTP map is built with values ranging from 0 to 8191 (0 to 2(2^P) - 1). The complete image is represented by constructing a histogram using Eqs. (12) and (13) after detecting the local pattern, PTN (LBP, LTP, DTEC or DLTerQEP):

$$H^{1}_{DLTerQEP}(v) = \frac{1}{k_1 \times k_2} \sum_{q=1}^{k_1} \sum_{r=1}^{k_2} f_2\big(DLTerQEP_{upper}(q,r),\, v\big); \quad v \in [0,\, 4095]; \quad f_2(x,y) = \begin{cases} 1, & \text{if } x = y \\ 0, & \text{if } x \ne y \end{cases} \tag{12}$$

$$H^{2}_{DLTerQEP}(v) = \frac{1}{k_1 \times k_2} \sum_{q=1}^{k_1} \sum_{r=1}^{k_2} f_2\big(DLTerQEP_{lower}(q,r),\, v\big); \quad v \in [0,\, 4095] \tag{13}$$

The parameter specifications of Eq. (13) are the same as in Eq. (12). The final feature vector is the concatenation

$$H_{DLTerQEP}(v) = \big[H^{1}_{DLTerQEP}(v),\; H^{2}_{DLTerQEP}(v)\big] \tag{14}$$
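The splitting, weighting and histogram steps of Eqs. (8)-(14) can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming, not the authors' implementation; `ternary_codes` stands for any (H, W, P) array of local difference responses already computed by the descriptor:

```python
import numpy as np

def ternary_to_feature(ternary_codes, P=12, t=2):
    """Split ternary responses into upper/lower binary patterns (Eqs. 8-9),
    weight each P-bit pattern by 2^w (Eq. 11), and concatenate the two
    normalized histograms (Eqs. 12-14)."""
    codes = np.asarray(ternary_codes)
    upper = (codes >= t).astype(np.int64)    # upper LTP bits, Eq. (8)
    lower = (codes <= -t).astype(np.int64)   # lower LTP bits, Eq. (9)
    weights = 2 ** np.arange(P)              # binomial weights 2^w, w = 0..P-1
    up_map = (upper * weights).sum(axis=-1)  # decimal map in [0, 2^P - 1]
    lo_map = (lower * weights).sum(axis=-1)
    n = up_map.size                          # k1 * k2 pixels
    h1 = np.bincount(up_map.ravel(), minlength=2 ** P) / n   # Eq. (12)
    h2 = np.bincount(lo_map.ravel(), minlength=2 ** P) / n   # Eq. (13)
    return np.concatenate([h1, h2])          # Eq. (14): [H1, H2]
```

Each histogram sums to one by construction, so the concatenated feature has length 2·2^P and total mass 2.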

The proposed DLTerQEP differs from the familiar LBP method. DLTerQEP extracts the spatial relation between any pair of neighbors in a local region along the given directions, whereas LBP extracts the relation between the center pixel and its neighbors. DLTerQEP also extracts directional edge information based on local extrema, which further differentiates it from the existing LBP. Therefore, DLTerQEP captures more spatial information than LBP, and it provides a significant increase in discriminative power by allowing larger local pattern neighborhoods. Fig. 1 illustrates the responses obtained by applying the LBP, the LTP, the LMeTerP and the DLTerQEP to a reference face image; a face image is chosen because its results are visibly comprehensible for differentiating the effectiveness of these approaches. It is observed that the feature maps of the DLTerQEP operator (upper and lower LTPs) capture more directional edge information than the feature maps of LBP, LTP and LMeTerP for texture extraction.
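For contrast, the center-versus-neighbor comparison that defines the standard LBP baseline extended by the proposed descriptors can be written compactly. This is a textbook sketch for a 3×3 neighborhood, not code from the paper:

```python
import numpy as np

def lbp_3x3(img):
    """Standard 8-neighbour LBP: each neighbour is thresholded against the
    centre pixel and the resulting bits are weighted by powers of two
    (circular order, starting at the top-left neighbour)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    c = img[1:-1, 1:-1]                      # centre pixels (border skipped)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for b, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code += (nb >= c).astype(np.int32) << b   # bit b set if neighbour >= centre
    return code
```

For the 3×3 patch [[1,2,3],[4,5,6],[7,8,9]], the neighbours 6, 9, 8 and 7 exceed the centre 5, setting bits 3-6 and giving the code 8+16+32+64 = 120.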

[Fig. 1 panels: sample image; LBP feature map; LMeTerP feature map; DLTerQEP upper LTP feature map; DLTerQEP lower LTP feature map]

Fig. 1. Response of the proposed methods on a reference face image.

3. Experimental results

The performance of the proposed algorithms is analyzed for biomedical image retrieval by conducting experiments on three different medical databases. In all the experiments, each image in the database is chosen as the query image. For each query, the system collects the n database images X = (x1, x2, ..., xn) with the shortest image matching distance. If xi, i = 1, 2, ..., n, belongs to the same category as the query image, we say that the system has correctly matched the desired image. The results yielded by the proposed methods are examined in the following subsections.

3.1. Experiment on LIDC-IDRI-CT data set
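The evaluation protocol just described can be scored as average retrieval precision. The sketch below is our own illustration under two assumptions the paper does not fix at this point: the L1 distance between feature histograms is used for matching, and the query itself is excluded from the ranked list:

```python
import numpy as np

def average_retrieval_precision(features, labels, n=10):
    """Use every image as a query, rank the database by L1 distance between
    feature histograms, and average the fraction of the top-n matches that
    share the query's category (ARP, in per cent)."""
    features = np.asarray(features, dtype=np.float64)
    labels = np.asarray(labels)
    precisions = []
    for q in range(len(features)):
        dist = np.abs(features - features[q]).sum(axis=1)  # L1 to every image
        dist[q] = np.inf                                   # drop the query itself
        top = np.argsort(dist)[:n]                         # n nearest matches
        precisions.append(np.mean(labels[top] == labels[q]))
    return 100.0 * float(np.mean(precisions))
```

ARR is computed analogously, with recall (retrieved relevant over total relevant) in place of precision.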

Experiments are conducted on images collected from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), a public lung image database of CT scans29. The database is separated into 84 cases, each containing around 100-400 Digital Imaging and Communications in Medicine (DICOM) images and an XML data file containing the physicians' annotations. The database contains 143 nodules with sizes ranging from 3 to 30 mm (manually segmented by radiologists). For the experiment, 12 patient cases consisting of 75 nodules (26 benign and 49 malignant) and 229 slices have been selected.

[Fig. 2: ARP (%) per query image (1-10) for LBPu2, WLD, LQP, LMeP, LQEP, LMeTerP and DLTerQEP]

Fig. 2. Performance comparison of the proposed methods (LMeTerP and DLTerQEP) with other existing methods for different query images (1-10) in terms of ARP on the LIDC-IDRI-CT database.

Fig. 2 shows the retrieval performance for the top ten matches of the proposed methods (LMeTerP and DLTerQEP) and other existing methods (LBPu2, WLD, LQP, LMeP, LQEP) in terms of ARP for different query images (1-10) on the LIDC-IDRI-CT database. It shows that LMeTerP achieves better performance than DLTerQEP and the other existing methods (LBPu2, WLD, LQP, LMeP and LQEP) in most cases.

3.2. Experiment on VIA/I-ELCAP-CT data set

To evaluate different computer-aided detection systems, a computed tomography (CT) dataset was designed by the Vision and Image Analysis (VIA) group and the International Early Lung Cancer Action Program (I-ELCAP); the VIA/I-ELCAP CT lung image dataset is available online30. Experiments are performed on 10 CT scans, each containing 100 images of resolution 512×512. Table I shows the performance of the proposed methods and other existing methods in terms of ARP (%) on the VIA/I-ELCAP-CT database. Fig. 3 illustrates the retrieval performance of the proposed methods (LMeTerP and DLTerQEP) and other existing methods (LBP/LBPu2, INTH, GLCM1, GLCM2, LMeP, LTP, LDP, LTCoP and LQEP) in terms of ARR. From Table I and Fig. 3, it is clear that the proposed method (DLTerQEP) outperforms LMeTerP and the other existing methods in terms of ARP and ARR on the VIA/I-ELCAP-CT database.

3.3. Experiment on OASIS-MRI data set

In this experiment, the Open Access Series of Imaging Studies (OASIS) database, a magnetic resonance imaging (MRI) dataset available online for use in medical research31, is utilized. A cross-sectional collection of 421 patients aged between 18 and 96 years is used for the experimentation. Furthermore, based on the ventricular shape in the images, the 421 images are grouped into four categories (124, 102, 89 and 106 images) to evaluate image retrieval.


Table I: Performance of the proposed methods and other existing methods in terms of ARP (%) on VIA/I-ELCAP-CT database.

Method    |   10  |   20   |    30   |    40   |   50   |    60   |    70   |    80   |    90   |  100
----------|-------|--------|---------|---------|--------|---------|---------|---------|---------|-------
INTH      | 60.76 | 51.325 | 45.8833 | 42.2475 | 39.716 | 37.1733 | 34.9586 | 32.8938 | 31.1067 | 29.627
GLCM1     | 63.37 | 53.11  | 46.9567 | 42.9475 | 39.242 | 36.715  | 34.4786 | 32.5438 | 30.8633 | 29.402
GLCM2     | 65.07 | 55.33  | 49.62   | 45.405  | 42.152 | 39.3483 | 36.9429 | 34.8125 | 32.9733 | 31.383
LBPu2     | 79.21 | 73.77  | 70.26   | 66.9025 | 64.056 | 61.3467 | 58.9314 | 56.7175 | 54.31   | 51.915
LTP       | 71.59 | 65.29  | 61.44   | 58.17   | 55.82  | 54.12   | 52.09   | 50.03   | 48.44   | 47.53
LTCoP     | 86.95 | 78.55  | 72.57   | 67.93   | 63.21  | 60.01   | 57.55   | 54.68   | 52.02   | 48.95
LDP       | 86.12 | 77.83  | 73.55   | 69.19   | 65.47  | 62.44   | 60.02   | 57.5    | 54.97   | 52.47
LMeP      | 83.28 | 76.5   | 71.97   | 68.06   | 64.83  | 62.05   | 59.5643 | 57.2575 | 54.9356 | 52.695
LMeTerP   | 88.19 | 80.69  | 75.39   | 71.189  | 67.49  | 64.21   | 61.33   | 58.6    | 56      | 53.34
LQEP      | 82.94 | 76.09  | 71.22   | 67.29   | 64.17  | 61.36   | 58.98   | 56.77   | 54.55   | 52.41
DLTerQEP  | 87.58 | 79.88  | 74.6    | 70.36   | 66.91  | 63.73   | 61.05   | 58.61   | 56.41   | 54.07

[Fig. 3: ARR (%) vs. number of top matches considered (10-100) for INTH, GLCM1, GLCM2, LBPu2, LTP, LTCoP, LDP, LMeP, LMeTerP, LQEP and DLTerQEP]

Fig. 3. Comparison of the proposed methods (LMeTerP and DLTerQEP) and other existing methods in terms of ARR on the VIA/I-ELCAP-CT database.

Table II summarizes the retrieval results of the proposed methods (LMeTerP and DLTerQEP) and other existing methods on the OASIS database in terms of ARP. Fig. 4 shows the retrieval performance of the proposed methods and other existing methods at different values of (P, R), in terms of ARP, as a function of the number of top matches. From Fig. 4 and Table II, it is evident that the proposed method (DLTerQEP) outperforms LMeTerP and most of the other existing methods on the OASIS-MRI image database for biomedical image retrieval.

[Fig. 4: ARP (%) vs. number of top matches considered (1-10) for LBPu2_8_1, LBPu2_16_2, LMeTerP, CS_LBP, BLK_LBP, DBWP, LTP, DLEP, LDP, LMeP_8_1, LMePu2_8_1, LQEP and DLTerQEP]

Fig. 4. Comparison of the proposed methods and other existing methods at different values of (P, R), as a function of the number of top matches, in terms of ARP on the OASIS-MRI database.

Table II: Comparison of various techniques showing group-wise performance in terms of precision (%) (n=10) on OASIS-MRI database.

Method      | Group1 | Group2 | Group3 | Group4 | Total
------------|--------|--------|--------|--------|------
LBPu2_8_1   | 51.77  | 32.54  | 33.82  | 49.06  | 42.63
LBPu2_16_2  | 52.58  | 38.43  | 31.68  | 51.13  | 44.37
LMeTerP     | 54.03  | 38.04  | 35.73  | 50.09  | 44.47
CS_LBP      | 44.7   | 40.1   | 31.17  | 48.27  | 41.06
BLK_LBP     | 48.13  | 41.22  | 35.16  | 50.01  | 43.63
DBWP        | 52.74  | 37.74  | 34.38  | 60     | 47.05
LTP         | 56.53  | 36.27  | 34.97  | 50     | 45.17
DLEP        | 50.51  | 36.28  | 37.01  | 68.21  | 48
LDP         | 46.29  | 36.37  | 36.82  | 45.56  | 41.8
LMeP        | 52.82  | 36.56  | 36.08  | 51.5   | 44.96
LQEP        | 50.97  | 36.37  | 39.1   | 35.57  | 40.51
DLTerQEP    | 51.61  | 38.63  | 36.97  | 65.57  | 48.19

4. Conclusions

In this paper, newly proposed descriptors for biomedical image indexing and retrieval are compared. To test the robustness and effectiveness of the proposed algorithms, results are obtained on three different benchmark medical databases. A detailed investigation of the results shows that the proposed method (DLTerQEP) outperforms LBP, LTP, LMeTerP and other state-of-the-art methods in terms of ARP and ARR on the LIDC-IDRI-CT, OASIS-MRI and VIA/I-ELCAP-CT databases.

References

1. Haralick, R., Shanmugan, K., Dinstain, I., Textural features for image classification. IEEE Transactions on Systems, Man and Cybernetics 1973; 3(6): pp.610-622.
2. Choi, H., Baraniuk, R., Multiscale image segmentation using wavelet-domain hidden Markov models. IEEE Trans. on Image Processing 2001; 10(9): pp.1309-1321.
3. Chen, J., Shan, S., He, C., Zhao, G., Pietikäinen, M., Chen, X., Gao, W., WLD: A robust local image descriptor. IEEE Transactions on Pattern Analysis and Machine Intelligence 2009; 32(9): pp.1705-1720.
4. Zhang, L., Zhou, Z., Li, H., Binary Gabor pattern: An efficient and robust descriptor for texture classification. 19th IEEE International Conference on Image Processing (ICIP) 2012; pp.81-84.
5. Nanni, L., Lumini, A., Brahnam, S., Local binary patterns variants as texture descriptors for medical image analysis. Artificial Intelligence in Medicine 2010; 49(2): pp.117-125.
6. Do, M., N., Vetterli, M., Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance. IEEE Trans. on Image Processing 2002; 11(2): pp.146-158.
7. Kokare, M., Biswas, P., K., Chatterji, B., N., Texture image retrieval using new rotated complex wavelet filters. IEEE Trans Syst Man Cybernet 2005; 33(6): pp.1168-1178.

8. Kokare, M., Biswas, P., K., Chatterji, B., N., Rotation-invariant texture image retrieval using rotated complex wavelet filters. IEEE Trans Syst Man Cybernet 2006; 36(6): pp.1273-1282.
9. Kokare, M., Biswas, P., K., Chatterji, B., Texture image retrieval using rotated wavelet filters. J Pattern Recog. Lett 2007; 28: pp.1240-1249.
10. Felipe, J., C., Traina, A., J., M., Traina, C., J., Retrieval by content of medical images using texture for tissue identification. In: Proceedings of the 16th IEEE Symposium on Computer-Based Medical Systems, New York, USA, 2003; pp.175-180.
11. Jhanwara, N., Chaudhuri, S., Seetharamanc, G., Zavidovique, B., Content based image retrieval using motif co-occurrence matrix. Image and Vision Computing 2004; 22(14): pp.1211-1220.
12. Scott, G., Shyu, C., R., Knowledge-driven multidimensional indexing structure for biomedical media database retrieval. IEEE Trans. Inf. Technol. Biomed. 2007; 11(3): pp.320-331.
13. Ojala, T., Pietikainen, M., Harwood, D., A comparative study of texture measures with classification based on feature distributions. Pattern Recognition 1996; 29(1): pp.51-59.
14. Ojala, T., Pietikainen, M., Maenpaa, T., Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 2002; 24(7): pp.971-987.
15. Nanni, L., Lumini, A., Local binary patterns for a hybrid fingerprint matcher. Pattern Recognition 2008; 41(11): pp.3461-3466.
16. Heikkila, M., Pietikainen, M., Schmid, C., Description of interest regions with local binary patterns. Pattern Recognition 2009; 42: pp.425-436.
17. Guo, Z., Zhang, L., Zhang, D., Rotation invariant texture classification using LBP variance with global matching. Pattern Recognition 2010; 43(3): pp.706-716.
18. Murala, S., Maheshwari, R., P., Balasubramanian, R., Directional binary wavelet patterns for biomedical image indexing and retrieval. J. Med. Syst. 2012; 36(5): pp.2865-2879.
19. Murala, S., Maheshwari, R., P., Balasubramanian, R., Directional local extrema patterns: a new descriptor for content based image retrieval. Int. J. Multimedia Inf. Retr. 2012b; 1(3): pp.191-203.
20. Murala, S., Maheshwari, R., P., Balasubramanian, R., Local maximum edge binary patterns: a new descriptor for image retrieval and object tracking. Signal Processing 2012c; 92(6): pp.1467-1479.
21. Murala, S., Maheshwari, R., P., Balasubramanian, R., Local tetra patterns: new feature descriptor for content based image retrieval. IEEE Trans. Image Processing 2012d; 21(5): pp.2874-2886.
22. Murala, S., Jonathan, W., Q., Local ternary co-occurrence patterns: a new feature descriptor for MRI and CT image retrieval. Neurocomputing 2013; 119(7): pp.399-412.
23. Murala, S., Jonathan, W., Q., Local mesh patterns versus local binary patterns: biomedical image indexing and retrieval. IEEE Journal of Biomedical and Health Informatics 2014; 18(3): pp.929-938.
24. Murala, S., Jonathan, W., Q., MRI and CT image indexing using local mesh peak valley edge patterns. Signal Processing: Image Communication 2014b; 29(3): pp.400-409.
25. Zhang, B., Gao, Y., Zhao, S., Liu, J., Local derivative pattern versus local binary pattern: Face recognition with higher-order local pattern descriptor. IEEE Trans. Image Processing 2010; 19(2): pp.533-544.
26. Tan, X., Triggs, B., Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Process 2010; 19(6): pp.1635-1650.
27. Hussain, S., U., Triggs, B., Visual recognition using local quantized patterns. European Conference on Computer Vision (ECCV 2012), Part II, LNCS 7573, Italy 2012; pp.716-729.
28. Rao, L., K., Rao, D., V., Local quantized extrema patterns for content based natural and texture image retrieval. Human-Centric Computing and Information Sciences 2015; 5(26). DOI: 10.1186/s13673-015-0044-z.
29. NEMA-CT image database, 2012 [Online]. Available: ftp://medical.nema.org/medical/Dicom/Multiframe/.
30. VIA/I-ELCAP CT Lung Image Dataset. Available [online].
31. Marcus, D., S., Wang, T., H., Parker, J., Csernansky, J., G., Morris, J., C., Open Access Series of Imaging Studies (OASIS): Cross-sectional MRI data in young, middle aged, nondemented, and demented older adults. J. Cogn. Neurosci. 2007; 19(9): pp.1498-1507.