
IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS, VOL. 8, NO. 5, OCTOBER 2014

A Method for Automatic Segmentation of Nuclei in Phase-Contrast Images Based on Intensity, Convexity and Texture

M. Ali Akber Dewan, M. Omair Ahmad, Fellow, IEEE, and M. N. S. Swamy, Fellow, IEEE

Abstract—This paper presents a method for automatic segmentation of nuclei in phase-contrast images using the intensity, convexity and texture of the nuclei. The proposed method consists of three main stages: preprocessing, h-maxima transformation-based marker-controlled watershed segmentation (h-TMC), and texture analysis. In the preprocessing stage, a top-hat filter is used to increase the contrast and suppress the non-uniform illumination, shading, and other imaging artifacts in the input image. The nuclei segmentation stage consists of a distance transformation, an h-maxima transformation and watershed segmentation. These transformations utilize the intensity information and the convexity property of the nucleus for the purpose of detecting a single marker in every nucleus; these markers are then used in the h-TMC watershed algorithm to obtain segments of the nuclei. However, dust particles, imaging artifacts, or prolonged cell cytoplasm may falsely be segmented as nuclei at this stage, and thus may lead to an inaccurate analysis of the cell image. In order to identify and remove these non-nuclei segments, in the third stage a texture analysis is performed that uses six of the Haralick measures along with the AdaBoost algorithm. The novelty of the proposed method is that it introduces a systematic framework that utilizes intensity, convexity, and texture information to achieve a high accuracy for automatic segmentation of nuclei in phase-contrast images. Extensive experiments are performed demonstrating the superior performance ( ; ; validation based on manually-labeled nuclei) of the proposed method.

Index Terms—AdaBoost algorithm, Haralick features, nuclei clustering, phase-contrast image, segmentation of nuclei.

I. INTRODUCTION

In cell biology research, analysis of microscopic images of cells has become essential both for a fundamental understanding of the biology involved and for finding effective cures for various diseases such as cancer, lack of fertility, and

Manuscript received April 05, 2013; revised August 12, 2013 and October 11, 2013; accepted November 24, 2013. Date of publication March 11, 2014; date of current version November 06, 2014. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada and in part by the Regroupement Stratégique en Microélectronique du Québec. This paper was recommended by Associate Editor K. Chakrabarty. The authors are with the Department of Electrical and Computer Engineering, Center for Signal Processing and Communications, Concordia University, Montreal, QC H3G 1M8, Canada (e-mail: [email protected]; [email protected], [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TBCAS.2013.2294184

inflammation [1], [2]. Segmentation of nuclei is a process that involves separating the individual nuclei from the background. This is the first step commonly employed in developing various tools for automatic cell-image analyses such as counting cells, quantifying molecular markers (antigens) of interest in healthy and pathologic specimens, or quantifying aspects associated with normal/diseased tissue architecture [3], [4]. Although such images can be analyzed manually, manual analysis is time consuming, exhausting, and prone to human error, requiring frequent repetitions to validate the results [5], [6]. These factors have motivated the development of automatic cell analysis tools that identify each individual cell and extract the relevant cell characteristics. In recent years, some automated methods have been proposed, and newer methods continue to be investigated.

Phase-contrast images are widely used in live cell microscopy, specifically when high magnification is required and the specimen is colorless or the details are so fine that the color does not show up well [7]. Another reason for their wide use is that phase-contrast microscopy is an ideal non-perturbing imaging technology for studying long-term cell behavior, since it avoids the harmful side effects of fluorescent excitation, which may become toxic to cells over time [8], [9]. Though phase-contrast imaging is a widely used technology, it results in poor image quality due to uneven lighting conditions and the short exposure times used to minimize cell death [10]. This causes difficulty in the edge detection or intensity thresholding used for segmentation. The presence of obfuscating and uninteresting changes due to illumination variations and other imaging artifacts also increases the complexity of automatic segmentation. The problem becomes more acute when the nuclei overlap or come in close contact and touch one another in a cluster [5].
In order to facilitate the automated analysis of cell images, it is essential to achieve high accuracy for segmentation of the nuclei. In this paper, we present a framework for the segmentation of nuclei, taking into account the aforementioned problems associated with phase-contrast images. The proposed framework relies on the following three observations regarding the characteristics of a nucleus. First, a nucleus in a phase-contrast image normally appears as a dark region surrounded by a bright halo artifact [10], [11]. Second, in a two-dimensional microscopic image, the nucleus is nearly convex [12], [13]. Third, the texture of a nucleus is different from that associated with dirt, dust or other imaging artifacts [14]–[16] in a cell image. Based on

1932-4545 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


the first two observations, the intensity information and the convexity of the nucleus are utilized for segmentation, wherein a group of algorithms, namely, a top-hat filter, distance transformation, h-maxima transformation and marker-controlled watershed segmentation, are used. Based on the third observation, a technique that makes use of the texture of the nuclei is developed to validate the segmentation thus carried out; the technique uses six texture measures that are based on the Gray-Level Co-occurrence Matrix (GLCM) [17] and the AdaBoost algorithm [18]. In our previous work [19], a method for tracking nuclei was proposed, where the nucleus was detected using only the intensity information. In this paper, a method for nucleus segmentation is proposed, where intensity, convexity, and texture are used to improve the segmentation accuracy.

The paper is organized as follows. Section II contains a very brief review of the various segmentation methods. Section III describes each of the building blocks of the proposed method. Section IV gives the performance of the proposed method and compares it with those of the other methods. Section V concludes the paper.

II. PREVIOUS WORK

A number of research efforts on segmentation have been reported in the last few years. A detailed survey of the existing methods can be found in [1], [3], [5], and [20]; these methods can be classified mainly into two categories: region-based and contour-based. Region-based methods use the intensity information of the nucleus and its surrounding to discriminate nuclei for the purpose of segmentation. Image thresholding [21]–[23], watershed algorithms [12], [13], [24], [25] and graph-cut methods [2], [26]–[28] belong to this category. Image thresholding selects an intensity threshold from the image histogram. For the segmentation of nuclei, a two-stage thresholding method is proposed in [21], an iterative one in [22] and an entropy-based one in [23].
These methods can be used in situations where the nuclei are well separated and have a good contrast with the surrounding. However, in a phase-contrast image, the contrast between the nuclei and the surrounding is very poor. Often nuclei remain in a cluster, and the boundaries between the nuclei in the cluster are almost invisible. In such situations, the thresholding approaches are not very effective in separating the nuclei from one another or even from the background. On the other hand, the methods that use the watershed algorithm are popular in view of the fact that they are able to segment touching nuclei as long as a separate initializing marker for each nucleus can be found. However, a major drawback of the watershed algorithm is the over-segmentation error due to the presence of multiple markers resulting from a poor or inadequate initialization [5], [25]. To solve this problem, two typical approaches are used: fragment merging and marker control [25]. In the fragment-merging approaches, heuristic rules on the connectivity, shape, and size of the fragments are used [12], [13]. However, the segmentation of nuclei eventually fails if the preset values used in the heuristics do not comply with the current input image. In the marker-controlled approach, markers are selected either manually or by using radial voting [29], the Hough transform [30], or the h-maxima (h-minima) transformation [19], [31]. Radial voting requires edge extraction, which is based on


gradient thresholding and a careful choice of several parameters; these may not be feasible in the context of automated segmentation of nuclei. The Hough transform is practical only for nearly circular nuclei and also requires excessive computation. The h-maxima (h-minima) transformation is found to be efficient, but is overly sensitive to image texture, and may result in multiple markers for a single nucleus in the input image. In [2], [26], and [27], nuclei segmentation is implemented via the graph-cut framework, which cuts a weighted graph into sub-graphs according to morphological features of the nuclei. A recent method [28] incorporates temporal information along with a max-flow/min-cut algorithm to improve the segmentation accuracy. These methods perform nuclei segmentation based on the assumption that the intensity gradients are small inside a nucleus while they are large around the nucleus boundary, which is not always valid. Thus, these methods have a "leaking" problem when the nucleus has a weak boundary or is grouped together with other nuclei having a similar intensity [23]. Moreover, these methods require manual interaction to achieve a high accuracy. The contour-based category includes methods that rely on gradient/edge information and deformable models. Edge detection and morphology tools have been explored in [11] and [32] to segment the nuclei in cell images. Another method [33] uses edge detection and an edge optimization technique to find an optimal threshold for nuclei segmentation. However, the edge detection and morphological operations may fail if the contrast between the nuclei and the surrounding is low. Exploiting the halo artifact, a Laplacian-of-Gaussian (LoG) filter has been used to detect nuclei as blobs in phase-contrast images [34]. However, applying the LoG filter to phase-contrast images containing nuclei of different sizes, shapes and orientations does not generate satisfactory blob detection results.
Different LoG kernels (i.e., different variances in the LoG) may detect different numbers of cells for the same image.

The deformable models include active contours [35], [36] and level set methods [20], [37], [38]. These methods find the boundaries of the object of interest by evolving contours or surfaces guided by internal and external forces. For segmentation of nuclei, they start from an initial contour, and eventually a final nucleus contour is adopted based on the presence of the nucleus. The main advantage of these methods is that they can represent the nucleus morphology with a high accuracy. Compared to active contour methods, level set-based methods have become increasingly popular, since they neither require any explicit parameterization nor suffer from any topological constraints. However, the major limitations of these models are their inability to resolve multiple overlapping objects, which are often segmented as a single object, and their sensitivity to initialization. Moreover, to achieve a satisfactory solution, the initial guess should be sufficiently close to the boundaries of interest or the nucleus, and this may not be possible in all cases [5].

This paper addresses the challenges in automatic segmentation of nuclei in phase-contrast images. To increase the contrast and suppress the non-uniform illumination, shading, and other imaging artifacts in a phase-contrast image, we employ a top-hat filter in the preprocessing stage. We use the intensity information of the nucleus from the preprocessed image and convexity information from a distance transformation image to increase the accuracy of the marker




Fig. 1. Flow diagram of the proposed method.

detection for the marker-controlled watershed. We also introduce a Haralick feature-based texture analysis technique to filter out the non-nuclei regions from the segmentation result at the final step. These steps increase the overall accuracy of the nuclei segmentation.

III. METHODOLOGY

In this section, the framework for the proposed segmentation method is described. As mentioned earlier, we use three useful features of a nucleus, namely, intensity, convexity and texture. The flow diagram of the proposed method is shown in Fig. 1, which consists of three stages: preprocessing, h-maxima transformation-based marker-controlled watershed segmentation (h-TMC), and texture analysis. The details of the framework are given in the following subsections.

A. Preprocessing Stage

In a phase-contrast image, shading artifacts and undesirable noisy peaks appear due to non-uniform illumination, camera sensitivity, or even dirt or dust on the lens surface of the image acquisition system. Moreover, the contrast of the nuclei with respect to the background is poor. Thus, our objective in this stage is to increase the contrast between the nuclei and the surroundings, suppress noisy artifacts, and solidify the nucleus region (a monotonically decreasing brightness pattern from the center of the nucleus region to its surroundings) in the phase-contrast image. These steps help to separate the nuclei from their surroundings. To achieve the above goals, we use a top-hat filter [39] in the preprocessing stage:

g(x, y) = f̄(x, y) − (f̄ ∘ S)(x, y)    (1)

where the small circle "∘" refers to the morphological opening operation, f is the input image, f̄ is the inverted image of f, g is the filtered image, and S is a non-flat disk-shaped structuring element of radius r, the parameter r being suitably chosen. The shape of the structuring element is defined as

S = {(x, y) : x² + y² ≤ r²}    (2)

and the values for the different pixel locations of the structuring element are defined as

S(x, y) = b + √(r² − (x² + y²))    (3)

where b is the minimum value assigned on the boundary and S(x, y) is the value assigned at location (x, y) for the structuring element S.

In the top-hat filtering process, the input image is first inverted so that the cell interior becomes bright compared to its surroundings, and then a morphological opening operation is performed. The erosion part of this opening operation removes the small peaks, and the dilation part that follows solidifies the bright cell-center regions. After performing the opening, the processed image is subtracted from the inverted unprocessed image, which corrects the non-uniform illumination as well as the shading artifact in the input image. It also increases the dynamic range of the intensity between the cell interior and the background, thus facilitating a wider range of thresholds to be set for the nuclei segmentation. There are other methods, such as [40] and [41], for contrast enhancement or for removing noisy artifacts from phase-contrast images, but these methods do not pay attention to highlighting the nucleus structures as the top-hat filtering process does. In view of these issues, we choose a top-hat filter for the preprocessing stage.

After the above operation, a thresholding operation has to be performed on g. Otsu's thresholding [42] is a popular technique for image segmentation, and is based on the assumption that the gray-level histogram of an ideal image is a mixture of Gaussian densities. On the basis of the theory of formation of an ideal image [43], Pal and Pal [44] have shown that the gray-level distribution within the object and the background can be more closely approximated by a Poisson distribution than by a normal distribution. Based on this assumption, we choose the Poisson distribution-based minimum error thresholding technique [45] for the proposed framework. It should be noted that we tested both the Otsu and the Poisson distribution-based thresholding techniques with our framework, and the two methods yielded almost similar results; thus, these thresholding techniques are interchangeable in our framework. Using the Poisson distribution-based minimum error thresholding, the optimal threshold t* is computed as follows:

t* = arg min_t { μ − P₀(t) ln P₀(t) − P₁(t) ln P₁(t) − P₀(t) μ₀(t) ln μ₀(t) − P₁(t) μ₁(t) ln μ₁(t) }    (4)

where μ is the mean intensity of g, P₀(t) and P₁(t) are a priori probabilities, and μ₀(t) and μ₁(t) are the means of the nucleus and non-nucleus regions, respectively. The quantities P₀(t), P₁(t), μ₀(t) and μ₁(t) are computed as follows:

P₀(t) = Σ_{z=0..t} h(z),   P₁(t) = Σ_{z=t+1..L−1} h(z)    (5)

μ₀(t) = (1/P₀(t)) Σ_{z=0..t} z·h(z),   μ₁(t) = (1/P₁(t)) Σ_{z=t+1..L−1} z·h(z)    (6)

where z is the intensity of a pixel in the range [0, L−1] and h(z) is the normalized histogram of the intensity values of the nuclei and non-nuclei regions.



Fig. 2. Results of the preprocessing stage. (a) Input image. (b) Illumination-corrected image. (c) 3D view of the intensity profile of image (a). (d) 3D view of the intensity profile of image (b). (e) Thresholded image of (a). (f) Thresholded image of (b).

The intensity values of the histogram follow a mixture of Poisson distributions. Fig. 2 illustrates the results obtained after the correction for illumination and the detection of the nuclei region. Fig. 2(a) and (b) show the input image and the illumination-corrected image, respectively. Fig. 2(c) shows the 3D view of the intensity profile of the input image. The gradient of the intensity profile visible in Fig. 2(c) is due to the shading artifacts of the image acquisition system. It is also seen that the difference in the intensities between the nuclei and the surrounding is small. In addition, the presence of spurious small noisy peaks can be observed in this figure. These make it difficult to set a global threshold for separating the intensities of the nuclei from the surrounding. Fig. 2(d) shows the 3D view of the intensity profile of the image in Fig. 2(b), where it is clearly seen that the top-hat filter removes the shading artifact, reduces the noise, and also increases the contrast between the nuclei and the surrounding. The results after applying the thresholding method to Fig. 2(a) and (b) are shown in Fig. 2(e) and (f), respectively. It is to be mentioned that, if the top-hat filter is not used to correct the non-uniform illumination and remove the noisy peaks in the preprocessing stage, the thresholding approach may fail to separate the intensities of the nuclei from the surrounding, as seen by comparing Fig. 2(e) and (f).
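The preprocessing stage described above (top-hat filtering followed by histogram thresholding) can be sketched in plain NumPy. This is an illustrative sketch, not the authors' code: it uses a flat disk structuring element rather than the paper's non-flat one, and the threshold criterion is derived directly from the two-component Poisson-mixture log-likelihood, so it may differ from the exact criterion of [45] by constant terms.

```python
import numpy as np

def disk(r):
    """Flat disk-shaped structuring element of radius r (boolean mask)."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return x * x + y * y <= r * r

def erode(img, se):
    """Grayscale erosion: minimum over the structuring-element neighborhood."""
    r = se.shape[0] // 2
    p = np.pad(img, r, mode="edge")
    out = np.full(img.shape, np.inf)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if se[dy + r, dx + r]:
                view = p[r + dy:r + dy + img.shape[0], r + dx:r + dx + img.shape[1]]
                out = np.minimum(out, view)
    return out

def dilate(img, se):
    """Grayscale dilation by duality with erosion."""
    return -erode(-img, se)

def white_tophat(inverted, radius):
    """Eq. (1)-style top-hat: subtract the opening from the inverted image."""
    se = disk(radius)
    return inverted - dilate(erode(inverted, se), se)

def poisson_min_error_threshold(img, levels=256):
    """Threshold minimizing a two-component Poisson-mixture error criterion."""
    h, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    h = h / h.sum()                     # normalized histogram h(z)
    z = np.arange(levels)
    mu = (z * h).sum()                  # overall mean intensity
    best_t, best_j = 1, np.inf
    for t in range(1, levels - 1):
        p0, p1 = h[:t + 1].sum(), h[t + 1:].sum()
        if min(p0, p1) <= 0:
            continue
        mu0 = (z[:t + 1] * h[:t + 1]).sum() / p0
        mu1 = (z[t + 1:] * h[t + 1:]).sum() / p1
        if min(mu0, mu1) <= 0:
            continue
        j = (mu - p0 * np.log(p0) - p1 * np.log(p1)
             - p0 * mu0 * np.log(mu0) - p1 * mu1 * np.log(mu1))
        if j < best_j:
            best_j, best_t = j, t
    return best_t
```

The hand-rolled erosion/dilation keeps the sketch self-contained; in practice a library routine for grayscale morphology would replace it.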


B. h-TMC Watershed Segmentation

Although the nuclei are separated from the background by the thresholding method, many nuclei may still touch each other in clusters. To separate such nuclei, we first use the intensity information and the convexity of the nucleus to identify a single marker for each nucleus, and then apply a watershed algorithm. In the following, we describe the methods for marker extraction and nuclei segmentation.

Marker Extraction: In general, nuclei are of elliptical shape. If we propagate the distance inside a nucleus region, the center of the nucleus is assigned the highest distance value. On the other hand, in an illumination-corrected image, the center of a nucleus has the highest brightness. Thus, with a weighted combination of the above two images, the center of a nucleus should be assigned the highest value, which in turn helps the h-maxima transformation to detect the nucleus with a high accuracy. To compute the distance image, a distance transformation is applied to the thresholded image, where we use the Chamfer 5/7 distance [46]. The reason for this choice is that the approximation of the Euclidean distance by the Chamfer distance is substantially faster to compute than that obtained by any other method, as pointed out in [47]. Moreover, this distance calculation is efficient in terms of memory requirement, and its relative error in approximating the Euclidean distance is very low [48]. In this operation, for every cluster in the thresholded image, the boundary pixels are initialized with zero and the non-boundary pixels with infinity (a very high value) in the pre-distance image D. The distance transformation is then performed using two passes: forward and backward. The forward pass modifies the distance values in D from left to right and top to bottom as follows:

D(x, y) ← min{ D(x, y), D(x−1, y) + 5, D(x, y−1) + 5, D(x−1, y−1) + 7, D(x+1, y−1) + 7 }    (7)

The backward pass modifies the values from right to left and bottom to top as follows:

D(x, y) ← min{ D(x, y), D(x+1, y) + 5, D(x, y+1) + 5, D(x+1, y+1) + 7, D(x−1, y+1) + 7 }    (8)

Finally, the values of D are normalized by dividing them by 5, so that D(x, y) represents the approximate distance of the nearest boundary pixel from the position (x, y). After completing the distance transformation, we normalize D and the illumination-corrected image Ic to the range [0, 1] and then take the weighted summation of the normalized data as follows:

T(x, y) = α D̃(x, y) + (1 − α) Ĩc(x, y)    (9)

where D̃ and Ĩc are the normalized distance and illumination-corrected images, T is the transformed image after summation, and α is a weighting parameter in the range [0, 1]. The parameter α controls the relative importance of D̃ and Ĩc in generating T. We weight the two images unequally, since this helps in detecting a smaller number of false markers after the h-maxima transformation. We have empirically set α after testing a number of



Fig. 4. Marker detection in nuclei. (a) An enlarged view of the portion of the thresholded image contained within the rectangular box of Fig. 2(f). (b) Image obtained after the distance transformation of the image of Fig. 4(a). (c) Image obtained after the weighted combination of the distance image of Fig. 4(b) and the illumination-corrected image corresponding to the region within the rectangular box of Fig. 2(b). (d) Detected markers in the image of Fig. 4(a) obtained after the h-maxima transformation. (e) Locations of the markers in the input image corresponding to the nuclei within the rectangular box in the image of Fig. 2(a). (f) Locations of the markers in the input image of Fig. 2(a). (Note that a false marker has occurred due to noise.)

Fig. 3. An example illustrating the robustness of the distance and illumination-corrected images in sharpening the peaks of the local maxima. (a) An input image. (b) Illumination-corrected image of (a). (c) Thresholded image of (a). (d) Image obtained after the distance transformation of (c). (e) Enlarged view of (b) with white stripes at row 23. (f) Intensity profile of row 23 in the image of (e). (g) Image obtained after combining the images of (b) and (d). (h) Intensity profile of row 23 in the image of (g).

values on a training dataset. After normalization, the values of T are re-scaled to the range [0, 255]. It should be noted that the weighted sum of the distance image and the illumination-corrected image increases the possibility of selecting a single marker in every nucleus. Fig. 3 shows an example of how the combination of the illumination-corrected image and the distance image helps in generating a single maximum in each of the nuclei. Fig. 3(a)–(d) show the input image, the illumination-corrected image, the thresholded image, and the distance image, respectively. Fig. 3(e) and (f) show the illumination-corrected image with white stripes at row 23 and its corresponding intensity profile, respectively, whereas Fig. 3(g) and (h) show the transformed image after combining (b) and (d) with white stripes at row 23 and its corresponding intensity profile, respectively. It is seen from the intensity profiles that the distance image contributes to sharpening the gradient as well as the peaks of the local maxima in the illumination-corrected image, which in turn helps in generating a single local maximum in every nucleus. We use T for the purpose of marker detection, where we employ the h-maxima transformation [49]. The h-maxima transformation suppresses all the regional peaks whose heights relative to the surrounding values are less than h; the surviving maxima are taken as markers. The choice of the parameter h is not critical, since a wide range of values for h yields correct results on the filtered image. We select h using the contrast between the nuclei and the background with the following normalization function:

h = β (μn − μb)    (10)

where μn and μb are the average intensities of the nuclei and the background in T, respectively, and β is a control parameter, whose value varies between 0 and 1 to adjust h. In our experiment, we set β empirically. An example of marker detection is illustrated in Fig. 4. Fig. 4(a) shows an enlarged view of a portion of the thresholded image of Fig. 2(f), where a cluster exists. Fig. 4(b) shows the image obtained after the distance transformation of the image of Fig. 4(a). Fig. 4(c) shows the image obtained after the weighted combination of the distance image of Fig. 4(b) and the illumination-corrected image corresponding to the region within the rectangular box of Fig. 2(b), where a single maximum in every nucleus gets emphasized. Fig. 4(d) shows the detected markers in the image of Fig. 4(a) obtained after the h-maxima transformation. Fig. 4(e) shows the locations of the markers in the input image corresponding to the nuclei within the rectangular box in the image of Fig. 2(a). Fig. 4(f) shows the result of the marker detection in Fig. 2(a). The detected markers are then used with a watershed algorithm for the segmentation of the nuclei.

Watershed Segmentation of Nuclei: The watershed algorithm, when applied to the gradient of an input image, splits the image into areas known as catchment basins based on the topography of the image [50]. In view of its accuracy and efficiency, we use a marker-controlled watershed algorithm in our proposed method. The classical form of the watershed algorithm is based on flooding simulation, where the flooding starts from the local minima [51]; eventually, the local minima (black regions) and maxima (solid lines) of the gradient data yield the catchment basins and watershed lines (dams), respectively. However, this kind of flooding process may lead to over-segmentation due to noise and other local irregularities of the gradient data [50]. In our method, these minima are replaced by markers that have already been detected in the previous step.
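The marker-extraction steps above (chamfer distance transform, weighted combination, h-maxima transformation) can be sketched as follows. The function names are our own; distances here are measured to the nearest background pixel (approximating the paper's zero-initialized cluster boundary), and the h-maxima transform is implemented via grayscale morphological reconstruction, a standard construction rather than the paper's own code.

```python
import numpy as np

def chamfer57(mask):
    """Two-pass chamfer 5/7 distance transform in the spirit of Eqs. (7)-(8)."""
    H, W = mask.shape
    d = np.where(mask, 10 ** 9, 0).astype(np.int64)
    for y in range(H):                       # forward: left-right, top-bottom
        for x in range(W):
            if mask[y, x]:
                best = d[y, x]
                if x > 0:               best = min(best, d[y, x - 1] + 5)
                if y > 0:               best = min(best, d[y - 1, x] + 5)
                if y > 0 and x > 0:     best = min(best, d[y - 1, x - 1] + 7)
                if y > 0 and x < W - 1: best = min(best, d[y - 1, x + 1] + 7)
                d[y, x] = best
    for y in range(H - 1, -1, -1):           # backward: right-left, bottom-top
        for x in range(W - 1, -1, -1):
            if mask[y, x]:
                best = d[y, x]
                if x < W - 1:               best = min(best, d[y, x + 1] + 5)
                if y < H - 1:               best = min(best, d[y + 1, x] + 5)
                if y < H - 1 and x < W - 1: best = min(best, d[y + 1, x + 1] + 7)
                if y < H - 1 and x > 0:     best = min(best, d[y + 1, x - 1] + 7)
                d[y, x] = best
    return d / 5.0                           # normalize by 5

def dilate3(img):
    """3x3 max filter with edge padding."""
    p = np.pad(img, 1, mode="edge")
    views = [p[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.max(views, axis=0)

def reconstruct(seed, ceiling):
    """Grayscale morphological reconstruction of seed under ceiling."""
    cur = np.minimum(seed, ceiling)
    while True:
        nxt = np.minimum(dilate3(cur), ceiling)
        if np.array_equal(nxt, cur):
            return cur
        cur = nxt

def h_maxima_markers(f, h, eps=1e-6):
    """Regional maxima of the h-maxima transform of f (boolean marker image)."""
    hmax = reconstruct(f - h, f)             # suppress peaks shallower than h
    return (hmax - reconstruct(hmax - eps, hmax)) > eps / 2

def combine(D, Ic, alpha):
    """Eq. (9)-style weighted sum of the normalized distance/intensity images."""
    norm = lambda a: (a - a.min()) / (a.max() - a.min() + 1e-12)
    return alpha * norm(D) + (1 - alpha) * norm(Ic)
```

With a suitable h, shallow noise peaks are flattened into the background and a single marker survives per nucleus, which is exactly the property the watershed step relies on.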
In this case, the flooding starts from the detected markers instead of from the local minima. In the context of our work, we refer



Fig. 6. Examples of some nucleus and non-nucleus objects in phase-contrast images. (a)–(d) Textures of some nucleus objects. (e)–(h) Textures of some non-nucleus objects (imaging artifacts, prolonged cell cytoplasm, or noise).

Fig. 5. h-TMC watershed segmentation of nuclei. (a) A side view of the topography of an image showing the markers, the watershed lines, and the catchment basins. (b) The top view of the topography. (c) Segmented nuclei in the original image of Fig. 2(a); a false segmentation has occurred and is shown in a circle. (d) An enlarged view of the false segmentation of Fig. 5(c).

to it as the h-maxima transformation-based marker-controlled (h-TMC) watershed algorithm in order to distinguish it from the classical one. Since our method emphasizes the detection of a single marker in every nucleus, there is less probability of over-segmentation when the watershed algorithm is applied. However, due to dust or dirt particles, prolonged cell cytoplasm, or imaging artifacts (all of which may be considered as noise), some false markers may be detected as mentioned earlier, which may lead to a false segmentation. An illustration of the h-TMC watershed segmentation is shown in Fig. 5. Fig. 5(a) shows a side view of the topography of an image with two downhill markers along with the watershed lines and the catchment basins. Fig. 5(b) shows the top view of the topography, where the pixels in the catchment basins are associated with the markers. Fig. 5(c) shows the result of the segmentation of the nuclei after the marker extraction and h-TMC watershed segmentation on the original image of Fig. 2(a). It is to be mentioned in passing that, even though a manual count shows that there are 18 nuclei in the input image of Fig. 2(a), 19 nuclei have been detected after the watershed segmentation, as shown in Fig. 5(c). Thus, there is obviously a false segmentation, and this is due to the detection of a false marker in the marker detection step. By observing the images of Fig. 2(a), 4(f), and 5(c), we can manually identify the false segment and the corresponding false marker, and these are shown circled in Fig. 5(c) and 4(f), respectively. An enlarged view of the false segment is shown in Fig. 5(d). In order to identify and remove such false segments, we use the information about the texture of the nuclei, as described in the following subsection.
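The marker-controlled flooding can be illustrated with a simple priority-queue simulation. This is a sketch, not the paper's implementation: it omits explicit dam (watershed-line) construction, so contested pixels are simply assigned to whichever basin reaches them first through the lowest-gradient path.

```python
import heapq
import numpy as np

def marker_watershed(gradient, markers):
    """Marker-controlled watershed by priority flooding.

    gradient: 2-D topography; markers: integer labels, 0 = unlabeled.
    Returns a fully labeled image (no explicit watershed lines).
    """
    H, W = gradient.shape
    labels = markers.copy()
    heap = []
    for y in range(H):
        for x in range(W):
            if labels[y, x] != 0:
                heapq.heappush(heap, (gradient[y, x], y, x))
    while heap:
        _, y, x = heapq.heappop(heap)        # always expand the lowest point
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]
                heapq.heappush(heap, (gradient[ny, nx], ny, nx))
    return labels
```

Because flooding starts only from the supplied markers, the number of basins equals the number of markers, which is why a single correct marker per nucleus prevents over-segmentation.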

C. Texture Analysis

The texture of a cell image carries important characteristics of a nucleus, which can help in discriminating it from a non-nucleus segment in a phase-contrast image. Fig. 6 shows some examples of the texture for a number of nucleus and non-nucleus objects. To distinguish the nucleus segments from the non-nucleus ones, we employ the AdaBoost algorithm [18] together with six Haralick measures of the GLCM [17]. We use Haralick measures as these are known to be efficient for texture analysis [52]. The GLCM is a symmetric matrix C of order G × G, where G is the quantized number of gray levels. It is assumed that the texture information of an image can be represented in the adjacency relationships between specific gray levels. The element cij in C is defined by

cij = P(i, j | d, θ) / Σi Σj P(i, j | d, θ)    (11)

where P(i, j | d, θ) represents the number of occurrences of gray levels i and j at an inter-pixel distance d in a direction θ within a given window of interest of size W × W. The Haralick measures are second-order texture measures derived from the GLCM [17]. Haralick et al. originally proposed 14 texture measures, among which six are considered most useful for texture analysis [52]; these are energy, contrast, correlation, homogeneity, entropy, and dissimilarity, as given below.

Energy = Σi Σj cij²    (12)

Contrast = Σi Σj (i − j)² cij    (13)

Correlation = Σi Σj (i − μi)(j − μj) cij / (σi σj)    (14)

Homogeneity = Σi Σj cij / (1 + |i − j|)    (15)

Entropy = −Σi Σj cij log cij    (16)



Dissimilarity = Σi Σj |i − j| cij    (17)

where

μi = Σi Σj i·cij,   μj = Σi Σj j·cij    (18)

and

σi = [Σi Σj (i − μi)² cij]^(1/2),   σj = [Σi Σj (j − μj)² cij]^(1/2)    (19)

In order to increase the robustness of the texture measures, care must be taken in selecting the parameters d, θ, G, and W while generating the GLCM. With a small window of size W × W, a GLCM of 256 gray levels (i.e., G = 256) will have 65 536 entries. This may result in a sparsely populated matrix C, which in turn may cause instability in the texture measures. Taking the above issues into consideration, we compute the six Haralick measures with suitably reduced values of G and W in our implementation. For every d and W, we calculate the entries of the matrix for θ = 0°, 45°, 90°, and 135°, i.e., taking into consideration the neighboring pixels in the horizontal, right-diagonal, vertical and left-diagonal directions. From each of these four matrices, we compute the texture measures using (12)–(19). We also take the average of the measurements in these four directions as rotation-invariant feature values. As a result, we obtain the complete feature set for the texture analysis in the proposed method. To demonstrate the suitability of the Haralick measures in discriminating the nucleus from the non-nucleus segments, we compute the values of the six rotation-invariant features using 30 nucleus and 30 non-nucleus segments, and the results are shown in Fig. 7. From this figure, we observe two interesting characteristics. First, the feature values for the nucleus segments are easily separable from those for the non-nucleus ones. Second, most of the feature values for the nucleus segments are more stable than those for the non-nucleus ones. These two characteristics allow us to use these six features in a classifier to efficiently discriminate the texture of the nucleus segments from that of the non-nucleus ones. With the Haralick measures, any classification algorithm, such as K-means, a neural network, support vector machines (SVM) or AdaBoost, can be used for the texture analysis in order to separate the nuclei from the non-nuclei segments.
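The GLCM construction and the six Haralick measures can be sketched as follows, assuming the window has already been quantized to integer gray levels in [0, levels); the function names are our own.

```python
import numpy as np

def glcm(win, offset, levels):
    """Normalized symmetric GLCM for the pixel offset (dy, dx)."""
    dy, dx = offset
    C = np.zeros((levels, levels))
    H, W = win.shape
    for y in range(H):
        for x in range(W):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                C[win[y, x], win[ny, nx]] += 1
                C[win[ny, nx], win[y, x]] += 1   # symmetric counting
    return C / C.sum()

def haralick6(C):
    """Energy, contrast, correlation, homogeneity, entropy, dissimilarity."""
    G = C.shape[0]
    i, j = np.indices((G, G))
    mu_i, mu_j = (i * C).sum(), (j * C).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * C).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * C).sum())
    eps = 1e-12                                  # guards log(0) and 0/0
    return {
        "energy": (C ** 2).sum(),
        "contrast": (((i - j) ** 2) * C).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * C).sum() / max(sd_i * sd_j, eps),
        "homogeneity": (C / (1.0 + np.abs(i - j))).sum(),
        "entropy": -(C * np.log(C + eps)).sum(),
        "dissimilarity": (np.abs(i - j) * C).sum(),
    }
```

Averaging `haralick6` over the four offsets (0, 1), (-1, 1), (-1, 0) and (-1, -1) corresponds to the paper's four directions and yields the rotation-invariant features.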
However, AdaBoost automatically selects a small set of the most discriminant features from a large feature set [53], while simultaneously performing classification with high accuracy and low computation time [54]. In the other methods, the discriminant features must be selected either manually or by incorporating a separate feature-selection step before classification. The AdaBoost algorithm also has a good generalization capability [18] and has been found suitable for a wide range of computer-vision applications involving detection [54]. Hence,

Fig. 7. Rotation-invariant features for nucleus and non-nucleus segments. (a) Energy. (b) Contrast. (c) Correlation. (d) Homogeneity. (e) Entropy. (f) Dissimilarity.

we use the AdaBoost algorithm [18] along with the Haralick features for the texture analysis. Through a training process, the AdaBoost algorithm selects the discriminant features for the weak classifiers, which are then combined to construct a strong classifier. As training samples, 600 nucleus and 300 non-nucleus regions are used, which are manually segmented from phase-contrast images and scaled to a size of 32 × 32 pixels. An image window containing a nucleus region is considered a positive sample, whereas one containing a non-nucleus region is considered a negative sample. During training, the algorithm takes all the training samples as input and initially assigns equal weights to them. In round one, it selects a feature for the first weak classifier, namely the feature yielding the minimum classification error, where the classification error is a measure of the goodness of separation between the positive and negative samples. Then, the weights assigned to the samples misclassified by this weak classifier are increased, while those assigned to the correctly classified samples are decreased. This reweighting ensures that, during the next round of the algorithm, the samples misclassified in the previous round are given more importance than those that have already been classified successfully. In round two, the same procedure is repeated to select another feature for the second weak classifier. This procedure is repeated T times to select the T most discriminant features for the weak classifiers, where T is

TABLE I
PARAMETERS AND THEIR VALUES USED IN THE PROPOSED METHOD FOR THIS EXPERIMENT

a priori chosen. Finally, the strong classifier H is constructed using a weighted linear combination of the weak classifiers h_t, given by

H(x) = sign( sum_{t=1}^{T} alpha_t h_t(x) )    (20)

where the alpha_t's are weighting parameters determined on the basis of the classification errors of the corresponding weak classifiers; these parameters control the relative importance of the h_t's in the strong classifier. A detailed description of the AdaBoost algorithm can be found in [18]. In our implementation, we select the 10 most discriminant features (i.e., T = 10) among the 810 features for the texture analysis in the AdaBoost algorithm.

IV. EXPERIMENTAL RESULTS

A number of experiments are carried out on four different datasets of phase-contrast microscopic images. The first three datasets contain time-lapse images of Swiss Webster mouse at embryonic day E10–E12 [24], and the fourth dataset contains cervical cancer cell colonies of the HeLa cell line [55]. The image sequences chosen for this study contain a total of 2910 frames, where the size of each frame is 480 × 640 pixels and the depth of each pixel is 8 bits. As for the working environment, the Multimedia Technology for Educational System [56] and Windows-based Visual C++ are used. The experiments are performed on a machine with an Intel Pentium IV 3.4-GHz processor and 2 GB of RAM. All the parameters of the proposed method and the values used in our experiments are listed in Table I. We now present the segmentation results obtained using the proposed method.

A. Qualitative Analysis

We first demonstrate the effectiveness of the texture features in the segmentation of the nuclei by the proposed method. Fig. 8(a)–(d) show one frame from each of four cell sequences, picked randomly. A manual count shows that there are 39, 15, 42, and 113 nuclei in the frames shown in Fig. 8(a)–(d), respectively. Fig. 8(e)–(h) show the results of marker detection in the corresponding frames after employing the h-maxima transformation.
It is seen from these figures that 44, 17, 45, and 116 markers have been detected in these four frames, out of which 39, 15, 42, and 113 correspond to the nuclei in the frames of Fig. 8(a)–(d), respectively; thus, there are 5, 2, 3, and 3 markers that have been falsely detected. These false markers are due to noise, imaging artifacts, and prolonged cell cytoplasm. The results after the

Fig. 8. Segmentation of nuclei using the proposed method. (a)–(d) Input images from four different cell sequences. (e)–(h) Results of marker detection in the input images. (i)–(l) Segmented nuclei, where some false segmentation has occurred. (m)–(p) Segmented nuclei after texture analysis.

completion of the h-TMC watershed segmentation on the input images are shown in Fig. 8(i)–(l); as expected, there are 44, 17, 45, and 116 segments, which include 5, 2, 3, and 3 false segments, respectively. The results after carrying out the texture analysis on the segments of Fig. 8(i)–(l) are shown in Fig. 8(m)–(p). It is seen that the numbers of segments in Fig. 8(m)–(p) correspond to the manually counted numbers of nuclei in Fig. 8(a)–(d), respectively. By comparing Fig. 8(m)–(p) with Fig. 8(i)–(l), we can identify the false segments, and these are shown circled in Fig. 8(i)–(l); the corresponding false markers in Fig. 8(e)–(h) are also shown circled. We now show the superiority of the proposed method over the classical watershed algorithm [51]. Although the classical watershed algorithm is a popular technique commonly used for image segmentation, it is not so effective in segmenting the nuclei, owing to the low contrast, nuclei clustering, and noise in phase-contrast images. Fig. 9(a) shows an input image containing two clusters of nuclei, and Fig. 9(b) shows the corresponding segmentation result obtained using the classical watershed algorithm. It is seen from Fig. 9(b) that some of the nuclei are under-segmented while others are over-segmented, thus



TABLE II DETECTION ACCURACY OF THE PROPOSED METHOD BEFORE AND AFTER TEXTURE ANALYSIS

Fig. 9. Comparison of segmentation results. (a) Input image. (b) Segmentation using the classical watershed algorithm [51]. (c) Segmentation using the proposed method.

giving wrong segmentation results. Fig. 9(c) shows the segmentation result using the proposed method from which it is seen that the proposed method is very effective.
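To make the marker-detection idea concrete, the following simplified sketch (ours, not the authors' code) finds one seed per nucleus from the peaks of the Euclidean distance transform; it stands in for the h-maxima transformation, and the subsequent watershed flooding step is omitted:

```python
# Simplified stand-in for distance-transform-based marker detection.
import numpy as np
from scipy import ndimage as ndi

def detect_markers(binary, min_distance=3):
    """Return one labeled marker per convex 'hump' of the foreground."""
    dist = ndi.distance_transform_edt(binary)
    # A pixel is a marker candidate if it is the maximum of its
    # (2*min_distance+1)^2 neighborhood and lies inside the foreground.
    peaks = (dist == ndi.maximum_filter(dist, size=2 * min_distance + 1)) & binary
    markers, n = ndi.label(peaks)
    return markers, n

# Toy example: two disjoint disk-shaped "nuclei" yield two markers.
yy, xx = np.mgrid[0:40, 0:40]
blob = (((yy - 12) ** 2 + (xx - 12) ** 2) <= 49) | \
       (((yy - 28) ** 2 + (xx - 28) ** 2) <= 49)
markers, n = detect_markers(blob, min_distance=5)
print(n)  # 2 markers, one per disk
```

In the full method, each such marker would then serve as a minimum for the marker-controlled watershed, so that every nucleus is flooded from exactly one seed.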

TABLE III DETECTION ACCURACY OF THE PROPOSED METHOD WITHOUT THE TOP-HAT FILTER

B. Quantitative Analysis

In this section, a quantitative analysis of the segmentation performance of the proposed method is carried out in terms of the detection accuracy [10] and the overlap accuracy (or Kappa index) [45]. The detection accuracy is measured using the precision P, the recall R, and the F1-measure. The precision is defined as P = TP/(TP + FP), where TP (true positives) is the number of nuclei correctly identified and FP (false positives) is the number of segments incorrectly identified as nuclei. Since P is the ratio of the true detections of the nuclei to the total number of detected nuclei, it can also be considered a measure of the exactness of the detection. The recall is defined as R = TP/(TP + FN), where FN (false negatives) is the number of nuclei not identified by the method. The recall can be considered a measure of completeness, since it is the ratio of the number of true detections of the nuclei to the total number of nuclei actually present in the image. Finally, in order to combine the precision and recall into a single measure of performance, the F1-measure is defined as their harmonic mean:

F1 = 2PR / (P + R)    (21)

The overlap accuracy (OA) and the Kappa index (KI) are two measures of the amount of overlap between the binary image of the segmented nuclei and the ground-truth mask; they are defined as

OA = |S ∩ G| / |S ∪ G|    (22)
KI = 2|S ∩ G| / (|S| + |G|)    (23)

where S is the binary image of the segmented nuclei produced by the method and G is the ground-truth mask. It is assumed in the above equations that a complete overlap of two nuclei does not occur since, in general, a nucleus is located near the center of the cell and may interact or collide with, but not enter, the domain of another nucleus [57]. However, the cytoplasms of two cells may have some overlap. It is to be mentioned that higher values of P, R, F1, OA, and KI for a given method reflect a better accuracy compared to that of the other methods.
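The evaluation measures above can be computed directly from detection counts and binary masks; the following sketch (ours; function and variable names are assumptions) implements the precision, recall, F1-measure, overlap accuracy, and Kappa index:

```python
# Illustrative implementation of the evaluation measures.
import numpy as np

def detection_scores(tp, fp, fn):
    """Precision P, recall R, and their harmonic mean F1."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

def overlap_scores(seg, gt):
    """Jaccard-type overlap accuracy and Dice-type Kappa index for
    boolean masks seg (method output) and gt (ground-truth mask)."""
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return inter / union, 2.0 * inter / (seg.sum() + gt.sum())

# Example counts as in Fig. 8(a): 39 true nuclei, 5 false segments,
# no missed nuclei.
p, r, f1 = detection_scores(tp=39, fp=5, fn=0)
```

With these counts the precision is 39/44 and the recall is 1, so F1 rewards methods that are simultaneously exact and complete.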

Detection Accuracy: We first study the detection accuracy of the proposed method before and after the texture analysis, and the corresponding results are presented in Table II. For this purpose, we conduct experiments using the same four cell sequences as in the previous subsection. We randomly pick 70 frames from each of the embryonic sequences and 10 from the HeLa cell sequence, and manually count the total number of nuclei in each frame to use as the ground truth. The total number of frames and the total number of nuclei for the selected frames from each cell sequence are given in the tables along with the values of P, R, and F1. An examination of the F1-measures in Table II clearly shows a substantial improvement in the detection accuracy of the proposed method after the texture analysis is performed. This is due to the identification and removal of the non-nuclei segments during the texture analysis. In order to study the effect of the top-hat filter on the proposed method, we determine the detection accuracy of the proposed method without the use of the top-hat filter in the preprocessing stage. For this experiment, we use the same datasets and evaluation parameters as in Table II. The results are given in Table III. In the absence of the top-hat filter, the non-uniform illumination is no longer corrected, nor are the shading artifacts and spurious noise of the input image removed; this causes an increase in the detection of false nuclei or a decrease in the detection of true nuclei, thus deteriorating the detection accuracy. This is supported by the values given in Table II (after the texture analysis) and Table III. In order to compare the detection accuracy of the proposed method with that of four other existing methods presented in [12], [13], [28], and [51], the above experiments are repeated with the same datasets using these methods.
Table IV presents the average values of P, R, and F1 for the various methods. It is seen from this table that the values of P, R, and F1 are the highest for the proposed method, thus demonstrating its superiority in terms of the detection accuracy. In order to show that this superior detection accuracy is independent of the choice of the frames in each of the four sequences, we repeat the experiment nine more times, each time



TABLE IV COMPARISON OF THE DETECTION ACCURACY OF THE PROPOSED METHOD WITH SOME OTHER EXISTING METHODS

Fig. 11. Overlap accuracy for the various methods. A: Proposed method. B: Graph-cut [28]. C: Hybrid merging [13]. D: Compactness [12]. E: Classical watershed segmentation [51].

Fig. 10. Comparison of the F1-measure for the various methods. A: Proposed method. B: Graph-cut [28]. C: Hybrid merging [13]. D: Compactness [12]. E: Classical watershed segmentation [51].

randomly picking 70 frames from each of the embryonic sequences and 10 from the HeLa cell sequence. The results are presented in Fig. 10, where the F1-measure is plotted for each of the experiments. This figure shows that the methods in [12], [13], and [28] perform better than that in [51]. For [12] and [13], this is due to the application of the merging and splitting strategy to reduce the over- and under-segmentation errors, and for [28], it is due to the utilization of temporal information in correcting segmentation errors. However, the proposed method gives a significantly better performance than all the other methods. Area Overlap Accuracy: We now study the performance of the proposed method in terms of the overlap accuracy (OA) and the Kappa index (KI). For this purpose, we conduct experiments randomly picking 20 frames from the four cell sequences considered earlier. We manually segment the nuclei in each frame and use the segmented frame as the ground-truth mask. The parameters OA and KI are computed using (22) and (23), respectively, for the methods in [12], [13], [28], and [51] as well as for the proposed method, and the results are presented in Figs. 11 and 12, respectively. It is seen from these figures that the proposed method outperforms those of [12], [13], [28], and [51] in terms of both OA and KI. From the above study, it is clear that the segmentation performance of the proposed method is superior to that of the existing methods with respect to both the detection accuracy and the area overlap accuracy. The reason for such a superior and robust performance can be attributed to the use of the top-hat filter in the

Fig. 12. KI for the various methods. A: Proposed method. B: Graph-cut [28]. C: Hybrid merging [13]. D: Compactness [12]. E: Classical watershed segmentation [51].

preprocessing stage, the h-TMC watershed algorithm, and the Haralick feature-based texture analysis.

V. CONCLUSION

This paper has presented a novel three-stage method, which utilizes the intensity, convexity, and texture of the nucleus for automatic segmentation of the nuclei in a phase-contrast image. The method addresses the challenges posed by the low contrast, non-uniform illumination, shading and imaging artifacts, and noise in the image, as well as those due to prolonged cell cytoplasm and nuclei clustering. In the first stage, a top-hat filter has been used to remove the non-uniform illumination and shading artifacts as well as to increase the contrast in the image. In the second stage, a distance transformation and the h-maxima transformation have been used, which incorporate the information about the intensity and the convexity of the nucleus, with the objective of detecting a



single marker in every nucleus. These markers have then been used as minima in the h-maxima transformation-based marker-controlled watershed algorithm in order to segment the nuclei. The use of the two transformations helps in reducing the over- and under-segmentation of the nuclei. Even though the second stage is quite efficient in segmenting the nuclei, some false segmentation may still occur due to imaging artifacts, prolonged cell cytoplasm, or noise. To overcome this problem of false segmentation, in the third stage, the Haralick-feature-based nuclei texture analysis has been used. Since the texture of the nuclei carries special characteristics that are different from that of the non-nuclei segments, the texture has been effectively used to distinguish the nucleus from the non-nucleus segments. It should be mentioned that if any marker is missed at the initial stage, it cannot be retrieved with the approaches in this paper. However, we can use a low threshold value that detects more markers than the actual number of nuclei in an image and then perform the h-maxima transformation and texture analysis to filter out the non-nuclei segments. Extensive experiments have been carried out to demonstrate the effectiveness of the proposed method in the automatic segmentation of nuclei. Finally, it may be pointed out that the incorporation of the texture information in conjunction with that of the intensity and convexity of the nucleus further improves the accuracy of the segmentation.

REFERENCES

[1] E. Meijering, O. Dzyubachyk, I. Smal, and W. A. v. Cappellen, “Tracking in cell and developmental biology,” Semin. Cell Develop. Biol., vol. 20, no. 8, pp. 894–902, 2009. [2] M. Vergani, M. Carminati, G. Ferrari, E. Landini, C. Caviglia, A. Heiskanen, C. Comminges, K. Zor, D. Sabourin, M. Dufva, M. Dimaki, R. Raiteri, U. Wollenberger, J. Emnéus, and M.
Sampietro, “Multichannel bipotentiostat integrated with a microfluidic platform for electrochemical real-time monitoring of cell cultures,” IEEE Trans. Biomed Circuits Syst., vol. 6, no. 5, pp. 498–507, 2012. [3] Y. A. Kofahi, W. Lassoued, W. Lee, and B. Roys, “Improved automatic detection and segmentation of cell nuclei in histopathology images,” IEEE Trans. Biomed. Eng., vol. 57, no. 4, pp. 841–852, 2010. [4] X. Yue, E. M. Drakakis, M. Lim, A. Radomska, H. Ye, A. Mantalaris, N. Panoskaltsis, and A. Cass, “Real-time multi-channel monitoring system for stem cell culture process,” IEEE Trans. Biomed Circuits Syst., vol. 2, no. 2, pp. 66–77, 2008. [5] P. Quelhas, M. Marcuzzo, A. M. Mendonça, and A. Campilho, “Cell nuclei and cytoplasm joint segmentation using the sliding band filter,” IEEE Trans. Med. Imag., vol. 29, no. 8, pp. 1463–1473, 2010. [6] W. Shitong and W. Min, “A new detection algorithm (NDA) based on fuzzy cellular neural networks for white blood cell detection,” IEEE Trans. Inf. Technol. Biomed., vol. 10, no. 1, pp. 5–10, 2006. [7] W. He, X. Wang, D. Metaxas, R. Mathew, and E. White, “Cell segmentation for division rate estimation in computerized video time-lapse microscopy,” in Proc. Multimodal Biomedical Imaging II, Jan. 2007, vol. 6431, pp. 1–5. [8] I. Ersoy, F. Bunyak, M. A. Mackey, and K. Palaniappan, “Cell segmentation using hessian-based detection and contour evolution with directional derivatives,” in Proc. Int. Conf. Image Processing, 2008. [9] J. Ghaye, G. De Micheli, and S. Carrara, “Quantification of sub-resolution sized targets in cell fluorescent imaging,” in Proc. IEEE Biomedical Circuits and Systems Conf., 2012. [10] O. Debeir, P. V. Ham, R. Kiss, and C. Decaestecker, “Tracking of migrating cells under phase-contrast video microscopy with combined mean-shift processes,” IEEE Trans. Med. Imag., vol. 24, no. 6, pp. 697–711, 2005. [11] K. Li, E. Miller, M. Chen, T. Kanade, L. Weiss, and P. 
Campbell, “Cell population tracking and lineage construction with spatiotemporal context,” Med. Imag. Anal., vol. 12, no. 5, pp. 546–566, 2008.

[12] X. Chen, X. Zhou, and S. T. C. Wong, “Automated segmentation, classification, and tracking of cancer cell nuclei in time-lapse microscopy,” IEEE Trans. Biomed. Eng., vol. 53, no. 4, pp. 762–766, Apr. 2006. [13] X. Zhou, F. Li, J. Yan, and S. T. C. Wong, “A novel cell segmentation method and cell phase identification using Markov model,” IEEE Trans. Inf. Technol. Biomed., vol. 13, no. 2, pp. 152–157, Mar. 2009. [14] A. M. Martins, A. D. D. Neto, A. M. B. Junior, A. Sales, and S. Jane, “Texture based segmentation of cell images using neural networks and mathematical morphology,” in Neural Netw., Jul. 2001. [15] T. Kazmar, M. Smid, M. Fuchs, B. Luber, and J. Mattes, “Learning cellular texture features in microscopic cancer cell images for automated cell-detection,” in Proc. Int. Conf. IEEE Engineering in Medicine and Biology Soc., Buenos Aires, Argentina, 2010. [16] C. Chen, W. Wang, J. A. Ozolek, N. Lages, S. J. Altschuler, L. F. Wu, and G. K. Rohde, “A template matching approach for segmenting microscopy images,” in Proc. IEEE Int. Symp. Biomedical Imaging, Pittsburgh, PA, USA, 2012. [17] R. M. Haralick, K. Shanmuga, and I. Dinstein, “Textural features for image classification,” IEEE Trans. Syst., Man, Cybern., vol. SMC-3, pp. 610–621, Nov. 1973. [18] Y. Freund and R. E. Echapire, “A short introduction to boosting,” J. Jpn. Soc. Artif. Intell., pp. 1–13, Sep. 1999. [19] M. Dewan, M. Ahmad, and M. Swamy, “Tracking biological cells in time-lapse microscopy: An adaptive technique combining motion and topological features,” IEEE Trans. Biomed. Eng., vol. 58, no. 6, pp. 1637–1647, Jan. 2011. [20] D. Padfield, J. Rittscher, N. Thomas, and B. Roysam, “Spatio-temporal cell cycle phase analysis using level sets and fast marching methods,” Med. Imag. Anal., vol. 13, pp. 143–155, 2009. [21] F. Cloppet and A. Boucher, “Segmentation of complex nucleus configurations in biological images,” Pattern Recognit. Lett., vol. 31, pp. 755–761, Jan. 2010. [22] H.-S. Wu, J. 
Barba, and J. Gil, “Iterative thresholding for segmentation of cells from noisy images,” J. Microsc., vol. 197, no. 3, pp. 296–304, Mar. 2000. [23] P. R. Gudla, K. Nandy, J. Collins, K. J. Meaburn, T. Misteli, and S. J. Lockett, “A high-throughput system for segmenting nuclei using multiscale techniques,” Cytometry, vol. A, pp. 451–466, Mar. 2008. [24] O. Al-Kofahi, R. J. Radke, S. K. Goderie, Q. Shen, S. Temple, and B. Roysam, “Automated cell lineage construction: A rapid method to analyze clonal development established with murine neural progenitor cells,” Cell Cycle, vol. 5, no. 3, pp. 327–335, 2006. [25] G. G. Lee, H.-H. Lin, M.-R. Tsai, S.-Y. Chou, W.-J. Lee, Y.-H. Liao, C.-K. Sun, and C.-F. Chen, “Automatic cell segmentation and nuclear-to-cytoplasmic ratio analysis for third harmonic generated microscopy medical images,” IEEE Trans. Biomed. Circuits Syst., vol. 7, no. 2, pp. 158–168, 2013. [26] V. Ta, O. Lézoray, A. Elmoataz, and S. Schüpp, “Graph-based tools for microscopic cellular image segmentation,” Pattern Recognit., vol. 42, pp. 1113–1125, 2009. [27] O. Danek, P. Matula, C. Ortiz-de-Solorzano, A. Munoz-Barrutia, M. Maska, and M. Kozubek, “Segmentation of touching cell nuclei using a two-stage graph cut model,” in Lecture Notes on Computer Science. Heidelberg, Germany: Springer-Verlag, 2009. [28] A. Massoudi, A. Sowmya, K. Mele, and D. Semenovich, “Employing temporal information for cell segmentation using max-flow/min-cut in phase-contrast video microscopy,” in Proc. Annu. Int. Conf. IEEE Engineering in Medicine and Biology Soc., Boston, MA, USA, 2011. [29] O. Schmitta and M. Hasseb, “Radial symmetries based decomposition of cell clusters in binary and gray level images,” Pattern Recognit., vol. 41, pp. 1905–1923, Nov. 2008. [30] C. Ortiz-de-Solorzano, E. G. Rodriguez, A. Jones, D. Pinkel, J. W. Gray, D. Sudar, and S. J. Lockett, “Segmentation of confocal microscope images of cell nuclei in thick tissue sections,” J. Microsc., vol. 193, no. 3, pp. 
212–226, Mar. 1999. [31] M. E. Plissiti, C. Nikou, and A. Charchanti, “Automated detection of cell nuclei in Pap smear images using morphological reconstruction and clustering,” IEEE Trans. Inf. Technol. Biomed., vol. 15, no. 2, pp. 233–241, Mar. 2011. [32] D. House, M. Walker, Z. Wu, J. Wong, and M. B., “Tracking of cell populations to understand their spatio-temporal behavior in response to physical stimuli,” in Proc. Workshop Mathematical Modeling in Biomedical Image Analysis, Miami, FL, USA, 2009.


[33] S. Wienert, D. Heim, K. Saeger, A. Stenzinger, M. Beil, and P. Hufnagl, Detection and Segmentation of Cell Nuclei in Virtual Microscopy Images: A Minimummodel Approach, Scientific Reports, 2012, pp. 1–7. [34] K. Smith, A. Carleton, and V. Lepetit, “General constraints for batch multipletarget tracking applied to large-scale video microscopy,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Anchorage, AK, USA, 2008. [35] H. Grimm, A. Verkhovsky, A. Mogilner, and J. Meister, “Analysis of actin dynamics at the leading edge of crawling cells: Implications for the shape of keratocyte lamellipodia,” Eur. Biophys. J., vol. 32, pp. 563–577, 2003. [36] F. Li, X. Zhou, H. Zhao, and S. Wong, “Cell segmentation using front vector flow guided active contours,” in Proc. Int. Conf. Medical Image Computing and Computer-Assisted Intervention, 2009. [37] G. Xiong, X. Zhou, L. Ji, P. Bradley, N. Perrimon, and S. Wong, “Segmentation of drosophila RNAi fluorescence images using level sets,” in Proc. IEEE Int. Conf. Image Processing, Oct. 2006, pp. 73–76. [38] M. Ambuhl, C. Brepsant, J. Meister, A. Verkhovsky, and I. Sbalzarini, “Highresolution cell outline segmentation and tracking from phase-contrast microscopy images,” J. Microsc., vol. 245, pp. 161–170, 2012. [39] F. Meyer, “Iterative image transformation for an automatic screening of cervical cancer,” Histochem. Cytochem., vol. 27, no. 1, pp. 128–135, 1979. [40] S. J. R. Pizer, J. Ericksen, B. Yankaskas, and K. Muller, “Contrast-limited adaptive histogram equalization: Speed and effectiveness,” in Proc. Conf. Visualization in Biomedical Computing, Atlanta, GA, USA, 1990. [41] P. M. J. Perona, “Scale-space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, pp. 629–639, Jul. 1990. [42] N. Otsu, “A threshold selection method from gray-level histogram,” IEEE Trans. Syst., Man, Cybern., vol. 9, pp. 62–66, 1979. [43] J. Dainty and R. Show, Image Science. 
New York, NY, USA: Academic, 1974. [44] N. R. Pal and S. Pal, “Poisson distribution and object extraction,” Int. J. Pattern Recognition Artif. Intell., vol. 5, pp. 439–483, 1991. [45] J. Fan, “Notes on Poisson distribution-based minimum error thresholding,” Pattern Recognit. Lett., pp. 425–431, 1998. [46] G. Borgefors, “Hierarchical chamfer matching: A parametric edge matching algorithm,” IEEE Trans. Pattern Anal. Mach. Intell, vol. 10, no. 6, pp. 849–865, Nov. 1988. [47] C. Maurer, R. Qi, and V. Raghavan, “A linear time algorithm for computing exact Euclidean distance transforms of binary images in arbitrary dimensions,” IEEE Trans. Pattern Anal. Mach. Intell, vol. 25, no. 2, pp. 265–270, 2003. [48] E. Nacken, “Chamfer metrics in mathematical morphology,” J. Math. Imag. Vis., vol. 4, pp. 233–253, 1994. [49] L. Vincent, “Morphological gray scale reconstruction in image analysis: Applications and efficient algorithms,” IEEE Trans. Image Process., vol. 2, no. 2, pp. 176–201, Apr. 1993. [50] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Englewoood Cliffs, NJ, USA: Prentice-Hall, 2008. [51] L. Vincent and P. Soille, “Watersheds in digital spaces: An efficient algorithm based on immersion simulations,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 6, pp. 583–598, Jun. 1991. [52] P. Maillard, “Comparing texture analysis methods through classification,” J. Amer. Soc. Photogram. Remote Sens., vol. 69, no. 4, pp. 357–367, Apr. 2003. [53] L. Furst, S. Fidler, and A. Leonardis, “Selecting features for object detection using an AdaBoost-compatible evaluation function,” Pattern Recognit. Lett., vol. 29, pp. 1603–1612, 2008. [54] C. Kyrkou and T. Theocharides, “A flexible parallel hardware architecture for AdaBoost-based real-time object detection,” IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 19, no. 6, pp. 1034–1047, Jun. 2011. [55] C. Arteta, V. Lempitsky, J. A. Noble, and A. 
Zisserman, “Learning to detect cells using non-overlapping extremal regions,” in Proc. Medical Image Computing and Computer-Assisted Intervention, Nice, France, Oct. 2012. [56] J. Lee, Y. K. Cho, H. Heo, and O. S. Chae, “MTES: Visual programming for teaching and research in image processing,” in Lecture Notes in Computer Science. Heidelberg, Germany: Springer-Verlag, 2005.


[57] J. S. Suri, K. Liu, S. Singh, S. N. Laxminarayan, X. Zeng, and L. Reden, “Shape recovery algorithms using level sets in 2-D/3-D medical imagery: A state-of-the-art review,” IEEE Trans. Inf. Technol. Biomed., vol. 6, no. 1, pp. 8–28, Mar. 2002. M. Ali Akber Dewan received the B.Sc. degree in computer science and engineering from Khulna University, Bangladesh, and the Ph.D. degree in computer engineering from Kyung Hee University, Seoul, South Korea, in 2003 and 2009, respectively. From 2003 to 2009, he was a Faculty Member in the Department of Computer Science and Engineering, Chittagong University of Engineering and Technology, Bangladesh. From 2009 to 2012, he was a Postdoctoral Fellow at Concordia University, Montreal, QC, Canada. He is currently a Postdoctoral Fellow at École de technologie supérieure, University of Quebec, Montreal, QC, Canada. His research interests include image processing and computer vision, machine learning, biometric recognition, motion detection and tracking, and medical image analysis.

M. Omair Ahmad (S’69–M’78–SM’83–F’01) received the B.Eng. degree from Sir George Williams University, Montreal, QC, Canada, and the Ph.D. degree from Concordia University, Montreal, QC, Canada, both in electrical engineering. From 1978 to 1979, he was a Faculty Member at New York University College, Buffalo, NY, USA. In September 1979, he joined the Faculty of Concordia University as an Assistant Professor of Computer Science. He joined the Department of Electrical and Computer Engineering, Concordia University, where he was the Chair with the department from June 2002 to May 2005 and is currently a Professor. He holds the Concordia University Research Chair (Tier I) in Multimedia Signal Processing. He has authored extensively in the area of signal processing and holds four patents. His research interests include the areas of multidimensional filter design, speech, image and video processing, nonlinear signal processing, communication DSP, artificial neural networks, and VLSI circuits for signal processing. He was a Founding Researcher at Micronet from its inception in 1990 as a Canadian Network of Centers of Excellence until its expiration in 2004. Previously, he was an Examiner of the Order of Engineers of Quebec. Dr. Ahmad was an Associate Editor of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS PART I: FUNDAMENTAL THEORY AND APPLICATIONS from June 1999 to December 2001. He was the Local Arrangements Chairman of the 1984 IEEE International Symposium on Circuits and Systems. In 1988, he was a member of the Admission and Advancement Committee of the IEEE. He has served as the Program Co-Chair for the 1995 IEEE International Conference on Neural Networks and Signal Processing, the 2003 IEEE International Conference on Neural Networks and Signal Processing, and the 2004 IEEE International Midwest Symposium on Circuits and Systems. He was a General Co-Chair for the 2008 IEEE International Conference on Neural Networks and Signal Processing. 
He is the Chair of the Montreal Chapter IEEE Circuits and Systems Society. He was a recipient of numerous honors and awards, including the Wighton Fellowship from the Sandford Fleming Foundation, an induction to Provost’s Circle of Distinction for Career Achievements, and the Award of Excellence in Doctoral Supervision from the Faculty of Engineering and Computer Science of Concordia University.

M. N. S. Swamy (S’59–M’62–SM’74–F’80) received the B.Sc. (Hons.) degree in mathematics from the University of Mysore, Mysore, Karnataka, India, in 1954, the Diploma degree in electrical communication engineering from the Indian Institute of Science, Bangalore, India, in 1957, and the M.Sc. and Ph.D. degrees in electrical engineering from the University of Saskatchewan, Saskatoon, SK, Canada, in 1960 and 1963, respectively. In August 2001, he was awarded a Doctor of Science (Honoris Causa) degree in engineering by Ansted University, Penang, Malaysia.



Currently, he is a Research Professor in the Department of Electrical and Computer Engineering at Concordia University, Montreal, QC, Canada, where he served as the Founding Chair of the Department of Electrical Engineering from 1970 to 1977, and Dean of Engineering and Computer Science from 1977 to 1993. During that time, he developed the Faculty into a research-oriented one from what was primarily an undergraduate Faculty. Since July 2001, he holds the Concordia Chair (Tier I) in Signal Processing. He has also taught in the Electrical Engineering Department of the Technical University of Nova Scotia, Halifax, NS, Canada, and the University of Calgary, Calgary, AB, Canada, as well as in the Department of Mathematics at the University of Saskatchewan. He has authored extensively in the areas of number theory, circuits, systems and signal processing, and holds five patents. He is the coauthor of two book chapters and six books. He was a Founding Member of Micronet from its inception in 1999 as a Canadian Network of Centers of Excellence until its expiration in 2004, and also its Coordinator for Concordia University.

Dr. Swamy is a Fellow of a number of professional societies, including the Institute of Electrical Engineers, U.K., the Engineering Institute of Canada, the Institution of Engineers, India, and the Institution of Electronic and Telecommunication Engineers, India. In 2008, Concordia University instituted the M. N. S. Swamy Research Chair in Electrical Engineering as a recognition of his research contributions. He was inducted in 2009 to the Provost’s Circle of Distinction for career achievements. In 2009, he was conferred the title of Honorary Professor at the National Chiao Tung University, Taiwan. He has served the IEEE Circuits and Systems Society in various positions, including President-elect in 2003, President in 2004, Past-president in 2005, Vice President (Publications) from 2001 to 2002, Vice President in 1976, and the Editor-in-Chief of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS PART I: FUNDAMENTAL THEORY AND APPLICATIONS from 1999 to 2001. He was the recipient of many IEEE-CAS Society Awards, including the Education Award in 2000, the Golden Jubilee Medal in 2000, and the 1986 Guillemin-Cauer Best Paper Award.